Lightrun tagging best practices
Designing a consistent and scalable tagging strategy ensures that Lightrun actions reliably target the correct agents across services, environments, and deployments. The following recommendations will help you establish a tagging taxonomy that is robust, predictable, and easy for teams to adopt. For an introduction to how tags work in Lightrun, see Tags Overview.
Core tag categories
When designing a tagging strategy, it helps to think in terms of the four attributes that most meaningfully describe where and how an agent operates. These attributes can be represented as individual tags or combined into compound tags, depending on the scale and needs of your deployment.
Component name
Identifies the microservice or subsystem the agent represents.
Examples: CatalogManager, Billing, CheckoutService.
Version
Indicates the version of the code running. Developers must use the same version in their IDE to debug correctly.
Examples: 1.3, build-8421, v2.0.1.
Deployment environment
Describes where the agent is running.
Examples: Dev, QA, Staging, Production.
Audience / ownership
Useful when deployments differ by customer, team, or business domain.
Examples: Walmart, Team-Fulfillment, RetailTenantA.
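As a minimal sketch, each of these four categories can be represented as an individual tag in the agent's registration metadata (using the format shown under Example metadata configuration below; all names and values here are illustrative):

```json
{
  "registration": {
    "displayName": "checkout-service-07",
    "tags": [
      { "name": "CheckoutService" },
      { "name": "v1.3" },
      { "name": "Staging" },
      { "name": "Team-Fulfillment" }
    ]
  }
}
```

Here CheckoutService is the component name, v1.3 the version, Staging the deployment environment, and Team-Fulfillment the audience/ownership tag.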
Naming conventions
A consistent naming scheme improves discoverability and reduces confusion.
Recommended practice
- Use kebab-case or PascalCase. Examples: checkout-service, OrderProcessor, team-fulfillment.
- Use clear, descriptive names; avoid abbreviations unless they are widely understood internally.
- Use consistent environment names. Examples: prod, staging, dev.
Avoid
- Ambiguous or overloaded tags such as `test`, `service`, or `node`.
- Tags that duplicate the agent's `displayName`.
Use compound tags for precision at scale
For medium and large deployments, combining attributes into a single compound tag provides precise, predictable targeting.
Example:
`CatalogManager_1.3_Staging_Walmart`
Without compound tags, developers may target broad categories such as Staging, unintentionally sending actions to dozens or hundreds of agents, many of which may not contain the relevant source file.
This can result in:
- Unnecessary load.
- Error responses from unrelated agents.
- Slower debugging.
- Noise in the IDE.
Compound tags eliminate this accidental over-targeting, as the sketch below illustrates.
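As a sketch, an agent belonging to this deployment could register the compound tag directly (illustrative values, using the same metadata format shown at the end of this page):

```json
{
  "registration": {
    "displayName": "catalog-manager-03",
    "tags": [
      { "name": "CatalogManager_1.3_Staging_Walmart" }
    ]
  }
}
```

A developer selecting this tag reaches only the CatalogManager 1.3 agents in the Walmart staging deployment, rather than every agent tagged Staging.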
If compound tags are not practical
If you operate a large matrix of components × versions × environments, creating every possible compound tag may be unmanageable. In these cases, encourage developers to:
- Use Custom Sources to combine tags ad hoc.
- Combine environment, component, and version tags only when needed.
- Apply Conditions to limit the number of responding agents.
Keep tag groups reasonably small
To avoid overwhelming the server or developers:
- Aim to keep each tag associated with 50 agents or fewer.
- Developers rarely need more than 50 simultaneous snapshot responses.
- Larger groups increase noise, latency, and UI clutter.
If a tag must represent a large pool, use naming clues such as:
- `VLG_Staging` (very large group)
- `LG_Frontend` (large group)
Train teams to:
- Use Conditions (for example, filter by `hostname`, `region`, or `version`).
- Combine multiple tags into a Custom Source for more precise targeting.
Example:
LG_Frontend + v5 + Production
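A combination like this assumes that each agent in the intended group carries all of the constituent tags. A sketch of such a registration (tag names illustrative, same format as the example at the end of this page):

```json
{
  "registration": {
    "displayName": "frontend-21",
    "tags": [
      { "name": "LG_Frontend" },
      { "name": "v5" },
      { "name": "Production" }
    ]
  }
}
```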
Adopt cross-team conventions
Adopt shared conventions across your engineering organization to maintain consistency at scale.
Recommended:
- Define an approved tag list per team or domain.
- Document examples for services, environments, and roles.
- Review tags periodically to avoid tag sprawl.
- Provide templates for new services (see the sketch after this list).
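One way to provide such a template is a copyable metadata skeleton that encodes the agreed taxonomy. The placeholders below are purely illustrative and must be replaced per service; adapt the structure to your own conventions and to the registration format shown in the next section:

```json
{
  "registration": {
    "displayName": "<component>-<instance-id>",
    "tags": [
      { "name": "<Component>" },
      { "name": "v<version>" },
      { "name": "<Environment>" },
      { "name": "<Component>_<version>_<Environment>" }
    ]
  }
}
```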
A consistent taxonomy ensures that tags scale with the system, not against it.
Example metadata configuration
```json
{
  "registration": {
    "displayName": "checkout-service-12",
    "tags": [
      { "name": "CheckoutService" },
      { "name": "Production" },
      { "name": "v1.3.2" }
    ]
  }
}
```
Next steps
Once you have established a tagging strategy, you can configure tags for your specific runtime environment using the dedicated agent guides: