Lightrun AI Skills

Lightrun AI Skills are reusable workflows for AI coding assistants. They guide an assistant through a focused investigation using Lightrun MCP, so the assistant can collect runtime evidence from live applications before producing a diagnosis or fix recommendation.

Skills are distributed from the lightrun-ai repository. The repository contains skill folders, plugin metadata for supported AI clients, and an MCP configuration that points to the Lightrun MCP server.

All current and future Lightrun AI Skills use the same installation flow. Install the skill package once, then use the specific skill that matches the investigation you want to run.

When to use Lightrun AI Skills

Use Lightrun AI Skills when you want an AI assistant to investigate an issue with real runtime evidence instead of relying only on logs, traces, or static code analysis.

Lightrun AI Skills are useful for:

  • Debugging issues that are difficult to reproduce locally.
  • Investigating production or staging behavior without redeploying code.
  • Validating root-cause hypotheses with live expression values, call stacks, execution counts, durations, or custom metrics.
  • Producing a structured handoff that separates runtime facts from inferred conclusions.

Requirements

Before using a Lightrun AI Skill, ensure that:

  • You have access to a Lightrun organization.
  • You have access to the application or service you want to inspect.
  • Lightrun MCP is configured for your AI assistant.
  • OAuth authorization for Lightrun MCP is complete.
  • At least one valid Lightrun runtime source is available to your user.

For MCP setup instructions, see the MCP quickstart guide.

Available skills

The first available skill is the Live Runtime Debugging Skill.

How skills relate to MCP

Lightrun MCP exposes the runtime tools. Lightrun AI Skills define how an AI assistant should use those tools during an investigation.

For example, MCP provides tools for discovering runtime sources and inspecting live application behavior. A skill tells the assistant to start with source discovery, choose the right runtime target, collect evidence for specific hypotheses, and explain the final diagnosis with confidence and remaining uncertainty.

Install the skill package

The recommended installation path depends on your AI client.

Cursor

  1. Open Cursor.
  2. Install or import a plugin from a repository URL.
  3. Use the following repository URL:

    https://github.com/lightrun-platform/lightrun-ai
    
  4. Confirm that the plugin loads the Lightrun skill package.

  5. Confirm that the Lightrun MCP server is configured and authenticated.

The repository includes Cursor plugin metadata in .cursor-plugin/plugin.json and a Cursor MCP configuration in .cursor/mcp.json.

MCP endpoint

The repository's default Cursor MCP configuration points to the Lightrun SaaS MCP endpoint, https://app.lightrun.com/mcp. If you use a single-tenant or on-premises deployment, configure your AI client with the MCP endpoint shown in your Lightrun Management Portal.
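As a rough sketch, a minimal Cursor MCP configuration pointing at the SaaS endpoint could look like the following. Only the endpoint URL comes from this page; the "mcpServers" structure follows Cursor's remote-server configuration format, and the server name "lightrun" is an illustrative choice:

```json
{
  "mcpServers": {
    "lightrun": {
      "url": "https://app.lightrun.com/mcp"
    }
  }
}
```

For a single-tenant or on-premises deployment, replace the "url" value with the MCP endpoint shown in your Lightrun Management Portal.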

Claude

The repository includes Claude plugin metadata in .claude-plugin/plugin.json and .claude-plugin/marketplace.json.

Use your Claude plugin or marketplace installation flow to install the package from:

https://github.com/lightrun-platform/lightrun-ai

After installation, confirm that Lightrun MCP is configured and OAuth authorization is complete.

Codex

Codex discovers skills in .agents/skills directories, either at the root of a repository or in your home directory.

  1. Copy or symlink the skill folder from the repository's skills/ directory into one of the following locations:

    $REPO_ROOT/.agents/skills
    $HOME/.agents/skills
    
  2. Restart or reload Codex if needed.

  3. Confirm that Lightrun MCP is configured and OAuth authorization is complete.
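Step 1 above can be sketched as a short shell snippet. The clone location ($HOME/lightrun-ai) is an assumption; set REPO to wherever you cloned the lightrun-ai repository, and swap ln -sfn for cp -r if you prefer copying over symlinking:

```shell
# Assumption: the lightrun-ai repository is cloned at $HOME/lightrun-ai.
REPO="${REPO:-$HOME/lightrun-ai}"
DEST="${DEST:-$HOME/.agents/skills}"

mkdir -p "$DEST"

# Symlink each skill folder from the repository's skills/ directory.
# A symlink keeps the skill in sync with `git pull`; use `cp -r` instead
# if you want an independent copy.
for skill in "$REPO"/skills/*/; do
  [ -d "$skill" ] || continue   # skip if the glob matched nothing
  ln -sfn "${skill%/}" "$DEST/$(basename "$skill")"
done
```

Using $REPO_ROOT/.agents/skills instead of $HOME/.agents/skills scopes the skill to a single project rather than your whole user account.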

Last update: May 4, 2026