
What is the Lightrun MCP?πŸ”—

Lightrun MCP lets your AI coding assistant (Cursor, Gemini, Copilot, and others) use Lightrun's runtime context on your behalf.
Built on the Model Context Protocol (MCP) standard, it can connect with most AI assistants and agents on the market.


Why use Lightrun MCP when you already use the IDE plugin?πŸ”—

Lightrun MCP keeps the full power of the Lightrun IDE plugin while adding natural-language workflows and AI-driven runtime investigation.

All the Lightrun power - right where you codeπŸ”—

  • Inspect expression values, call stacks, and code metrics in live runtimes.
  • Production-safe, on-demand access to runtime data.
  • No code changes. No rebuilds. No redeployments.

Less configuration. More conversation - in your native languageπŸ”—

  • No forms, syntax, or variable names to remember.
    Describe what you need in natural language, and the AI handles the rest.
  • No need to remember agent pools, tags, or agent names.
    Let the AI discover active services and recommend targets to you.
  • Unsure which condition or expression to use?
    The AI refines parameters iteratively until the desired result is reached β€” while you focus on coding.

AI-driven investigation and analysisπŸ”—

  • Runtime data often proves relevant in unexpected places. The AI suggests investigations, surfaces relevant use cases, and expands how you inspect your code.
  • You don’t have to analyze the results manually.
    It analyzes the results, validates hypotheses, and provides actionable recommendations.

Not convinced? Try it.πŸ”—

The MCP quickstart guide walks you through a 4-step flow in about 6 minutes.

Lightrun MCP architectureπŸ”—

The Lightrun MCP architecture follows the standard MCP client–server model and consists of the following components:

MCP ClientπŸ”—

The MCP client is embedded in AI coding tools (such as Cursor). It issues MCP requests to inspect application state and request runtime data.
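Registering the Lightrun MCP Server with an AI coding tool typically means adding an entry to that tool's MCP configuration file. A minimal sketch for a Cursor-style `mcp.json` is shown below; the server URL path and the use of a bearer token header are placeholders, not documented values:

```
{
  "mcpServers": {
    "lightrun": {
      "url": "https://<your-lightrun-server>/mcp",
      "headers": {
        "Authorization": "Bearer <your-lightrun-api-token>"
      }
    }
  }
}
```

Consult your AI tool's documentation for the exact configuration file location and schema, and the Lightrun setup guide for the actual server endpoint.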

MCP Server (Lightrun MCP Server)πŸ”—

The Lightrun MCP Server is a component of the Lightrun Server and implements the MCP interface, translating MCP requests into Lightrun API calls.

In this architecture, an AI coding tool acts as an MCP client and communicates with the Lightrun Server using MCP. The server mediates all interactions with the application runtime, ensuring that debugging and observability operations are executed safely and without requiring code changes or redeployments.
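Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages as defined by the MCP specification. The sketch below builds a `tools/call` request of the kind an MCP client would send; the tool name `add_snapshot` and its arguments are hypothetical illustrations, not documented Lightrun tool names:

```python
import json

def build_tools_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' message as defined by the MCP spec.

    The tool name and arguments passed by the caller are illustrative;
    Lightrun's actual tool names are not documented here.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical request: ask the server to capture a runtime snapshot.
msg = build_tools_call(1, "add_snapshot", {
    "file": "src/checkout/payment.py",
    "line": 42,
    "condition": "order.total > 1000",
})
print(json.loads(msg)["method"])  # tools/call
```

The Lightrun MCP Server would translate such a request into the corresponding Lightrun API call and return the result in the JSON-RPC response.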

The following diagram shows the overall Lightrun MCP architecture and the interaction between MCP clients, the Lightrun Server, and the application runtime.

Lightrun MCP architecture

Security and privacyπŸ”—

Data protectionπŸ”—

MCP results are subject to PII redaction based on your PII redaction configuration.

AuthenticationπŸ”—

Authentication is required to access Lightrun MCP. MCP clients authenticate using Lightrun Server credentials, and access is governed by existing Lightrun roles and access permissions. Note that authorization is enforced per tool invocation, not when the client connects to the Lightrun MCP Server.

Rules and limitsπŸ”—

The following rules and limits apply to the results returned to AI agents when using Lightrun MCP:

  • A maximum of 50 runtime inspection hits is allowed per request. The default value is 1.
  • Inspected objects are limited to a depth of three nesting levels.
  • The default timeout is 60 seconds with a maximum timeout of 10 minutes.
  • Quota limits cannot be ignored or bypassed.

The Lightrun MCP exposes application code and runtime data to MCP clients for inspection and debugging. Avoid sharing sensitive information with MCP clients you do not trust.

Getting startedπŸ”—

To start using Lightrun MCP, follow the MCP quickstart guide.


Last update: March 17, 2026