What is the Lightrun MCP?
Lightrun MCP lets your AI coding assistant (Cursor, Gemini, Copilot, and others) use Lightrun's runtime context on your behalf.
Built on the Model Context Protocol (MCP) standard, it can connect with most AI assistants and agents on the market.
Here's a short introduction video to Lightrun MCP:
Why use Lightrun MCP when you already use the IDE plugin?
Lightrun MCP keeps the full power of the Lightrun IDE plugin while adding natural-language workflows and AI-driven runtime investigation.
All the Lightrun power - right where you code
- Inspect expression values, call stacks, and code metrics in live runtimes.
- Production-safe, on-demand access to runtime data.
- No code changes. No rebuilds. No redeployments.
Less configuration. More conversation - in your native language
- No forms, syntax, or variable names to remember.
Describe what you need in natural language, and the AI handles the rest.
- No need to remember agent pools, tags, or agent names.
Let the AI discover active services and recommend targets to you.
- Unsure which condition or expression to use?
The AI refines parameters iteratively until the desired result is reached, while you focus on coding.
AI-driven investigation and analysis
- Runtime data often proves relevant in unexpected places. The AI suggests investigations, surfaces relevant use cases, and expands how you inspect your code.
- You don't have to analyze the results manually.
The AI analyzes them, validates hypotheses, and provides actionable recommendations.
Not convinced? Try it.
The MCP quickstart guide walks you through a 4-step flow in about 6 minutes.
Lightrun MCP architecture
The Lightrun MCP architecture follows the standard MCP client-server model and consists of the following components:
MCP Client
The MCP client is embedded in AI coding tools (such as Cursor). It issues MCP requests to inspect application state and request runtime data.
MCP Server (Lightrun MCP Server)
The Lightrun MCP Server is a component of the Lightrun Server and implements the MCP interface, translating MCP requests into Lightrun API calls.
In this architecture, an AI coding tool acts as an MCP client and communicates with the Lightrun Server using MCP. The server mediates all interactions with the application runtime, ensuring that debugging and observability operations are executed safely and without requiring code changes or redeployments.
The following diagram shows the overall Lightrun MCP architecture and the interaction between MCP clients, the Lightrun Server, and the application runtime.
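To make the client-server exchange concrete, the sketch below builds an MCP-style tool invocation as a JSON-RPC 2.0 message, which is the wire format the MCP standard is based on. The tool name `inspect_expression` and its arguments are hypothetical, used only to illustrate the request shape; they are not Lightrun's actual tool schema.

```python
import json

def make_tools_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical inspection request an AI coding tool might issue on your
# behalf; the Lightrun MCP Server would translate it into Lightrun API calls.
request = make_tools_call(1, "inspect_expression", {"expression": "order.total"})
print(json.dumps(request))
```

The AI assistant's embedded MCP client emits requests like this; the Lightrun MCP Server answers them, so the runtime itself is never touched directly by the client.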

Security and privacy
Data protection
MCP results are subject to PII redaction based on your PII redaction configuration.
Authentication
Authentication is required to access Lightrun MCP. MCP clients authenticate using Lightrun Server credentials, and access is governed by existing Lightrun roles and access permissions. Authorization is enforced on tool usage and not on the connection to the Lightrun MCP Server.
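As a minimal sketch of what client-side credential handling might look like, the snippet below attaches a token to outgoing MCP requests as a bearer header. The environment variable name and header scheme are assumptions for illustration only; consult your Lightrun Server configuration for the actual credential mechanism.

```python
import os

def auth_headers():
    """Return HTTP headers carrying Lightrun credentials (illustrative only)."""
    # LIGHTRUN_API_TOKEN is a hypothetical variable name, not an official one.
    token = os.environ.get("LIGHTRUN_API_TOKEN", "")
    if not token:
        raise RuntimeError("Lightrun credentials are required to access the MCP server")
    return {"Authorization": f"Bearer {token}"}
```

Whatever the transport, the key point from the section above holds: authorization is evaluated per tool invocation, not once at connection time, so role and permission changes take effect on the next request.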
Rules and limits
The following rules and limits apply to the results returned to AI agents when using Lightrun MCP:
- A maximum of 50 runtime inspection hits is allowed per request. The default value is 1.
- Inspected objects are limited to a depth of three nesting levels.
- The default timeout is 60 seconds with a maximum timeout of 10 minutes.
- Quota limits cannot be ignored or bypassed.
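The limits above can be summarized as client-side validation. The numeric bounds come from the rules listed; the parameter names in this sketch are hypothetical, since Lightrun enforces these limits server-side regardless of what the client sends.

```python
MAX_HITS = 50          # maximum runtime inspection hits per request
DEFAULT_HITS = 1       # default number of hits
MAX_DEPTH = 3          # maximum object nesting depth
DEFAULT_TIMEOUT_S = 60 # default timeout (seconds)
MAX_TIMEOUT_S = 600    # maximum timeout: 10 minutes

def clamp_request(hits=DEFAULT_HITS, depth=MAX_DEPTH, timeout_s=DEFAULT_TIMEOUT_S):
    """Clamp requested values to the limits Lightrun MCP enforces."""
    return {
        "hits": min(max(hits, 1), MAX_HITS),
        "depth": min(max(depth, 1), MAX_DEPTH),
        "timeout_s": min(max(timeout_s, 1), MAX_TIMEOUT_S),
    }
```

For example, a request asking for 100 hits with a one-hour timeout would be served with at most 50 hits and a 10-minute timeout.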
Legal disclaimer
The Lightrun MCP exposes application code and runtime data to MCP clients for inspection and debugging. Avoid sharing sensitive information with MCP clients you do not trust.
Getting started
To start using Lightrun MCP, read the following topics:
- MCP quickstart guide: Connect an AI assistant to Lightrun MCP and perform an initial runtime inspection.
- Lightrun MCP tools: Reference documentation for the tools exposed by Lightrun MCP.