You are viewing the v1 docs for LangChain, which is currently under active development.
This guide walks you through using LangGraph Studio to visualize, interact with, and debug your agent locally. LangGraph Studio is our free-to-use agent IDE that integrates with LangSmith to enable tracing, evaluation, and prompt engineering. See exactly how your agent thinks, trace every decision, and ship smarter, more reliable agents.

Prerequisites

Before you begin, ensure you have the following:

- Python 3.11 or newer
- A LangSmith account and API key (used in step 3)
- An API key for your model provider (this guide uses OpenAI)

Set up a local LangGraph server

1. Install the LangGraph CLI

# Python >= 3.11 is required.
pip install --upgrade "langgraph-cli[inmem]"

2. Prepare your agent

We’ll use the following simple agent as an example:
agent.py
from langchain.agents import create_agent

def send_email(to: str, subject: str, body: str):
    """Send an email"""
    email = {
        "to": to,
        "subject": subject, 
        "body": body
    }
    # ... email sending logic

    return f"Email sent to {to}"

agent = create_agent(
    "openai:gpt-4o",
    tools=[send_email],
    prompt="You are an email assistant. Always use the send_email tool.",
)
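Before starting the server, you can sanity-check the tool function on its own, with no model call involved. A minimal sketch (the email address and arguments here are illustrative) that verifies the return value Studio will later display as the tool's output:

```python
# Standalone check of the send_email tool logic from agent.py.
def send_email(to: str, subject: str, body: str):
    """Send an email"""
    email = {
        "to": to,
        "subject": subject,
        "body": body
    }
    # ... email sending logic
    return f"Email sent to {to}"

result = send_email("alice@example.com", "Hello", "Just testing.")
print(result)  # -> Email sent to alice@example.com
```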

3. Environment variables

Create a .env file in the root of your project and fill in the necessary API keys. We’ll need to set the LANGSMITH_API_KEY environment variable to the API key you get from LangSmith.
Be sure not to commit your .env to version control systems such as Git!
.env
LANGSMITH_API_KEY=lsv2...
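The LangGraph server loads this file for you, so you don't need to parse it yourself. Purely for illustration, here is a minimal sketch of how `KEY=value` lines map to environment variables (a simplified assumption, not the server's actual loader — real loaders also handle quoting and interpolation):

```python
import os

def load_env(text: str) -> None:
    """Minimal .env-style loader: one KEY=value per line, '#' comments skipped."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()

load_env("LANGSMITH_API_KEY=lsv2-example-key")
print(os.environ["LANGSMITH_API_KEY"])  # -> lsv2-example-key
```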

4. Make your app LangGraph-compatible

Inside your app’s directory, create a configuration file langgraph.json:
langgraph.json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.py:agent"
  },
  "env": ".env"
}
create_agent() returns a compiled LangGraph graph, which is what the path in the graphs key points to.
See the LangGraph configuration file reference for detailed explanations of each key in the JSON object of the configuration file.
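A quick way to catch typos in the config before starting the server is to parse it yourself. A minimal sketch using only the standard library (the keys checked here are just the ones used in this guide, not the full schema):

```python
import json

config_text = """
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.py:agent"
  },
  "env": ".env"
}
"""

config = json.loads(config_text)

# Check the keys this guide relies on, and the "path:variable" graph format.
assert "dependencies" in config and "graphs" in config
for name, target in config["graphs"].items():
    path, _, attr = target.rpartition(":")
    assert path and attr, f"graph {name!r} must look like './file.py:variable'"

print(config["graphs"]["agent"])  # -> ./src/agent.py:agent
```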
So far, our project structure looks like this:
my-app/
├── src
│   └── agent.py
├── .env
└── langgraph.json

5. Install dependencies

In the root of your new LangGraph app, install the dependencies:
pip install -e .
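`pip install -e .` assumes the project defines a package, which the tree above doesn't show. As a hypothetical starting point, a minimal `pyproject.toml` placed next to `langgraph.json` could look like this (the name and dependency list are illustrative; adjust to your project):

```toml
[project]
name = "my-app"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "langchain",
    "langchain-openai",
]
```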

6. View your agent in Studio

Start your LangGraph server:
langgraph dev
Safari blocks localhost connections to Studio. To work around this, run the command with the --tunnel flag (langgraph dev --tunnel) to access Studio via a secure tunnel.
Your agent is now accessible via the API at http://127.0.0.1:2024, and you can open the Studio UI at https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024:
Agent view in LangGraph studio UI
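Studio itself is a hosted UI; the baseUrl query parameter simply tells it which local server to connect to. A small sketch of how that URL is assembled (stdlib only; note that urlencode percent-encodes the base URL, which browsers accept just like the unencoded form shown above):

```python
from urllib.parse import urlencode

base_url = "http://127.0.0.1:2024"  # local LangGraph server from `langgraph dev`
studio = "https://smith.langchain.com/studio/"

url = f"{studio}?{urlencode({'baseUrl': base_url})}"
print(url)  # -> https://smith.langchain.com/studio/?baseUrl=http%3A%2F%2F127.0.0.1%3A2024
```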
Studio makes each step of your agent easily observable. Replay any input and inspect the exact prompt, tool arguments, return values, and token/latency metrics. If a tool throws an exception, Studio records it with surrounding state so you can spend less time debugging. Keep your dev server running, edit prompts or tool signatures, and watch Studio hot-reload. Re-run the conversation thread from any step to verify behavior changes. See Manage threads for more details. As your agent grows, the same view scales from a single-tool demo to multi-node graphs, keeping decisions legible and reproducible.
For an in-depth look at LangGraph Studio, check out our comprehensive LangGraph Studio overview.