This quickstart shows you how to deploy an application to LangSmith Cloud using the `langgraph deploy` command.
The `langgraph deploy` command is in beta.

Prerequisites
Before you begin, ensure you have:
- A LangSmith account on the Plus plan or above and an API key.
- Docker installed and running. Verify with `docker ps`.
- On Apple Silicon (M1/M2/M3): Docker Buildx for cross-compiling to `linux/amd64`.
- The LangGraph CLI installed.
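If you don't have the CLI yet, the usual route is pip (assuming the `langgraph-cli` package name on PyPI):

```shell
pip install -U langgraph-cli
```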
1. Create a LangGraph app
Create a new app from the `new-langgraph-project-python` template.
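A minimal sketch of this step, assuming the CLI's `new` subcommand accepts the template name above (the `my-app` path is a placeholder):

```shell
langgraph new my-app --template new-langgraph-project-python
cd my-app
```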
2. Set your API key
Add your LangSmith API key to a `.env` file in your project root.
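A sketch of the `.env` entry, assuming `LANGSMITH_API_KEY` is the variable name the CLI reads (the value is a placeholder):

```shell
# .env
LANGSMITH_API_KEY=lsv2_your_api_key_here
```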
The `langgraph deploy` command reads this automatically. Alternatively, pass it inline.
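For the inline form, a one-off invocation might look like this (assuming the same `LANGSMITH_API_KEY` variable name; the value is a placeholder):

```shell
LANGSMITH_API_KEY=lsv2_your_api_key_here langgraph deploy
```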
3. Deploy
Run the deploy command from your project directory. This creates a dev deployment named after your project directory by default; use `--name` or `--deployment-type prod` to override.
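Per the step above, deploying with the defaults is a single command run from the project root:

```shell
langgraph deploy
```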
You can also use `langgraph deploy list` to see all deployments, `langgraph deploy logs` to tail runtime logs, and `langgraph deploy delete <ID>` to remove a deployment. For details, refer to the CLI reference.
4. Test in Studio
Studio is an interactive agent IDE connected directly to your deployment. Use it to send messages, inspect intermediate state at each node, edit state mid-run, and replay from any prior checkpoint without writing code. Once the deployment is ready:
- Go to LangSmith and select Deployments in the left sidebar.
- Select your deployment to view its details.
- Click Studio in the top right corner to open Studio.
5. Test the API
Copy the API URL from the deployment details view, then use it to call your application:
- Python SDK (Async)
- Python SDK (Sync)
- JavaScript SDK
- REST API
- Install the LangGraph Python SDK.
- Send a message to the assistant (stateless run).
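The two steps above can be sketched with the sync client as follows. The deployment URL, API key, and the assistant name `agent` (the template's usual default) are placeholders/assumptions; substitute your own values:

```python
# pip install langgraph-sdk
from langgraph_sdk import get_sync_client

# Placeholders: use the API URL from the deployment details view and your API key.
client = get_sync_client(
    url="https://my-deployment.example.langgraph.app",
    api_key="lsv2_your_api_key_here",
)

# Stateless run: passing None instead of a thread ID means no state is persisted.
for chunk in client.runs.stream(
    None,
    "agent",  # assistant name registered by the template (assumed default)
    input={"messages": [{"role": "human", "content": "Hello!"}]},
    stream_mode="updates",
):
    print(chunk.event, chunk.data)
```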
Next steps
Assistants
Deploy the same graph with different models, prompts, or tools per assistant.
Threads
Persist state across multiple runs so your agent remembers context between interactions.
Runs
Kick off background runs for long-running jobs and stream results back to your client.

