This quickstart shows you how to deploy an application to LangSmith Cloud using the langgraph deploy command.
For a comprehensive Cloud deployment guide including GitHub-based deployments and all configuration options, refer to the Cloud deployment setup guide.
The langgraph deploy command is in beta.

Prerequisites

Before you begin, ensure you have:
  1. A LangSmith account and API key.
  2. Python installed, along with the LangGraph CLI: pip install langgraph-cli

1. Create a LangGraph app

Create a new app from the new-langgraph-project-python template:
langgraph new path/to/your/app --template new-langgraph-project-python
cd path/to/your/app
Run langgraph new without --template for an interactive menu of available templates.

2. Set your API key

Add your LangSmith API key to a .env file in your project root:
LANGSMITH_API_KEY=lsv2_...
The langgraph deploy command reads this automatically. Alternatively, pass it inline:
LANGSMITH_API_KEY=lsv2_... langgraph deploy
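The .env file is plain KEY=value lines. The CLI needs no help reading it, but if you also want the key available to your own Python code (for the SDK calls later in this guide), here is a minimal stdlib sketch of loading it — illustrative only; packages like python-dotenv do this more robustly:

```python
import os
from pathlib import Path

def load_dotenv_minimal(path: str = ".env") -> None:
    """Load simple KEY=value lines from a .env file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

After calling load_dotenv_minimal(), os.environ["LANGSMITH_API_KEY"] holds your key.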

3. Deploy

Run the deploy command from your project directory:
langgraph deploy
This creates a dev deployment named after your project directory by default. Pass --name to choose a different name, or --deployment-type prod to create a production deployment instead.
To update an existing deployment after making code changes, re-run langgraph deploy. It finds the existing deployment by name and updates it in place.
You can also use langgraph deploy list to see all deployments, langgraph deploy logs to tail runtime logs, and langgraph deploy delete <ID> to remove a deployment. For details, refer to the CLI reference.

4. Test in Studio

Studio is an interactive agent IDE connected directly to your deployment. Use it to send messages, inspect intermediate state at each node, edit state mid-run, and replay from any prior checkpoint without writing code. Once the deployment is ready:
  1. Go to LangSmith and select Deployments in the left sidebar.
  2. Select your deployment to view its details.
  3. Click Studio in the top right corner to open Studio.

5. Test the API

Copy the API URL from the deployment details view, then use it to call your application:
  1. Install the LangGraph Python SDK:
    pip install langgraph-sdk
    
  2. Send a message to the assistant (stateless run):
    import asyncio

    from langgraph_sdk import get_client

    client = get_client(url="your-deployment-url", api_key="your-langsmith-api-key")

    async def main():
        async for chunk in client.runs.stream(
            None,  # Threadless run
            "agent",  # Name of the assistant, defined in langgraph.json
            input={
                "messages": [{
                    "role": "human",
                    "content": "What is LangGraph?",
                }],
            },
            stream_mode="updates",
        ):
            print(f"Receiving new event of type: {chunk.event}...")
            print(chunk.data)
            print("\n\n")

    asyncio.run(main())
    
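In updates mode, each chunk.data maps a node name to the state values that node just wrote. A small helper — illustrative, assuming the messages-based state used by the template — to pull just the message text out of a chunk:

```python
def extract_messages(chunk_data: dict) -> list[str]:
    """Collect message content from an 'updates'-mode chunk, which maps
    node names to the state updates those nodes produced."""
    contents = []
    for node_update in chunk_data.values():
        for message in (node_update or {}).get("messages", []):
            contents.append(message.get("content", ""))
    return contents
```

Inside the streaming loop above, print(extract_messages(chunk.data)) then shows only the message text.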

Next steps

Assistants

Deploy the same graph with different models, prompts, or tools per assistant.
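As a preview, a hedged sketch of creating a second assistant over the same deployed graph with the SDK's assistants client. The graph_id matches the name in langgraph.json; the "model" configuration key is hypothetical — your graph's actual configuration schema is defined in its code:

```python
async def make_assistant_variant(url: str, api_key: str) -> dict:
    # Imported inside the function so the sketch stays importable standalone.
    from langgraph_sdk import get_client

    client = get_client(url=url, api_key=api_key)
    # A second assistant over the same graph, with different configuration.
    return await client.assistants.create(
        graph_id="agent",
        config={"configurable": {"model": "some-other-model"}},  # hypothetical key
    )
```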

Threads

Persist state across multiple runs so your agent remembers context between interactions.
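As a preview, a hedged sketch of a stateful run with the same SDK: creating a thread and passing its id in place of the None used for the threadless run above:

```python
async def chat_on_thread(url: str, api_key: str, text: str) -> None:
    # Imported inside the function so the sketch stays importable standalone.
    from langgraph_sdk import get_client

    client = get_client(url=url, api_key=api_key)
    thread = await client.threads.create()  # state persists under this thread
    async for chunk in client.runs.stream(
        thread["thread_id"],  # instead of None for a threadless run
        "agent",
        input={"messages": [{"role": "human", "content": text}]},
        stream_mode="updates",
    ):
        print(chunk.data)
```

Calling this twice with the same thread id would let the agent see the earlier messages.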

Runs

Kick off background runs for long-running jobs and stream results back to your client.
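As a preview, a hedged sketch of the background pattern with the same SDK: runs.create returns immediately while work continues server-side, and runs.join blocks until the run finishes:

```python
async def run_in_background(url: str, api_key: str, text: str) -> dict:
    # Imported inside the function so the sketch stays importable standalone.
    from langgraph_sdk import get_client

    client = get_client(url=url, api_key=api_key)
    thread = await client.threads.create()
    run = await client.runs.create(  # returns immediately; work continues server-side
        thread["thread_id"],
        "agent",
        input={"messages": [{"role": "human", "content": text}]},
    )
    # Block until the run finishes, then return the final thread state.
    return await client.runs.join(thread["thread_id"], run["run_id"])
```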