

LangSmith Deployment is a workflow orchestration runtime purpose-built for agent workloads. It provides the managed infrastructure agents need to run reliably in production at scale, supporting the full lifecycle from local development to deployment.

Deployable products

LangSmith Deployment is framework-agnostic, which means you can deploy agents built with:

Deep Agents

Use the Deep Agents CLI to deploy a deep agent to LangSmith Cloud.

LangGraph (and LangChain)

Use the LangGraph CLI and app templates to deploy a LangGraph application to LangSmith.
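The LangGraph CLI locates your application through a `langgraph.json` file at the project root. A minimal sketch (the graph path and name below are placeholders for your own project layout):

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.py:graph"
  },
  "env": ".env"
}
```

Here `"agent"` becomes the assistant name the deployed server exposes, and the value points at a compiled graph object in your code.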

Other frameworks

Use the LangGraph Functional API to deploy Strands, CrewAI, and other agent frameworks.

Deployment environments

You can run the same Agent Server runtime in several hosting models. A standalone server is the lightest option: you run the containers yourself, without the LangSmith control plane. For managed deployments through the UI and APIs, use Cloud or Self-hosted (the full platform in your own infrastructure).

Cloud

Fully managed by LangChain. Create deployments from GitHub in the LangSmith UI or with langgraph deploy. Requires a Plus plan or above.

Standalone server

Deploy Agent Server with Docker, Compose, or Kubernetes. Bring your own PostgreSQL, Redis, and LangSmith license; no control plane. Optional LangSmith tracing to Cloud or a self-hosted instance.
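With Compose, a standalone deployment might look like the following sketch. The image name is a placeholder for one built with `langgraph build`; the environment variable names (`REDIS_URI`, `DATABASE_URI`, `LANGSMITH_API_KEY`) follow the standalone-server docs but should be checked against your Agent Server version:

```yaml
services:
  redis:
    image: redis:6
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  agent-server:
    image: my-agent-server   # placeholder: built with `langgraph build -t my-agent-server`
    ports:
      - "8123:8000"
    depends_on:
      - redis
      - postgres
    environment:
      REDIS_URI: redis://redis:6379
      DATABASE_URI: postgres://postgres:postgres@postgres:5432/postgres?sslmode=disable
      LANGSMITH_API_KEY: ${LANGSMITH_API_KEY}
```

Because there is no control plane in this model, scaling, upgrades, and backups of PostgreSQL and Redis are entirely your responsibility.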

Self-hosted

Run the full LangSmith platform, including the control plane and data plane, in your cloud (for example on Kubernetes). Requires an Enterprise plan. Integrates observability, evaluation, and agent deployment in one private stack.

Same runtime, same APIs. What changes is who manages the infrastructure. For a feature-level comparison and infrastructure setup, see Platform setup.

Deployment capabilities

Once an agent is deployed, you work with Agent Server’s execution model: assistants for configuration, threads for state, and runs for workloads.

Core capabilities

Stream to users, pause for human review, handle concurrent input, and connect via MCP and A2A.

Studio

Use an interactive environment for developing and debugging agents.

Advanced configuration

Authentication, encryption, custom routes, and short- and long-term memory stores.

Agent composition

RemoteGraph lets any agent call other deployed agents, alongside MCP and A2A for cross-framework interoperability.

Reference & operations

Tutorials

Securing and customizing your server

Operations