- Open source harness: MIT licensed, available for Python and TypeScript
- AGENTS.md: open standard for agent instructions
- Agent Skills: open standard for agent knowledge and actions
- Any model, any sandbox: no provider lock-in
- Open protocols: MCP, A2A, Agent Protocol
- Self-hostable: LangSmith Deployments can be self-hosted so memory stays in your infrastructure
Compare to Claude Managed Agents
| | Deep Agents Deploy | Claude Managed Agents |
|---|---|---|
| Model support | OpenAI, Anthropic, Google, Bedrock, Azure, Fireworks, Baseten, OpenRouter, many more | Anthropic only |
| Harness | Open source (MIT) | Proprietary, closed source |
| Sandbox | LangSmith, Daytona, Modal, Runloop, or custom | Built in |
| MCP support | ✅ | ✅ |
| Skill support | ✅ | ✅ |
| AGENTS.md support | ✅ | ❌ |
| Agent endpoints | MCP, A2A, Agent Protocol | Proprietary |
| Self hosting | ✅ | ❌ |
What you’re deploying
deepagents deploy packages your agent configuration and deploys it as a LangSmith Deployment. You configure your agent with a few parameters:
| Parameter | Description |
|---|---|
| `model` | The LLM to use. Any provider works; see supported models. |
| `AGENTS.md` | The system prompt, loaded at the start of each session. |
| `skills` | Agent Skills for specialized knowledge and actions. Skills are synced into the sandbox so the agent can execute them at runtime. See skills docs. |
| `mcp.json` | MCP tools (HTTP/SSE). See MCP docs. |
| `sandbox` | Optional execution environment. See sandbox providers. |
Usage
`deepagents deploy` looks for `deepagents.toml` in the current directory. Pass `--config` to use a different path.
deepagents init
Scaffold a new agent project:
| File | Purpose |
|---|---|
| `deepagents.toml` | Agent config: name, model, optional sandbox |
| `AGENTS.md` | System prompt loaded at session start |
| `.env` | API key template (`ANTHROPIC_API_KEY`, `LANGSMITH_API_KEY`, etc.) |
| `mcp.json` | MCP server configuration (empty by default) |
| `skills/` | Directory for Agent Skills, with an example review skill |
Edit `AGENTS.md` with your agent’s instructions and run `deepagents deploy`.
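`AGENTS.md` is free-form instructions for the agent. As an illustrative sketch (the contents here are entirely invented, not part of the scaffold):

```markdown
# Agent instructions

You are a code review assistant for this repository.

## Conventions
- Follow PEP 8 for Python changes.
- Flag any function longer than 50 lines.
- Prefer small, focused pull requests.
```

Whatever you put here is loaded as persistent context at the start of every session.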
Project layout
The deploy command uses a convention-based project layout. Place the following files alongside your `deepagents.toml` and they are automatically discovered:
| File/directory | Purpose | Required |
|---|---|---|
| `AGENTS.md` | Memory for the agent. Provides persistent context (project conventions, instructions, preferences) that is always loaded at startup. | Yes |
| `skills/` | Directory of skill definitions. Each subdirectory should contain a `SKILL.md` file. | No |
| `mcp.json` | MCP server configuration. Only `http` and `sse` transports are supported in deployed contexts. | No |
| `.env` | Environment variables (API keys, secrets). Placed alongside `deepagents.toml` at the project root. | No |
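As a sketch of an `mcp.json` with a single HTTP server, assuming the conventional `mcpServers` layout used by most MCP clients (the server name and URL are illustrative; see the MCP docs for the exact shape expected here):

```json
{
  "mcpServers": {
    "docs": {
      "transport": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

A `stdio` entry would be rejected at bundle time, since deployed contexts only support HTTP and SSE transports.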
Configuration file
`deepagents.toml` configures the agent’s identity and sandbox environment. Only the `[agent]` section is required. The `[sandbox]` section is optional and defaults to no sandbox.
[agent]
(Required)
Core agent identity. For more on model selection and provider configuration, see supported models.
`name`
Name for the deployed agent. Used as the assistant identifier in LangSmith.

`model`
Model identifier in `provider:model` format. See supported models.

The `name` field is the only required value in the entire configuration file. Everything else has defaults.

The deploy bundle is derived from `deepagents.toml`:
- Skills: the bundler recursively scans `skills/`, skipping hidden dotfiles, and bundles the rest.
- MCP servers: if `mcp.json` exists, it is included in the deployment and `langchain-mcp-adapters` is added as a dependency. Only HTTP/SSE transports are supported (stdio is rejected at bundle time).
- Model dependencies: the provider prefix in the `model` field determines the required `langchain-*` package (e.g., `anthropic` -> `langchain-anthropic`).
- Sandbox dependencies: the `[sandbox].provider` value maps to its partner package (e.g., `daytona` -> `langchain-daytona`).
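As an illustration of this mapping, a config like the following (the model string is illustrative) would add `langchain-anthropic` and `langchain-daytona` to the bundle:

```toml
[agent]
name = "research-agent"
model = "anthropic:claude-sonnet-4-5"  # "anthropic" prefix -> langchain-anthropic

[sandbox]
provider = "daytona"                   # -> langchain-daytona partner package
```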
[sandbox]
Configure the isolated execution environment where the agent runs code. Sandboxes provide a container with a filesystem and shell access, so untrusted code cannot affect the host. For supported providers and advanced sandbox configuration, see sandboxes.
When omitted or set to `provider = "none"`, the sandbox is disabled. Enable a sandbox only when the agent needs code execution or skill script execution.
Sandbox provider. Determines where the container runs. Supported values: `"none"`, `"daytona"`, `"modal"`, `"runloop"`, `"langsmith"` (private beta). See sandbox integrations for provider details.

Provider-specific template name for the sandbox environment.

Base Docker image for the sandbox container.

Sandbox lifecycle scope. `"thread"` creates one sandbox per conversation; `"assistant"` shares a single sandbox across all conversations for the same assistant.
- `"thread"` (default): Each conversation gets its own sandbox. Different threads get different sandboxes, but the same thread reuses its sandbox across turns. Use this when each conversation should start with a clean environment.
- `"assistant"`: All conversations share one sandbox. Files, installed packages, and other state persist across conversations. Use this when the agent maintains a long-lived workspace like a cloned repo.
.env
Place a .env file alongside deepagents.toml with your API keys:
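For example, using the variable names from the init template (the values here are placeholders, not real keys):

```bash
ANTHROPIC_API_KEY=sk-ant-...
LANGSMITH_API_KEY=lsv2_...
```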
Sandbox providers
Set `[sandbox].provider` in `deepagents.toml` and add the required env vars to `.env`. For available providers, see sandbox integrations. For lifecycle patterns and SDK usage, see sandboxes.
Deployment endpoints
The deployed server exposes:
- MCP: call your agent as a tool from other agents
- A2A: multi-agent orchestration via A2A protocol
- Agent Protocol: standard API for building UIs
- Human-in-the-loop: approval gates for sensitive actions
- Memory: short-term and long-term memory access
Examples
A content writing agent needs only a model and system prompt, with no code execution.
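A minimal `deepagents.toml` for such an agent might look like this sketch (the name and model string are illustrative; any supported `provider:model` identifier works):

```toml
[agent]
name = "content-writer"
model = "anthropic:claude-sonnet-4-5"

# No [sandbox] section: defaults to no sandbox, which suits a
# prompt-only agent that never executes code.
```

The system prompt itself lives in `AGENTS.md` next to this file, so the config stays this small.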