# Deep Agents overview

> Build agents that can plan, use subagents, and leverage file systems for complex tasks

The easiest way to start building agents and applications powered by LLMs, with built-in capabilities for task planning, file systems for context management, subagent spawning, and long-term memory.
Deep agents can handle any task, and are designed especially for complex, multi-step ones.

We think of `deepagents` as an ["agent harness"](/oss/python/concepts/products#agent-harnesses-like-the-deep-agents-sdk). It runs the same core tool-calling loop as other agent frameworks, but adds built-in tools and capabilities.

[`deepagents`](https://pypi.org/project/deepagents/) is a standalone library built on top of [LangChain](/oss/python/langchain/)'s core building blocks for agents. It uses the [LangGraph](/oss/python/langgraph/) runtime for durable execution, streaming, human-in-the-loop, and other features.

The [`deepagents` repository](https://github.com/langchain-ai/deepagents) contains:

* **Deep Agents SDK**: A package for building agents that can handle any task
* [**Deep Agents CLI**](/oss/python/deepagents/cli): A terminal coding agent built on the Deep Agents SDK
* [**ACP integration**](/oss/python/deepagents/acp): An Agent Client Protocol connector for using deep agents in code editors like Zed

[LangChain](/oss/python/langchain/) is the framework that provides the core building blocks for your agents.
To learn more about the differences between LangChain, LangGraph, and Deep Agents, see [Frameworks, runtimes, and harnesses](/oss/python/concepts/products). For a side-by-side comparison with Anthropic's harness, see [Deep Agents vs. Claude Agent SDK](/oss/python/deepagents/comparison).

## <Icon icon="wand" /> Create a deep agent

<Tabs>
  <Tab title="Google">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-google-genai
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="google_genai:gemini-3.1-pro-preview",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>

  <Tab title="OpenAI">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-openai
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="openai:gpt-5.4",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>

  <Tab title="Anthropic">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-anthropic
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="anthropic:claude-sonnet-4-6",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>

  <Tab title="OpenRouter">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-openrouter
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="openrouter:anthropic/claude-sonnet-4-6",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>

  <Tab title="Fireworks">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-fireworks
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="fireworks:accounts/fireworks/models/qwen3p5-397b-a17b",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>

  <Tab title="Baseten">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-baseten
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="baseten:zai-org/GLM-5",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>

  <Tab title="Ollama">
    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    # pip install -qU deepagents langchain-ollama
    from deepagents import create_deep_agent

    def get_weather(city: str) -> str:
        """Get weather for a given city."""
        return f"It's always sunny in {city}!"

    agent = create_deep_agent(
        model="ollama:devstral-2",
        tools=[get_weather],
        system_prompt="You are a helpful assistant",
    )

    # Run the agent
    agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
    )
    ```
  </Tab>
</Tabs>

See the [Quickstart](/oss/python/deepagents/quickstart/) and [Customization guide](/oss/python/deepagents/customization/) to get started building your own agents and applications with Deep Agents.

<Tip>
  Trace requests, debug agent behavior, and evaluate outputs with [LangSmith](https://smith.langchain.com?utm_source=docs\&utm_medium=cta\&utm_campaign=langsmith-signup\&utm_content=oss-deepagents-overview). Follow the [tracing quickstart](/langsmith/trace-with-langchain) to get set up. When ready for production, [deploy to LangSmith Cloud](/langsmith/deploy-to-cloud) for managed hosting.
</Tip>

## When to use Deep Agents

Use the **Deep Agents SDK** when you want to build agents that can:

* **Handle complex, multi-step tasks** that require planning and decomposition
* **Manage large amounts of context** through file system tools and [summarization](/oss/python/deepagents/context-engineering#summarization)
* **Swap filesystem backends** to use in-memory state, local disk, durable stores, [sandboxes](/oss/python/deepagents/sandboxes), or [your own custom backend](/oss/python/deepagents/backends)
* **Execute shell commands** via the `execute` tool when using a [sandbox backend](/oss/python/deepagents/sandboxes)
* **Delegate work** to specialized subagents for context isolation
* **Persist memory** across conversations and threads
* **Control filesystem access** with declarative [permission rules](/oss/python/deepagents/permissions) that restrict which files agents can read or write
* **Require human approval** for sensitive operations with [human-in-the-loop](/oss/python/deepagents/human-in-the-loop) workflows
* **Use any model** — [provider agnostic](/oss/python/deepagents/models) across frontier and open models

For building simpler agents, consider using LangChain's [`create_agent`](/oss/python/langchain/agents) or building a custom [LangGraph](/oss/python/langgraph/overview) workflow.

## Core capabilities

<Card title="Planning and task decomposition" icon="timeline">
  Deep Agents include a built-in [`write_todos`](/oss/python/langchain/middleware/built-in#to-do-list) tool that enables agents to break down complex tasks into discrete steps, track progress, and adapt plans as new information emerges.
</Card>
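Conceptually, the planning tool boils down to the agent maintaining a small list of steps, each with a status, and rewriting that list as it works. The sketch below illustrates the shape of that state in plain Python; it is an illustration of the pattern, not the `deepagents` implementation, and the status names are assumptions:

```python
# Minimal sketch of the todo-list state a planning tool maintains.
# Illustrative only; status names are assumed, not the deepagents schema.
TODO_STATUSES = {"pending", "in_progress", "completed"}

def write_todos(todos: list[dict]) -> list[dict]:
    """Validate and return a fresh plan, as a planning tool might."""
    for todo in todos:
        if todo["status"] not in TODO_STATUSES:
            raise ValueError(f"unknown status: {todo['status']!r}")
    return todos

# The agent overwrites the whole list each time it re-plans.
plan = write_todos([
    {"content": "Read the failing test", "status": "completed"},
    {"content": "Patch the parser", "status": "in_progress"},
    {"content": "Re-run the test suite", "status": "pending"},
])
remaining = [t["content"] for t in plan if t["status"] != "completed"]
```

Because the model rewrites the full list on every call, the current plan always sits in context, which is what lets it adapt as new information emerges.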

<Card title="Context management" icon="scissors">
  File system tools ([`ls`](/oss/python/deepagents/harness#virtual-filesystem-access), [`read_file`](/oss/python/deepagents/harness#virtual-filesystem-access), [`write_file`](/oss/python/deepagents/harness#virtual-filesystem-access), [`edit_file`](/oss/python/deepagents/harness#virtual-filesystem-access)) allow agents to offload large context to in-memory or filesystem storage, preventing context window overflow and enabling work with variable-length tool results. Auto-summarization compacts older conversation messages when the context window grows long, keeping the agent effective across extended sessions.
</Card>
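The key idea behind context offloading is that a tool can write a large payload to the virtual filesystem and return only a short pointer, so the full payload never enters the model's context window. A minimal stand-in for that pattern, using a plain dict in place of a real filesystem backend:

```python
# Sketch of context offloading. A dict stands in for a filesystem
# backend; deepagents exposes this via read_file/write_file tools.
VFS: dict[str, str] = {}

def fetch_logs() -> str:
    """Tool that offloads its large result and returns a pointer."""
    big_result = "\n".join(f"line {i}" for i in range(10_000))
    VFS["/logs/build.txt"] = big_result  # full payload goes to storage
    return f"Saved {len(big_result)} chars to /logs/build.txt"

summary = fetch_logs()                              # short string in context
preview = VFS["/logs/build.txt"].splitlines()[:2]   # read back on demand
```

The agent can later read back just the slice it needs, rather than carrying ten thousand lines of logs through every subsequent model call.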

<Card title="Shell execution" icon="terminal">
  When using a [sandbox backend](/oss/python/deepagents/sandboxes), agents get an `execute` tool to run shell commands for tests, builds, git operations, and system tasks. Sandbox backends provide isolation so agents can execute code without compromising your host system.
</Card>

<Card title="Pluggable filesystem backends" icon="plug">
  The virtual filesystem is powered by [pluggable backends](/oss/python/deepagents/backends) that you can swap to fit your use case. Choose from in-memory state, local disk, LangGraph store for cross-thread persistence, [sandboxes](/oss/python/deepagents/sandboxes) for isolated code execution (Modal, Daytona, Deno), or combine multiple backends with composite routing. You can also implement your own custom backend.
</Card>

<Card title="Subagent spawning" icon="users-group">
  A built-in `task` tool enables agents to spawn specialized subagents for context isolation. This keeps the main agent's context clean while still going deep on specific subtasks.
</Card>
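Subagents are typically described declaratively: a name the main agent can address, a description telling it when to delegate, and a prompt scoping the subagent's behavior. The field names below are a hypothetical shape for such a spec; check your installed `deepagents` version for the exact schema:

```python
# Hypothetical subagent spec. Field names are illustrative and may
# differ from the schema in your deepagents version.
research_subagent = {
    "name": "researcher",
    "description": "Delegate open-ended research questions here.",
    "prompt": "You are a focused research assistant. Return a short brief.",
    "tools": ["internet_search"],  # hypothetical tool name
}

# The main agent only ever sees the subagent's final answer via the
# task tool; the subagent's intermediate tool calls stay out of the
# main context window.
required_fields = {"name", "description", "prompt"}
has_required = required_fields <= research_subagent.keys()
```

This separation is what keeps the main agent's context clean while the subagent goes deep on its subtask.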

<Card title="Long-term memory" icon="database">
  Extend agents with persistent memory across threads using LangGraph's [Memory Store](/oss/python/langgraph/persistence#memory-store). Agents can save and retrieve information from previous conversations.
</Card>

<Card title="Filesystem permissions" icon="lock">
  Declare [permission rules](/oss/python/deepagents/permissions) that control which files and directories agents can read or write. Subagents can inherit or override the parent's rules.
</Card>
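Declarative permission rules generally reduce to matching a path against ordered glob patterns and applying the first rule that matches, with a default of deny. The rule format below is a sketch of that idea, not the `deepagents` schema:

```python
# Sketch of first-match-wins permission rules over glob patterns.
# The rule format is illustrative, not the deepagents schema.
from fnmatch import fnmatch

RULES = [
    {"pattern": "/workspace/*", "mode": "read_write"},
    {"pattern": "/secrets/*", "mode": "deny"},
]

def check(path: str, action: str) -> bool:
    """Return True if `action` ("read" or "write") is allowed on `path`."""
    for rule in RULES:
        if fnmatch(path, rule["pattern"]):
            if rule["mode"] == "deny":
                return False
            return action in ("read", "write") and rule["mode"] == "read_write"
    return False  # no rule matched: default deny
```

Subagent inheritance then just means starting the child from the parent's rule list and letting overrides prepend or replace entries.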

<Card title="Human-in-the-loop" icon="user-check">
  Configure [human approval](/oss/python/deepagents/human-in-the-loop) for sensitive tool operations using LangGraph's interrupt capabilities. Control which tools require confirmation before execution.
</Card>

<Card title="Skills" icon="puzzle">
  Extend agents with reusable [skills](/oss/python/deepagents/skills) that provide specialized workflows, domain knowledge, and custom instructions.
</Card>

<Card title="Smart defaults" icon="wand">
  Ships with opinionated system prompts that teach the model how to use its tools effectively — plan before acting, verify work, and manage context. Customize or replace the defaults as needed.
</Card>

## Get started

<CardGroup cols={2}>
  <Card title="Quickstart" icon="rocket" href="/oss/python/deepagents/quickstart">
    Build your first deep agent
  </Card>

  <Card title="Customization" icon="adjustments" href="/oss/python/deepagents/customization">
    Learn about customization options
  </Card>

  <Card title="Models" icon="cpu" href="/oss/python/deepagents/models">
    Configure models and providers
  </Card>

  <Card title="Backends" icon="plug" href="/oss/python/deepagents/backends">
    Choose and configure pluggable filesystem backends
  </Card>

  <Card title="Sandboxes" icon="cube" href="/oss/python/deepagents/sandboxes">
    Execute code in isolated environments
  </Card>

  <Card title="Permissions" icon="lock" href="/oss/python/deepagents/permissions">
    Control filesystem access with permission rules
  </Card>

  <Card title="Human-in-the-loop" icon="user-check" href="/oss/python/deepagents/human-in-the-loop">
    Configure approval for sensitive operations
  </Card>

  <Card title="CLI" icon="terminal" href="/oss/python/deepagents/cli/overview">
    Use the Deep Agents CLI
  </Card>

  <Card title="ACP" icon="plug-connected" href="/oss/python/deepagents/acp">
    Use deep agents in code editors via ACP
  </Card>

  <Card title="Reference" icon="external-link" href="https://reference.langchain.com/python/deepagents/">
    See the `deepagents` API reference
  </Card>
</CardGroup>
