Prerequisites
Before you begin, make sure you have an API key from a model provider (e.g., Anthropic, OpenAI). Deep agents require a model that supports tool calling. See customization for how to configure your model.
Step 1: Install dependencies
This guide uses Tavily as an example search provider, but you can substitute any search API (e.g., DuckDuckGo, SerpAPI, Brave Search).
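For example, with pip (the package names below assume the `deepagents` library and Tavily's Python client; install the client for whichever search provider you choose instead):

```
pip install deepagents tavily-python
```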
Step 2: Set up your API keys
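A minimal sketch, assuming you use an Anthropic model and Tavily for search (substitute the environment variables for your own providers):

```python
import getpass
import os

# Prompt for any keys that are not already set in the environment.
for var in ("ANTHROPIC_API_KEY", "TAVILY_API_KEY"):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
```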
Step 3: Create a search tool
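A search tool can be a plain Python function the model can call. The sketch below wraps the Tavily client (assumed from `tavily-python`); the docstring matters because the model reads it to decide when to use the tool:

```python
import os

from tavily import TavilyClient

tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])


def internet_search(query: str, max_results: int = 5) -> dict:
    """Run a web search and return the results."""
    return tavily_client.search(query, max_results=max_results)
```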
Step 4: Create a deep agent
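A minimal sketch, assuming `create_deep_agent` from the `deepagents` package accepts your tools and a system prompt (parameter names can differ between versions, e.g. older releases call this argument `instructions`):

```python
from deepagents import create_deep_agent

research_instructions = (
    "You are an expert researcher. Use the internet_search tool to gather "
    "information, then write a concise, well-structured report."
)

# Older releases of deepagents name this parameter `instructions`.
agent = create_deep_agent(
    tools=[internet_search],
    system_prompt=research_instructions,
)
```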
Step 5: Run the agent
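The agent is a LangGraph graph, so you can run it with `invoke`. The question below is just an illustrative example:

```python
result = agent.invoke(
    {
        "messages": [
            {"role": "user", "content": "What is quantum computing? Write a short report."}
        ]
    }
)

# The final report is the content of the last message in the returned state.
print(result["messages"][-1].content)
```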
How does it work?
Your deep agent automatically:
- Plans its approach using the built-in `write_todos` tool to break down the research task.
- Conducts research by calling the `internet_search` tool to gather information.
- Manages context by using file system tools (`write_file`, `read_file`) to offload large search results (see the sketch after this list).
- Spawns specialized subagents as needed to delegate complex subtasks.
- Synthesizes a report that compiles its findings into a coherent response.
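You can observe some of this work in the returned state. The sketch below assumes the state exposes `todos` and `files` keys alongside `messages`; the exact schema may differ across versions:

```python
# The plan the agent wrote with write_todos (assumed key name).
print(result.get("todos"))

# Files the agent offloaded search results into (assumed key name).
for name, content in result.get("files", {}).items():
    print(f"{name}: {len(content)} characters")
```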
Examples
For agents, patterns, and applications you can build with Deep Agents, see Examples.
Streaming
Deep agents have built-in streaming for real-time updates from agent execution using LangGraph. This allows you to observe output progressively and to review and debug agent and subagent work, such as tool calls, tool results, and LLM responses.
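For example, a sketch using LangGraph's `stream` method with `stream_mode="updates"` to print each node's output as it completes:

```python
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "What is quantum computing?"}]},
    stream_mode="updates",
):
    # Each chunk maps a node name to its state update (tool calls, results, messages).
    print(chunk)
```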
Next steps
Now that you’ve built your first deep agent:
- Customize your agent: Learn about customization options, including custom system prompts, tools, and subagents.
- Add long-term memory: Enable persistent memory across conversations.
- Deploy to production: Learn about deployment options for LangGraph applications.