This guide walks you through creating your first deep agent with planning, file system tools, and subagent capabilities. You'll build a research agent that can search the web and write a polished report.
**Using an AI coding assistant?**

- Install the LangChain Docs MCP server to give your agent access to up-to-date LangChain documentation and examples.
- Install LangChain Skills to improve your agent's performance on LangChain ecosystem tasks.
Deep Agents work with any LangChain chat model. Set the API key for your provider:

**Ollama**

```bash
# Local: Ollama must be running on your machine
# Cloud: Set your Ollama API key for hosted inference
export OLLAMA_API_KEY="your-api-key"
export TAVILY_API_KEY="your-tavily-api-key"
```

**Other providers**

```bash
# Set the API key for your provider
export <PROVIDER>_API_KEY="your-api-key"
export TAVILY_API_KEY="your-tavily-api-key"
```
```typescript
import { createDeepAgent } from "deepagents";

// System prompt to steer the agent to be an expert researcher
const researchInstructions = `You are an expert researcher. Your job is to conduct thorough research and then write a polished report.

You have access to an internet search tool as your primary means of gathering information.

## \`internet_search\`

Use this to run an internet search for a given query. You can specify the max number of results to return, the topic, and whether raw content should be included.`;
```
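The agent also needs the search tool the prompt refers to. The following is a minimal sketch that continues the snippet above, assuming the Tavily JS client (`@tavily/core`) and that `createDeepAgent` accepts `tools` and `systemPrompt` options; the exact option names may differ in your version of `deepagents`:

```typescript
import { tool } from "@langchain/core/tools";
import { tavily } from "@tavily/core";
import { z } from "zod";

// Tavily client for web search (reads TAVILY_API_KEY from the environment)
const tavilyClient = tavily({ apiKey: process.env.TAVILY_API_KEY });

// The tool the system prompt refers to as `internet_search`
const internetSearch = tool(
  async ({ query, maxResults }) => {
    const results = await tavilyClient.search(query, { maxResults });
    return JSON.stringify(results);
  },
  {
    name: "internet_search",
    description: "Run an internet search for a given query.",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z.number().default(5).describe("Max number of results to return"),
    }),
  }
);

// Wire the tool and system prompt into a deep agent
const agent = createDeepAgent({
  tools: [internetSearch],
  systemPrompt: researchInstructions,
});
```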
Pick a model from your provider. By default, `createDeepAgent` uses `claude-sonnet-4-6`. Pass a model string to use a different provider; see Suggested models for the full list.
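For example, a sketch of overriding the default with a provider-prefixed model string, continuing from the snippet above (the identifier shown is illustrative; check Suggested models for supported values):

```typescript
// Sketch: pass a provider:model string to use a different provider
const agentWithOpenAI = createDeepAgent({
  model: "openai:gpt-4o",
  tools: [internetSearch],
  systemPrompt: researchInstructions,
});
```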
Deep Agents have built-in streaming, powered by LangGraph, for real-time updates during agent execution. This lets you observe output progressively and review or debug agent and subagent work, including tool calls, tool results, and LLM responses.
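A minimal sketch of consuming the stream, assuming the `agent` created above and LangGraph's `streamMode: "updates"`:

```typescript
// Sketch: stream step-by-step updates from a single agent run
const stream = await agent.stream(
  {
    messages: [
      { role: "user", content: "Research quantum computing and write a short report." },
    ],
  },
  { streamMode: "updates" }
);

// Each chunk is an update from one step: tool calls, tool results, or LLM responses
for await (const chunk of stream) {
  console.log(chunk);
}
```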