
- File operations - read, write, and edit files with tools that enable agents to manage and modify code and documentation.
- Shell execution - execute commands to run tests, build projects, manage dependencies, and interact with version control.
- Web search - search the web for up-to-date information and documentation (requires Tavily API key).
- HTTP requests - make HTTP requests to APIs and external services for data fetching and integration tasks.
- Task planning and tracking - break down complex tasks into discrete steps and track progress.
- Memory storage and retrieval - store and retrieve information across sessions, enabling agents to remember project conventions and learned patterns.
- Context compaction & offloading - summarize older conversation messages and offload originals to storage, freeing context window space during long sessions.
- Human-in-the-loop - require human approval for sensitive tool operations.
- Skills - extend agent capabilities with custom expertise and instructions.
- MCP tools - load external tools from Model Context Protocol servers.
- Tracing - trace agent operations in LangSmith for observability and debugging.
Built-in tools
The agent comes with the following built-in tools, which are available without configuration:

| Tool | Description | Human-in-the-Loop |
|---|---|---|
| ls | List files and directories | - |
| read_file | Read contents of a file; multimodal content for select models | - |
| write_file | Create or overwrite a file | Required¹ |
| edit_file | Make targeted edits to existing files | Required¹ |
| glob | Find files matching a pattern | - |
| grep | Search for text patterns across files | - |
| execute | Execute shell commands locally or in a remote sandbox | Required¹ |
| web_search | Search the web using Tavily | Required¹ |
| fetch_url | Fetch and convert web pages to markdown | Required¹ |
| task | Delegate work to subagents for parallel execution | Required¹ |
| ask_user | Ask the user free-form or multiple-choice questions | - |
| compact_conversation | Summarize older messages, offload originals to backend storage, and replace them in context with the summary | Mixed² |
| write_todos | Create and manage task lists for complex work | - |
¹ When running the CLI non-interactively (via -n or piped stdin), shell execution is disabled by default, even with -y/--auto-approve. Use -S/--shell-allow-list to allowlist specific commands (e.g., -S "pytest,git,make"), recommended for safe defaults, or all to permit any command. The DEEPAGENTS_CLI_SHELL_ALLOW_LIST environment variable is also supported. See Non-interactive mode and piping for more details.
² Older messages are offloaded to backend storage (/conversation_history/{thread_id}.md), replacing them in context with the summary. The agent can still retrieve the full history from the offloaded file if needed. The compact_conversation tool lets the agent (or you) trigger offloading on demand; when called as a tool, it requires user approval by default.

The Deep Agents CLI is not officially supported on Windows. Windows users can try running it under Windows Subsystem for Linux (WSL).
Quickstart
Set model credentials
The CLI works with any LLM that supports tool calling — OpenAI, Anthropic, Google, Ollama, and many more. See Providers for setup details.
Export your provider’s API key as an environment variable or add it to ~/.deepagents/.env:
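For example, in ~/.deepagents/.env (values are placeholders; set only the provider you use):

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```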
Install and run
The CLI ships with OpenAI, Anthropic, and Google support by default. Other providers (Ollama, Groq, xAI, etc.) are installed as optional extras — see Providers for details.
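A typical install with pip, assuming the published deepagents-cli package (uv and Homebrew installs also work; see /update):

```shell
pip install deepagents-cli
deepagents
```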
Give the agent a task
Enable tracing (optional)
Add tracing keys to ~/.deepagents/.env to see agent operations, tool calls, and decisions in LangSmith:
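For example, in ~/.deepagents/.env (the API key value is a placeholder):

```
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=lsv2_...
```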
Providers
The CLI is intentionally lightweight, shipping with OpenAI, Claude, and Gemini support out of the box. Each additional model provider is a separate dependency, so you only pull in what you need. Pick a model with --model at launch or switch mid-session with /model.
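For example (the model name here is illustrative):

```shell
deepagents --model openai:gpt-5.4
```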
Interactive mode
Type naturally as you would in a chat interface. The agent will use its built-in tools, skills, and memory to help you with tasks.
Slash commands
Use these commands within the CLI session:
- /model - Switch models or open the interactive model selector. See Switch models for details
- /remember [context] - Review conversation and update memory and skills. Optionally pass additional context
- /skill:<name> [args] - Directly invoke a skill by name. The skill’s SKILL.md instructions are injected into the prompt along with any arguments you provide
- /skill-creator [args] - Guide for creating effective agent skills
- /offload (alias /compact) - Free up context window space by offloading messages to storage with a summary placeholder. The agent can retrieve the full history from the offloaded file if needed
- /tokens - Display current context window token usage breakdown
- /clear - Clear conversation history and start a new thread
- /threads - Browse and resume previous conversation threads
- /mcp - Show active MCP servers and tools
- /reload - Re-read .env files, refresh configuration, and re-discover skills without restarting. Conversation state is preserved. See DEEPAGENTS_CLI_ prefix for override behavior
- /theme - Open the interactive theme selector to switch color themes. Built-in themes are available plus any user-defined themes
- /update - Check for and install CLI updates inline. Detects your install method (uv, Homebrew, pip) and runs the appropriate upgrade command
- /auto-update - Toggle automatic updates on or off
- /trace - Open the current thread in LangSmith (requires LANGSMITH_API_KEY)
- /editor - Open the current prompt in your external editor ($VISUAL/$EDITOR). See External editor
- /changelog - Open the CLI changelog in your browser
- /docs - Open the documentation in your browser
- /feedback - Open the GitHub issues page to file a bug report or feature request
- /version - Show installed deepagents-cli and SDK versions
- /help - Show help and available commands
- /quit - Exit the CLI
Shell commands
Type ! to enter shell mode, then type your command.
Keyboard shortcuts
General
| Shortcut | Action |
|---|---|
| Enter | Submit prompt |
| Shift+Enter, Ctrl+J, Alt+Enter, or Ctrl+Enter | Insert newline |
| Ctrl+A | Select all text in input |
| @filename | Auto-complete files and inject content |
| Shift+Tab or Ctrl+T | Toggle auto-approve |
| Ctrl+U | Delete to start of line |
| Ctrl+X | Open prompt in external editor |
| Ctrl+O | Expand/collapse the most recent tool output |
| Escape | Interrupt current operation |
| Ctrl+C | Interrupt or quit |
| Ctrl+D | Exit |
Non-interactive mode and piping
Use -n to run a single task without launching the interactive UI:
When you pipe stdin together with -n or -m, the piped content appears first, followed by the text you pass to the flag.
The maximum piped input size is 10 MiB.
Shell execution is disabled by default in this mode. Use -S/--shell-allow-list to enable specific commands (e.g., -S "pytest,git,make"), recommended for safe defaults, or all to permit any command.
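For example (the file and task are illustrative):

```shell
# Piped content appears in the prompt before the -n text
cat error.log | deepagents -n "diagnose this traceback"
```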
Clean output and buffering
Use -q for clean output suitable for piping into other commands, and --no-stream to buffer the full response (instead of streaming) before writing to stdout:
In non-interactive mode, the agent is instructed to make reasonable assumptions and proceed autonomously rather than ask clarifying questions. It also favors non-interactive command variants (e.g., npm init -y, apt-get install -y).
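For example (tasks and file names are illustrative):

```shell
# Clean output piped into another command
git log --oneline -20 | deepagents -n "summarize recent changes" -q

# Buffer the full response before writing it to stdout
deepagents -n "explain this repo's layout" -q --no-stream > layout.md
```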
Shell execution examples
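A couple of illustrative invocations combining non-interactive mode with the shell allowlist:

```shell
# Allow only the safe default command set
deepagents -n "run the tests and report failures" -S recommended

# Allowlist specific commands
deepagents -n "run pytest and commit the fix" -S "pytest,git"
```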
Switch models
You can switch models during a session without restarting the CLI using the /model command, or at launch with the --model flag:
Run /model without arguments to open an interactive model selector that displays available models grouped by provider.
For full details on switching models, setting a default, and adding custom model providers, see Model providers.
Interactive model selector
The selector shows a detail footer for the highlighted model with context window size, input modalities (text, image, audio, PDF, video), and capabilities (reasoning, tool calling, structured output). Values overridden by --profile-override or config.toml are marked with a yellow * prefix.
Model parameters
Pass extra model constructor parameters when switching mid-session using --model-params. These are session-only overrides and take the highest priority, overriding values from the config file’s params. --model-params cannot be combined with --default.
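For example (the model name and parameter values are illustrative):

```shell
deepagents --model anthropic:claude-opus-4-6 --model-params '{"temperature": 0.2}'
```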
The CLI stores all configuration under ~/.deepagents/. Within that directory, each agent gets its own subdirectory (default: agent):
| Path | Purpose |
|---|---|
| ~/.deepagents/config.toml | Model defaults, provider settings, constructor params, profile overrides, themes, update settings, MCP trust store |
| ~/.deepagents/.env | Global API keys and secrets. See configuration |
| ~/.deepagents/hooks.json | Lifecycle event hooks (session start/end, task complete, etc.) |
| ~/.deepagents/<agent_name>/ | Per-agent memory, skills, and conversation threads |
| .deepagents/ (project root) | Project-specific memory and skills, loaded when running inside a git repo |
For the full config.toml schema, provider parameters, profile overrides, and hook configuration, see Configuration.
Memory
There are two primary ways to customize any agent:
- Memory: AGENTS.md files and auto-saved memories that persist across sessions. Use memory for general coding style, preferences, and learned conventions.
- Skills: Global and project-specific context, conventions, guidelines, or instructions. Use skills for context that is only required when performing specific tasks.
Use /remember to explicitly prompt the agent to update its memory and skills from the current conversation.
Automatic memory
As you use the agent, it automatically stores information in ~/.deepagents/<agent_name>/memories/ as markdown files using a memory-first protocol:
- Research: Searches memory for relevant context before starting tasks
- Response: Checks memory when uncertain during execution
- Learning: Automatically saves new information for future sessions
AGENTS.md files
AGENTS.md files provide persistent context that is always loaded at session start:
- Global: ~/.deepagents/<agent_name>/AGENTS.md — loaded every session.
- Project: .deepagents/AGENTS.md in any git project root — loaded when the CLI is run from within that project.
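A minimal project-level file might look like the following sketch (the conventions and the referenced file are illustrative):

```markdown
<!-- .deepagents/AGENTS.md -->
# Project conventions

- Run the test suite with pytest before committing.
- Follow the repository's existing formatting configuration.

Additional knowledge files (referenced here so the agent knows they exist):
- .deepagents/architecture.md: service layout and data flow
```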
How memory works
The agent may also read its memory files when answering project-specific questions or when you reference past work or patterns.
The agent will update AGENTS.md as you provide information on how it should behave, feedback on its work, or instructions to remember something.
It will also update its memory if it identifies patterns or preferences from your interactions.
To add more structured project knowledge, add additional memory files in .deepagents/ and reference them in the AGENTS.md file.
You must reference additional files in the AGENTS.md file for the agent to be aware of them.
The additional files will not be read on startup, but the agent can reference and update them when needed.
When to use global vs. project AGENTS.md
Global AGENTS.md (~/.deepagents/agent/AGENTS.md)
- Your personality, style, and universal coding preferences
- General tone and communication style
- Universal coding preferences (formatting, type hints, etc.)
- Tool usage patterns that apply everywhere
- Workflows and methodologies that don’t change per-project
Project AGENTS.md (.deepagents/AGENTS.md in project root)
- Project-specific context and conventions
- Project architecture and design patterns
- Coding conventions specific to this codebase
- Testing strategies and deployment processes
- Team guidelines and project structure
Use skills
Skills are reusable agent capabilities that provide specialized workflows and domain knowledge. You can use skills to provide your deep agent with new capabilities and expertise. Deep agent skills follow the Agent Skills standard. Once you have added skills, your deep agent will automatically make use of them and update them as you use the agent and provide it with additional information. Use /remember to explicitly prompt the agent to update skills and memory from the current conversation.
Add skills
1. Create a skill:
   This generates:
2. Open the generated SKILL.md and edit the file to include your instructions.
3. Optionally add additional scripts or other resources to the test-skill folder. For more information, see Examples.
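The first step uses the skills subcommand; for example (the skill name matches the test-skill folder above):

```shell
deepagents skills create test-skill
```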
Install community skills
You can use tools like Vercel’s Skills CLI to install community Agent Skills in your environment and make them available to your deep agents.
Global installs (-g) symlink skills into ~/.deepagents/agent/skills/ — the default agent’s user-level skills directory. Project-level installs (omit -g) place skills in .deepagents/skills/ relative to the current directory, making them available to any agent running in that project regardless of agent name.
Global installs target the default agent directory only. If you use a custom-named agent, either use project-level installs or manually symlink the skill into ~/.deepagents/{your-agent}/skills/.
Skill discovery
At startup, the CLI discovers skills from both Deep Agents and shared alias directories. When duplicate skill names exist, later-precedence directories override earlier ones (see App data).
For project-specific skills, the project’s root folder must have a .git folder. When you start the CLI from anywhere within the project’s folder, the CLI will find the project’s root folder by checking for a containing .git folder.
For each skill, the CLI reads the name and the description from the SKILL.md file’s frontmatter. As you use the CLI, if a task matches the skill’s description, the agent will read the skill file and follow its instructions.
You can also invoke a skill directly with /skill:<name> [args]. Skill discovery runs at startup and again on /reload.
Invoke a skill from the command line
Use --skill to invoke a skill at launch without typing a slash command interactively. --skill also works in non-interactive mode. Note that --skill with --quiet or --no-stream requires -n (non-interactive mode).
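For example (the skill name is illustrative):

```shell
deepagents --skill test-skill
deepagents -n "lint the project" --skill test-skill
```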
List skills
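To list available skills from the shell:

```shell
deepagents skills list            # user-level skills
deepagents skills list --project  # project-level skills
```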
Subagents
Define custom subagents as markdown files so the CLI agent can delegate specialized tasks to them. Each subagent lives in its own folder with an AGENTS.md file:
The frontmatter supports name and description (same as the SubAgent dictionary spec). The markdown body becomes the subagent’s system_prompt. In addition to the base spec, AGENTS.md files support an optional model frontmatter field that overrides the main agent’s model for this subagent. It uses the provider:model-name format (e.g., anthropic:claude-opus-4-6, openai:gpt-5.4). Omit it to inherit the main agent’s model.
Other SubAgent fields (tools, middleware, interrupt_on, skills) are currently not configurable via AGENTS.md frontmatter — custom subagents defined this way inherit the main agent’s tools. Use the SDK directly for full control.
File format
Subagent AGENTS.md files use YAML frontmatter followed by a markdown body:
Example: cost-efficient subagents
Use a cheaper, faster model for simple delegation tasks while keeping the main agent on a more capable model.
This overrides the built-in general-purpose subagent, routing all delegated tasks to a cheaper model. See Override the general-purpose subagent for more.
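A sketch of such a subagent file, using only the frontmatter fields described above (the model name and prompt are illustrative):

```markdown
---
name: general-purpose
description: Handles simple delegated tasks quickly and cheaply
model: openai:gpt-5.4
---

You are a focused assistant for delegated subtasks. Complete the task
directly and report the result concisely.
```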
Use MCP tools
Extend the CLI with tools from external MCP (Model Context Protocol) servers. Place a .mcp.json at your project root and the CLI discovers it automatically. See the MCP tools guide for configuration format, auto-discovery, and troubleshooting.
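As a sketch, a .mcp.json commonly takes the mcpServers shape used across MCP clients (the server name and package here are illustrative; see the MCP tools guide for the exact format the CLI expects):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```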
Use remote sandboxes
The CLI uses the sandbox as tool pattern: the CLI process (LLM loop, memory, tool dispatch) runs on your machine, but agent tool calls (read_file, write_file, execute, etc.) target the remote sandbox, not your local filesystem. To get files into the sandbox, use a setup script or the provider’s file transfer APIs (see Working with files).
For a deeper look at sandbox architecture, integration patterns, and security best practices, see Sandboxes.
LangSmith sandbox support is included with the CLI by default. AgentCore, Modal, Daytona, and Runloop require installing extras.
Install provider dependency
- LangSmith
- AgentCore
- Daytona
- Runloop
- Modal
Included by default when installing deepagents-cli. No extra installation needed.
Sandbox flags and examples
| Flag | Description |
|---|---|
| --sandbox TYPE | Sandbox provider to use: langsmith, agentcore, modal, daytona, or runloop (default: none) |
| --sandbox-id ID | Reuse an existing sandbox by ID instead of creating a new one. Skips creation and cleanup. Refer to your sandbox documentation for more |
| --sandbox-setup PATH | Path to a setup script to run inside the sandbox upon creation |
Setup scripts
Use --sandbox-setup to run a shell script inside the sandbox after creation. This is useful for cloning repos, installing dependencies, and configuring environment variables.
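A sketch of a setup script (the repository URL and ${GITHUB_TOKEN} are illustrative; the CLI substitutes ${VAR} references from your local environment):

```shell
#!/bin/sh
# setup.sh: runs inside the sandbox after creation
set -e
git clone "https://${GITHUB_TOKEN}@github.com/acme/widget.git"
cd widget
pip install -r requirements.txt
```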
The CLI expands ${VAR} references in setup scripts using your local environment variables. Store secrets in a local .env file for the setup script to access.
Tracing with LangSmith
Enable LangSmith tracing to see agent operations, tool calls, and decisions in a LangSmith project. Add your tracing keys to ~/.deepagents/.env so tracing is enabled in every session without per-shell exports:
The CLI also loads .env in the project directory. See environment variables for the full loading order.
You can also set these as shell environment variables if you prefer. Shell exports always take precedence over .env values, so this is a good option for temporary overrides or testing:
Separate agent traces from app traces
When invoking the CLI programmatically from a LangChain application (e.g., as a subprocess in non-interactive mode), both your app and the CLI produce LangSmith traces. By default, these all land in the same project.
To send CLI traces to a dedicated project, set DEEPAGENTS_CLI_LANGSMITH_PROJECT. Then configure LANGSMITH_PROJECT for your parent application’s traces. This keeps your app-level observability clean while still capturing the agent’s internal execution in a separate project.
You can also scope LangSmith credentials to the CLI using the DEEPAGENTS_CLI_ prefix (e.g., DEEPAGENTS_CLI_LANGSMITH_API_KEY).
Use /trace to print the trace URL and open it in your browser.
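For example, in ~/.deepagents/.env (the project names are illustrative):

```
# CLI traces go to their own project
DEEPAGENTS_CLI_LANGSMITH_PROJECT=deepagents-cli-runs
# The parent application keeps tracing to its own project
LANGSMITH_PROJECT=my-app
```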
Command reference
Command-line options
| Option | Description |
|---|---|
| -a, --agent NAME | Use named agent with separate memory (default: agent) |
| -M, --model MODEL | Use a specific model (provider:model) |
| --model-params JSON | Extra kwargs to pass to the model as a JSON string (e.g., '{"temperature": 0.7}') |
| --default-model [MODEL] | Set the default model |
| --clear-default-model | Clear the default model |
| -r, --resume [ID] | Resume a session: -r for most recent, -r <ID> for a specific thread |
| -m, --message TEXT | Initial prompt to auto-submit when the session starts (interactive mode) |
| --skill NAME | Invoke a skill at startup |
| -n, --non-interactive TEXT | Run a single task non-interactively and exit. Shell is disabled unless --shell-allow-list is set |
| -q, --quiet | Clean output for piping — only the agent’s response goes to stdout. Requires -n or piped stdin |
| --no-stream | Buffer the full response and write to stdout at once instead of streaming. Requires -n or piped stdin |
| --stdin | Read input from stdin explicitly instead of auto-detection. Errors clearly when stdin is unavailable or is a TTY |
| -y, --auto-approve | Auto-approve all tool calls without prompting (disables human-in-the-loop). Toggle with Shift+Tab during an interactive session |
| -S, --shell-allow-list LIST | Comma-separated shell commands to auto-approve, 'recommended' for safe defaults, or 'all' to allow any command. Applies to both -n and interactive modes |
| --json | Emit machine-readable JSON from management subcommands (agents, threads, skills, update). Output envelope: {"schema_version": 1, "command": "...", "data": ...} |
| --sandbox TYPE | Remote sandbox for code execution: none (default), langsmith, agentcore, modal, daytona, runloop. LangSmith is included; AgentCore/Modal/Daytona/Runloop require extras |
| --sandbox-id ID | Reuse an existing sandbox (skips creation and cleanup) |
| --sandbox-setup PATH | Path to setup script to run in sandbox after creation |
| --mcp-config PATH | Add an explicit MCP config as the highest-precedence source (merged with auto-discovered configs) |
| --no-mcp | Disable all MCP tool loading |
| --trust-project-mcp | Trust project-level MCP configs with stdio servers (skip approval prompt) |
| --profile-override JSON | Override model profile fields as a JSON string (e.g., '{"max_input_tokens": 4096}'). Merged on top of config file profile overrides |
| --acp | Run as an ACP server over stdio instead of launching the interactive UI |
| -v, --version | Display version |
| -h, --help | Show help |
CLI commands
| Command | Description |
|---|---|
| deepagents help | Show help |
| deepagents agents list | List all agents (alias: ls) |
| deepagents agents reset --agent NAME | Clear agent memory and reset to default. Supports --dry-run |
| deepagents agents reset --agent NAME --target SOURCE | Copy memory from another agent |
| deepagents update | Check for and install CLI updates |
| deepagents skills list [--project] | List all skills (alias: ls) |
| deepagents skills create NAME [--project] | Create a new skill with template SKILL.md. Idempotent — re-creating an existing skill prints an informational message instead of an error |
| deepagents skills info NAME [--project] | Show detailed information about a skill |
| deepagents skills delete NAME [--project] [-f] | Delete a skill and its contents. Supports --dry-run |
| deepagents threads list [--agent NAME] [--limit N] | List sessions (alias: ls). Default limit: 20. -n is a short flag for --limit. Additional flags: --sort {created,updated}, --branch TEXT (filter by git branch), -v/--verbose (show all columns including branch, created time, and initial prompt), -r/--relative (relative timestamps) |
| deepagents threads delete ID | Delete a session. Supports --dry-run |
Management subcommands accept --json for machine-readable output. See command-line options for details.
Destructive commands (agents reset, skills delete, threads delete) support --dry-run to preview what would happen without making changes. In JSON mode, --dry-run returns the same envelope with a dry_run: true field.
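When scripting against --json output, the envelope can be consumed as sketched below; the sample payload is illustrative, not captured CLI output:

```python
import json

# Illustrative stand-in for output such as `deepagents threads list --json`
raw = '{"schema_version": 1, "command": "threads.list", "data": [{"id": "abc123"}]}'

envelope = json.loads(raw)
# Guard against future envelope revisions before touching the payload
assert envelope["schema_version"] == 1
thread_ids = [thread["id"] for thread in envelope["data"]]
print(thread_ids)
```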

