~/.deepagents/ directory. The main config files are:
| File | Format | Purpose |
|---|---|---|
| `config.toml` | TOML | Model defaults, provider settings, constructor params, profile overrides, themes, update settings, MCP trust store |
| `.env` | Dotenv | Global API keys and secrets |
| `hooks.json` | JSON | External tool subscriptions to CLI lifecycle events |
| `.mcp.json` | JSON | Global MCP server definitions |
Environment variables
The CLI loads environment variables from dotenv files so you don't need to export API keys in your shell profile or duplicate `.env` files across projects.
Loading order and precedence
Two `.env` files are loaded at startup:
- Project `.env`: the `.env` file in your current working directory (if present)
- Global `~/.deepagents/.env`: a single shared file that acts as a fallback for all projects

The project `.env` takes precedence over the global `.env`. Values already set in the shell are never overwritten, including on `/reload`.
DEEPAGENTS_CLI_ prefix
All CLI-specific environment variables use a DEEPAGENTS_CLI_ prefix (e.g., DEEPAGENTS_CLI_AUTO_UPDATE, DEEPAGENTS_CLI_DEBUG). See the CLI environment variable reference for the full list.
The prefix also works as an override mechanism for any environment variable the CLI reads, including third-party credentials. The CLI checks DEEPAGENTS_CLI_{NAME} first, then falls back to {NAME}:
~/.deepagents/.env
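A minimal sketch of how the override might look (key values illustrative):

```dotenv
# Read by any tool that checks OPENAI_API_KEY
OPENAI_API_KEY=sk-proj-example
# Checked first by the CLI only; wins over OPENAI_API_KEY
DEEPAGENTS_CLI_OPENAI_API_KEY=sk-proj-cli-example
```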
On `/reload`, the CLI re-reads `.env` files and picks up prefixed values, so you can rotate keys without restarting.
Example
Store API keys once in `~/.deepagents/.env`:
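For example (key values illustrative):

```dotenv
ANTHROPIC_API_KEY=sk-ant-example
OPENAI_API_KEY=sk-proj-example
```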
Those keys are then available in every project without duplicating a `.env` in the project directory.
Config file
~/.deepagents/config.toml lets you customize model providers, set defaults, and pass extra parameters to model constructors.
Default and recent model
[models].default always takes priority over [models].recent. The /model command only writes to [models].recent, so your configured default is never overwritten by mid-session switches. To remove the default, use /model --default --clear or delete the default key from the config file.
Provider configuration
Each provider is a TOML table under `[models.providers]`:
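A sketch of a provider table, using the Anthropic provider as an example (values illustrative):

```toml
[models.providers.anthropic]
api_key_env = "MY_ANTHROPIC_KEY"
models = ["claude-sonnet-4-5"]
```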
- `models`: A list of model names to show in the interactive `/model` switcher for this provider. For providers that already ship with model profiles, any names you add here appear alongside the bundled ones, useful for newly released models that haven't been added to the package yet. For arbitrary providers, this list is the only source of models in the switcher. Models listed here bypass the profile-based filtering criteria and always appear in the switcher, which makes this the recommended way to surface models that are excluded because their profile lacks `tool_calling` support or doesn't exist yet. This key is optional: you can always pass any model name directly to `/model` or `--model` regardless of whether it appears in the switcher; the provider validates the name at request time.
- `api_key_env`: Override the environment variable name checked for credentials. Most chat model packages read from a default env var automatically. See the Provider reference table for which variable each provider checks.
- `base_url`: Override the base URL used by the provider, if supported. Refer to your provider package's reference docs for more info.
- `params`: Extra keyword arguments forwarded to the model constructor. Flat keys (e.g., `temperature = 0`) apply to every model from this provider. Model-keyed sub-tables (e.g., `[params."gpt-4o"]`) override individual values for that model only; the merge is shallow (model wins on conflict).
- `profile` (Advanced): Override fields in the model's runtime profile (e.g., `max_input_tokens`). Flat keys apply to every model from this provider. Model-keyed sub-tables (e.g., `[profile."claude-sonnet-4-5"]`) override individual values for that model only; the merge is shallow (model wins on conflict). These overrides are applied after the model is created, so they take effect for context-limit display, auto-summarization, and any other feature that reads the profile.
- `class_path`: Used for arbitrary model providers. A fully-qualified Python class in `module.path:ClassName` format. When set, the CLI imports and instantiates this class directly for provider `<name>`. The class must be a `BaseChatModel` subclass.
- Provider visibility: Whether this provider appears in the `/model` selector. Set to `false` to hide a provider that was auto-discovered from an installed package (e.g., a transitive dependency you don't want cluttering the switcher). You can still use a disabled provider directly via `/model provider:model` or `--model`.

Model constructor params
Any provider can use the `params` table to pass extra arguments to the model constructor:
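For example, a provider-level `params` table might look like this (values illustrative):

```toml
[models.providers.ollama.params]
temperature = 0
num_ctx = 8192
```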
Per-model overrides
If a specific model needs different params, add a model-keyed sub-table under `params` to override individual values without duplicating the entire provider config:
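A sketch consistent with the outcomes listed just below (values illustrative):

```toml
[models.providers.ollama.params]
temperature = 0
num_ctx = 8192

[models.providers.ollama.params."qwen3:4b"]
temperature = 0.5
num_ctx = 4000
```

With these settings: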
- `ollama:qwen3:4b` gets `{temperature: 0.5, num_ctx: 4000}` (model overrides win).
- `ollama:llama3` gets `{temperature: 0, num_ctx: 8192}` (no override, provider-level params only).
CLI overrides with --model-params
For one-off adjustments without editing the config file, pass a JSON object via --model-params at launch or mid-session with the /model command:
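For example (values illustrative; `deepagents` assumed as the launch command):

```shell
deepagents --model-params '{"temperature": 0.2, "max_tokens": 2048}'
```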
Inside the TUI
--model-params cannot be combined with --default.
Profile overrides (Advanced)
Override fields in the model's runtime profile to change how the CLI interprets model capabilities. The most common use case is lowering `max_input_tokens` to trigger auto-summarization earlier, which is useful for testing or for constraining context usage:
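A sketch in `config.toml` (provider and value illustrative):

```toml
[models.providers.anthropic.profile]
max_input_tokens = 50000
```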
Per-model profile overrides merge the same way as `params`: the model-level value wins on conflict:
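For example (values illustrative):

```toml
[models.providers.anthropic.profile]
max_input_tokens = 100000

[models.providers.anthropic.profile."claude-sonnet-4-5"]
max_input_tokens = 50000
```

Here `claude-sonnet-4-5` gets `max_input_tokens = 50000`; every other model from the provider gets `100000`.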
CLI profile overrides with --profile-override (Advanced)
To override model profile fields at runtime without editing the config file, pass a JSON object via --profile-override:
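For example (value illustrative; `deepagents` assumed as the launch command):

```shell
deepagents --profile-override '{"max_input_tokens": 50000}'
```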
The `/model` command also accepts `--profile-override` for mid-session changes.
--profile-override values persist across mid-session /model hot-swaps — switching models re-applies the override to the new model.
Custom base URL
Some provider packages accept a `base_url` to override the default endpoint. For example, `langchain-ollama` defaults to `http://localhost:11434` via the underlying `ollama` client. To point it elsewhere, set `base_url` in your configuration:
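A sketch, assuming a remote Ollama host (URL illustrative):

```toml
[models.providers.ollama]
base_url = "http://gpu-box.local:11434"
```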
Compatible APIs
For providers that expose APIs that are wire-compatible with OpenAI or Anthropic, you can use the existing `langchain-openai` or `langchain-anthropic` packages by pointing `base_url` at the provider's endpoint:
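A sketch for an OpenAI-compatible endpoint (URL and env var name illustrative):

```toml
[models.providers.openai]
base_url = "https://api.example.com/v1"
api_key_env = "EXAMPLE_API_KEY"
```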
Any features added on top of the official spec by the provider will not be captured. If the provider offers a dedicated LangChain integration package, prefer that instead.
Adding models to the interactive switcher
Some providers (e.g., `langchain-ollama`) don't bundle model profile data (see the Provider reference for a full listing). In that case, the interactive `/model` switcher won't list models for that provider. You can fill the gap by defining a `models` list for the provider in your config file:
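For example (model names illustrative):

```toml
[models.providers.ollama]
models = ["llama3", "qwen3:4b"]
```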
The `/model` switcher will now include an Ollama section with these models listed.
This is entirely optional. You can always switch to any model by specifying its full name directly:
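For example:

```
/model ollama:qwen3:4b
```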
Arbitrary providers
You can use any LangChain `BaseChatModel` subclass via `class_path`. The CLI imports and instantiates the class directly; no built-in provider package is required.
api_key_env and base_url are optional. class_path providers are expected to handle their own authentication internally — useful when your model uses custom auth (JWT tokens, proprietary headers, mTLS, etc.) rather than a standard API key:
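A sketch (module path and class name hypothetical, matching the `xyz` provider named just below):

```toml
[models.providers.xyz]
class_path = "xyz_models.chat:XYZChatModel"
models = ["abc-xyz-1"]
```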
Switch to it with `/model xyz:abc-xyz-1` or `--model xyz:abc-xyz-1`.
Deep Agents requires tool calling support. If your custom model supports tool calling but the CLI doesn't know about it, declare it in the provider profile. Set `max_input_tokens` to what your model supports to enable accurate context length tracking and auto-summarization:
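A sketch for the hypothetical `my_custom` provider used below (values illustrative):

```toml
[models.providers.my_custom.profile]
tool_calling = true
max_input_tokens = 128000
```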
When you select `my_custom:my-model-v1` (via `/model` or `--model`), the model name (`my-model-v1`) is passed as the `model` kwarg to your class.
Alternatively, a custom model package can ship profile data in a `_PROFILES` dict in `<package>.data._profiles` in lieu of defining them under the `models` key. See LangChain model profiles for more info.
Skills extra allowed directories
By default, when the CLI loads skills it validates that a resolved skill file path stays inside one of the standard skill directories. This prevents symlinks inside skill directories from reading arbitrary files outside those roots. If you store shared skill assets in a non-standard location and use symlinks from a standard skill directory to reference them, you can add that location to the containment allowlist. This does not add a new skill discovery location: skills are still only discovered from the standard directories.

Paths added to the skill containment allowlist support `~` expansion. You can also set them via the `DEEPAGENTS_CLI_EXTRA_SKILLS_DIRS` environment variable as a colon-separated list:
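For example (paths illustrative):

```shell
export DEEPAGENTS_CLI_EXTRA_SKILLS_DIRS="$HOME/shared-skill-assets:/opt/skill-data"
```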
Changes take effect on `/reload`.
Themes
Use `/theme` to open an interactive theme selector. Navigate the list to preview themes in real time; press Enter to persist your choice to `config.toml`.
The CLI ships with many built-in themes. The default theme is langchain, a dark theme with LangChain-branded colors. The selected theme is persisted under [ui]:
User-defined themes
Define custom themes under `[themes.<name>]` sections in `config.toml`. Each section requires `label` (str). `dark` (bool) defaults to `false` if omitted; set it to `true` for dark themes. All color fields are optional; omitted fields fall back to the built-in dark or light palette based on the `dark` flag.
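A sketch (theme name illustrative; color fields omitted since their key names are not covered here):

```toml
[themes.midnight]
label = "Midnight"
dark = true
```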
The theme then appears in the `/theme` selector.
Override built-in theme colors
To tweak a built-in theme's colors without creating a new theme, use a `[themes.<builtin-name>]` section. Only color fields are read; `label` and `dark` are inherited from the built-in:
Changes to `[themes.*]` sections take effect on `/reload`.
Auto-update
The CLI can automatically check for and install updates. Enable it via either:
- Config file: set `auto_update = true` under `[update]` in `config.toml`
- Environment variable: set `DEEPAGENTS_CLI_AUTO_UPDATE=1`
You can also update on demand with the `/update` slash command, which bypasses the cache and reports success or failure inline.
After an upgrade, the CLI shows a “what’s new” banner on the next launch with a link to the changelog.
At session exit, if a newer version was detected during the session, an update banner is displayed as a reminder.
Managed deployments
The install script supports running as root, targeting macOS MDM tools (Kandji, Jamf, etc.) that execute scripts in a minimal root environment. When `id -u` is 0, the script:
- Resolves the real console user's `HOME` (via `/dev/console` or a `/Users` directory scan)
- Chowns all created files back to the target user after each install step
To enable auto-update for managed users, set `DEEPAGENTS_CLI_AUTO_UPDATE=1` in the user's shell profile or deploy a `config.toml` with `[update] auto_update = true` to `~/.deepagents/config.toml`. To suppress automatic updates and update checks entirely, set `DEEPAGENTS_CLI_NO_UPDATE_CHECK=1`.
CLI environment variable reference
All CLI-specific environment variables use the `DEEPAGENTS_CLI_` prefix. See the DEEPAGENTS_CLI_ prefix section for how the prefix also works as an override for third-party credentials.
- `DEEPAGENTS_CLI_AUTO_UPDATE`: Enable automatic CLI updates (`1`, `true`, or `yes`).
- `DEEPAGENTS_CLI_DEBUG`: Enable verbose debug logging to a file.
- Path for the debug log file.
- `DEEPAGENTS_CLI_EXTRA_SKILLS_DIRS`: Colon-separated paths added to the skill containment allowlist.
- Override the LangSmith project name for agent traces. See Tracing with LangSmith.
- `DEEPAGENTS_CLI_NO_UPDATE_CHECK`: Disable automatic update checking when set.
- Comma-separated shell commands to allow (or `recommended` / `all`).
- Attach a user identifier to LangSmith trace metadata.
External editor
Press `Ctrl+X` or type `/editor` to compose prompts in an external editor. The CLI checks `$VISUAL`, then `$EDITOR`, then falls back to `vi` (macOS/Linux) or `notepad` (Windows). GUI editors (VS Code, Cursor, Zed, Sublime Text, Windsurf) automatically receive a `--wait` flag so the CLI blocks until you close the file.
Hooks
Hooks let external programs react to CLI lifecycle events. Configure commands in `~/.deepagents/hooks.json`, and the CLI pipes a JSON payload to each matching command's stdin whenever an event fires.
Hooks run fire-and-forget in a background thread — they never block the CLI and failures are logged without interrupting your session.
Setup
Create `~/.deepagents/hooks.json`:
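A minimal sketch that appends every event payload to a log file (the `command` field name is assumed from the hook entry description in this section):

```json
{
  "hooks": [
    {
      "command": ["bash", "-c", "cat >> ~/deepagents-events.log"]
    }
  ]
}
```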
Every event the CLI emits is now appended to `~/deepagents-events.log`.
Hook configuration
The config file contains a single `hooks` array. Each entry has:
- Command and arguments to run. There is no shell expansion: use `["bash", "-c", "..."]` if needed.
- `events`: Event names to subscribe to. Omit or leave empty to receive all events.

An entry without an `events` filter receives every event the CLI emits.
Payload format
Each hook command receives a JSON object on stdin with an `"event"` key plus event-specific fields:
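For example, a permission request might look like this (field names beyond `event` are illustrative):

```json
{"event": "permission.request", "tools": ["shell", "write_file"]}
```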
Events reference
session.start
Fired when an agent session begins (both interactive and non-interactive modes).
The session thread identifier.
session.end
Fired when a session exits.
The session thread identifier.
user.prompt
Fired in interactive mode when the user submits a chat message.
No additional fields.
input.required
Fired when the agent requires human input (human-in-the-loop interrupt).
No additional fields.
permission.request
Fired before the approval dialog when one or more tool calls need user permission.
Names of the tools requesting approval.
tool.error
Fired when a tool call returns an error.
Names of the tool(s) that errored.
task.complete
Fired when the agent finishes its current task (the streaming loop ends without further interrupts).
The session thread identifier.
context.compact
Fired before the CLI compacts (summarizes) the conversation context.
No additional fields.
Execution model
- Background thread: Hook subprocesses run in a thread via `asyncio.to_thread`, so the main event loop is never blocked.
- Concurrent dispatch: When multiple hooks match an event, they run concurrently in a thread pool.
- 5-second timeout: Each command has a 5-second timeout. Commands that exceed this are killed.
- Fire-and-forget: Errors are caught per-hook and logged at debug/warning level. A failing hook never crashes or stalls the CLI.
- Lazy loading: The config file is read once on the first event dispatch and cached for the rest of the session.
- No shell expansion: Commands are executed directly (not through a shell). Wrap in `["bash", "-c", "..."]` if you need shell features like pipes or variable expansion.
Hook examples
Log all events to a file
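A sketch, assuming a `command` field for the hook entry:

```json
{
  "hooks": [
    {
      "command": ["bash", "-c", "cat >> ~/deepagents-events.log; echo >> ~/deepagents-events.log"]
    }
  ]
}
```

The trailing `echo` adds a newline between payloads so the log stays line-delimited.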
Desktop notification on task completion (macOS)
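A sketch using `osascript` (the `command` and `events` field names are assumed; notification text illustrative):

```json
{
  "hooks": [
    {
      "command": ["osascript", "-e", "display notification \"Agent task finished\" with title \"DeepAgents CLI\""],
      "events": ["task.complete"]
    }
  ]
}
```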
Python handler
Write a handler script that reads the JSON payload from stdin:
my_handler.py
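A minimal sketch of a handler (the `thread_id` payload field name is an assumption):

```python
#!/usr/bin/env python3
"""Hypothetical hook handler: reads one JSON event payload from stdin."""
import json
import sys


def handle(payload: dict) -> str:
    """Return a log line for events we care about, or '' to ignore."""
    if payload.get("event") == "task.complete":
        # 'thread_id' is an assumed field name for the session identifier.
        return f"task complete on thread {payload.get('thread_id', 'unknown')}"
    return ""


if __name__ == "__main__":
    line = handle(json.load(sys.stdin))
    if line:
        print(line)
```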
~/.deepagents/hooks.json
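Then register the script (path illustrative; `command` field name assumed):

```json
{
  "hooks": [
    {
      "command": ["python3", "/path/to/my_handler.py"],
      "events": ["task.complete"]
    }
  ]
}
```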
Security considerations
Hooks follow the same trust model as Git hooks or shell aliases: any user who can write to `~/.deepagents/hooks.json` can execute arbitrary commands. This is by design:
- No command injection: Payload data flows only to stdin as JSON, never to command-line arguments. `json.dumps` handles escaping.
- No shell by default: Commands run with `shell=False`, preventing shell injection.
- Malformed config: Invalid JSON or unexpected types produce logged warnings, not security issues.

