The CLI stores its configuration in the `~/.deepagents/` directory. The main config files are:
| File | Format | Purpose |
|---|---|---|
| `config.toml` | TOML | Model defaults, provider settings, constructor params, profile overrides, MCP trust store |
| `hooks.json` | JSON | External tool subscriptions to CLI lifecycle events |
| `.mcp.json` | JSON | MCP server definitions (also auto-discovered from project directories) |
Config file
`~/.deepagents/config.toml` lets you customize model providers, set defaults, and pass extra parameters to model constructors.
Default and recent model
`[models].default` always takes priority over `[models].recent`. The `/model` command only writes to `[models].recent`, so your configured default is never overwritten by mid-session switches. To remove the default, use `/model --default --clear` or delete the `default` key from the config file.
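A minimal sketch of these two keys (the model names here are illustrative):

```toml
[models]
default = "anthropic:claude-sonnet-4-5"  # always wins; set by you
recent = "ollama:llama3"                 # written automatically by /model
```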
Provider configuration
Each provider is a TOML table under `[models.providers]`:
- `models`: A list of model names to show in the interactive `/model` switcher for this provider. For providers that already ship with model profiles, any names you add here appear alongside the bundled ones — useful for newly released models that haven’t been added to the package yet. For arbitrary providers, this list is the only source of models in the switcher. Models listed here bypass the profile-based filtering criteria and always appear in the switcher, which makes this the recommended way to surface models that are excluded because their profile lacks `tool_calling` support or doesn’t exist yet. This key is optional: you can always pass any model name directly to `/model` or `--model` regardless of whether it appears in the switcher; the provider validates the name at request time.
- You can optionally override the environment variable name checked for credentials.
- `base_url`: Optionally override the base URL used by the provider, if supported. Refer to your provider package’s reference docs for more info.
- `params`: Extra keyword arguments forwarded to the model constructor. Flat keys (e.g., `temperature = 0`) apply to every model from this provider. Model-keyed sub-tables (e.g., `[params."gpt-4o"]`) override individual values for that model only; the merge is shallow (model wins on conflict).
- `profile`: (Advanced) Override fields in the model’s runtime profile (e.g., `max_input_tokens`). Flat keys apply to every model from this provider. Model-keyed sub-tables (e.g., `[profile."claude-sonnet-4-5"]`) override individual values for that model only; the merge is shallow (model wins on conflict). These overrides are applied after the model is created, so they take effect for context-limit display, auto-summarization, and any other feature that reads the profile.
- `class_path`: Used for arbitrary model providers. An optional fully-qualified Python class in `module.path:ClassName` format. When set, the CLI imports and instantiates this class directly for provider `<name>`. The class must be a `BaseChatModel` subclass.

Model constructor params
Any provider can use the `params` table to pass extra arguments to the model constructor:
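For example (a sketch using the `temperature` and `num_ctx` values mentioned below; the values are illustrative):

```toml
[models.providers.ollama.params]
temperature = 0
num_ctx = 8192
```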
Per-model overrides
If a specific model needs different params, add a model-keyed sub-table under `params` to override individual values without duplicating the entire provider config:
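A sketch with provider-level defaults plus an override for one model:

```toml
[models.providers.ollama.params]
temperature = 0
num_ctx = 8192

[models.providers.ollama.params."qwen3:4b"]
temperature = 0.5
num_ctx = 4000
```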
- `ollama:qwen3:4b` gets `{temperature: 0.5, num_ctx: 4000}` — model overrides win.
- `ollama:llama3` gets `{temperature: 0, num_ctx: 8192}` — no override, provider-level params only.
CLI overrides with --model-params
For one-off adjustments without editing the config file, pass a JSON object via `--model-params` at launch or mid-session with the `/model` command:
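For example, at launch (assuming the CLI entry point is `deepagents`; the JSON keys are illustrative constructor arguments):

```shell
deepagents --model-params '{"temperature": 0, "num_ctx": 8192}'
```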
Inside the TUI
--model-params cannot be combined with --default.
Profile overrides
(Advanced) Override fields in the model’s runtime profile to change how the CLI interprets model capabilities. The most common use case is lowering `max_input_tokens` to trigger auto-summarization earlier — useful for testing or for constraining context usage:
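A sketch, assuming an `anthropic` provider table (the value is illustrative):

```toml
[models.providers.anthropic.profile]
max_input_tokens = 50000
```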
Model-keyed sub-tables work the same way as with `params` — the model-level value wins on conflict:
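For example (the provider name and values are illustrative):

```toml
[models.providers.anthropic.profile]
max_input_tokens = 100000

[models.providers.anthropic.profile."claude-sonnet-4-5"]
max_input_tokens = 50000   # model-level value wins for this model
```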
CLI profile overrides with --profile-override
(Advanced) To override model profile fields at runtime without editing the config file, pass a JSON object via `--profile-override`:
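For example (assuming the CLI entry point is `deepagents`; the value is illustrative):

```shell
deepagents --profile-override '{"max_input_tokens": 50000}'
```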
Mid-session, the `/model` command also accepts `--profile-override`.
Custom base URL
Some provider packages accept a `base_url` to override the default endpoint. For example, `langchain-ollama` defaults to `http://localhost:11434` via the underlying `ollama` client. To point it elsewhere, set `base_url` in your configuration:
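For example (the host below is hypothetical):

```toml
[models.providers.ollama]
base_url = "http://gpu-box.local:11434"
```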
Compatible APIs
For providers that expose APIs that are wire-compatible with OpenAI or Anthropic, you can use the existing `langchain-openai` or `langchain-anthropic` packages by pointing `base_url` at the provider’s endpoint:
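One possible shape, assuming the `openai` provider key routes through `langchain-openai` and using a hypothetical endpoint:

```toml
[models.providers.openai]
base_url = "https://api.example.com/v1"
```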
Any features added on top of the official spec by the provider will not be captured. If the provider offers a dedicated LangChain integration package, prefer that instead.
Adding models to the interactive switcher
Some providers (e.g. `langchain-ollama`) don’t bundle model profile data (see the Provider reference for a full listing). When this is the case, the interactive `/model` switcher won’t list models for that provider. You can fill the gap by defining a `models` list in your config file for the provider:
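For example (the model names are illustrative):

```toml
[models.providers.ollama]
models = ["llama3", "qwen3:4b"]
```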
The `/model` switcher will now include an Ollama section with these models listed.
This is entirely optional. You can always switch to any model by specifying its full name directly:
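For example, at launch (assuming the CLI entry point is `deepagents`):

```shell
deepagents --model ollama:qwen3:4b
```

Inside the TUI, `/model ollama:qwen3:4b` does the same.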
Arbitrary providers
You can use any LangChain `BaseChatModel` subclass by setting `class_path`. The CLI will import and instantiate it directly:
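A sketch (the package, module, and class names are hypothetical):

```toml
[models.providers.my_custom]
class_path = "my_package.chat:MyChatModel"
models = ["my-model-v1"]   # optional: surface it in the /model switcher
```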
The class must be importable from the environment where `deepagents-cli` runs.
When you select `my_custom:my-model-v1` (via `/model` or `--model`), the model name (`my-model-v1`) is passed as the `model` kwarg:
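Conceptually (illustrative only; the package and class names are hypothetical):

```python
# What the CLI effectively does for my_custom:my-model-v1
from my_package.chat import MyChatModel

model = MyChatModel(model="my-model-v1")  # plus any [params] entries as kwargs
```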
Packages can also bundle their own model profiles by exposing a `_PROFILES` dict in `<package>.data._profiles` in lieu of defining them under the `models` key. See LangChain model profiles for more info.
Hooks
Hooks let external programs react to CLI lifecycle events. Configure commands in `~/.deepagents/hooks.json` and the CLI pipes a JSON payload to each matching command’s stdin whenever an event fires.
Hooks run fire-and-forget in a background thread — they never block the CLI and failures are logged without interrupting your session.
Setup
Create `~/.deepagents/hooks.json`:
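A minimal example that appends every event payload to a log file:

```json
{
  "hooks": [
    {
      "command": ["bash", "-c", "cat >> ~/deepagents-events.log"]
    }
  ]
}
```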
Every event the CLI emits will now be appended to `~/deepagents-events.log`.
Hook configuration
The config file contains a single `hooks` array. Each entry has:
| Field | Type | Required | Description |
|---|---|---|---|
| `command` | list[str] | Yes | Command and arguments to run (no shell expansion — use `["bash", "-c", "..."]` if needed) |
| `events` | list[str] | No | Event names to subscribe to. Omit or leave empty to receive all events |
A hook that omits the `events` filter receives every event the CLI emits.
Payload format
Each hook command receives a JSON object on stdin with an `"event"` key plus event-specific fields:
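For example, a `session.start` payload might look like (the thread id is illustrative):

```json
{"event": "session.start", "thread_id": "a1b2c3d4"}
```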
Events reference
session.start
Fired when an agent session begins (both interactive and non-interactive modes).
| Field | Type | Description |
|---|---|---|
| `thread_id` | string | The session thread identifier |
session.end
Fired when a session exits.
| Field | Type | Description |
|---|---|---|
| `thread_id` | string | The session thread identifier |
user.prompt
Fired in interactive mode when the user submits a chat message.
No additional fields.
input.required
Fired when the agent requires human input (human-in-the-loop interrupt).
No additional fields.
permission.request
Fired before the approval dialog when one or more tool calls need user permission.
| Field | Type | Description |
|---|---|---|
| `tool_names` | list[str] | Names of the tools requesting approval |
tool.error
Fired when a tool call returns an error.
| Field | Type | Description |
|---|---|---|
| `tool_names` | list[str] | Names of the tool(s) that errored |
task.complete
Fired when the agent finishes its current task (the streaming loop ends without further interrupts).
| Field | Type | Description |
|---|---|---|
| `thread_id` | string | The session thread identifier |
context.compact
Fired before the CLI compacts (summarizes) the conversation context.
No additional fields.
Execution model
- Background thread: Hook subprocesses run in a thread via `asyncio.to_thread` so the main event loop is never blocked.
- Concurrent dispatch: When multiple hooks match an event, they run concurrently in a thread pool.
- 5-second timeout: Each command has a 5-second timeout. Commands that exceed this are killed.
- Fire-and-forget: Errors are caught per-hook and logged at debug/warning level. A failing hook never crashes or stalls the CLI.
- Lazy loading: The config file is read once on the first event dispatch and cached for the rest of the session.
- No shell expansion: Commands are executed directly (not through a shell). Wrap in `["bash", "-c", "..."]` if you need shell features like pipes or variable expansion.
Hook examples
Log all events to a file
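This reuses the configuration from the Setup section; each payload is appended as a line of JSON:

```json
{
  "hooks": [
    {
      "command": ["bash", "-c", "cat >> ~/deepagents-events.log"]
    }
  ]
}
```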
Desktop notification on task completion (macOS)
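A sketch using `osascript`; the notification text is static because the `events` filter already narrows delivery to `task.complete`, so the payload on stdin is simply ignored:

```json
{
  "hooks": [
    {
      "command": ["osascript", "-e", "display notification \"Task finished\" with title \"deepagents\""],
      "events": ["task.complete"]
    }
  ]
}
```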
Python handler
Write a handler script that reads the JSON payload from stdin:
my_handler.py
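A sketch of such a handler (the log file path is hypothetical):

```python
#!/usr/bin/env python3
"""Minimal hook handler: read one JSON event payload from stdin and
append a one-line summary to a log file."""
import json
import sys
from pathlib import Path

LOG_PATH = Path.home() / "deepagents-hooks.log"  # hypothetical location


def format_event(payload: dict) -> str:
    """Render an event payload as a single log line."""
    event = payload.get("event", "unknown")
    extras = {k: payload[k] for k in sorted(payload) if k != "event"}
    return f"{event} {json.dumps(extras)}" if extras else event


if __name__ == "__main__":
    raw = sys.stdin.read()  # the CLI pipes the JSON payload to stdin
    if raw.strip():         # ignore empty input (e.g. manual runs)
        with LOG_PATH.open("a") as fh:
            fh.write(format_event(json.loads(raw)) + "\n")
```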
~/.deepagents/hooks.json
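Then point a hook at the handler script (the path is a placeholder):

```json
{
  "hooks": [
    {
      "command": ["python3", "/path/to/my_handler.py"]
    }
  ]
}
```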
Security considerations
Hooks follow the same trust model as Git hooks or shell aliases — any user who can write to `~/.deepagents/hooks.json` can execute arbitrary commands. This is by design:
- No command injection: Payload data flows only to stdin as JSON, never to command-line arguments. `json.dumps` handles escaping.
- No shell by default: Commands run with `shell=False`, preventing shell injection.
- Malformed config: Invalid JSON or unexpected types produce logged warnings, not security issues.

