Quick start
The CLI integrates automatically with the following model providers; no extra configuration is needed beyond installing the relevant provider package.

1. Install provider packages: Each model provider requires installing its corresponding LangChain integration package. These are available as optional extras when installing the CLI (see the install sketch after this list).
2. Set your API key: Most providers require an API key. Set the appropriate environment variable listed in the table below. Refer to each integration package's docs for details.
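For example, assuming the Anthropic provider (the extra name is an assumption and may differ from your install docs; you can also install the integration package directly):

```bash
# Assumed extra name; extras typically mirror the provider name
pip install "deepagents-cli[anthropic]"

# Or install the LangChain integration package directly
pip install langchain-anthropic
```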
Provider reference
Using a provider not listed here? See Arbitrary providers: any LangChain-compatible provider can be used in the CLI with some additional setup.

| Provider | Package | API key env var | Model profiles |
|---|---|---|---|
| OpenAI | langchain-openai | OPENAI_API_KEY | ✅ |
| Azure OpenAI | langchain-openai | AZURE_OPENAI_API_KEY | ✅ |
| Anthropic | langchain-anthropic | ANTHROPIC_API_KEY | ✅ |
| Google Gemini API | langchain-google-genai | GOOGLE_API_KEY | ✅ |
| Google Vertex AI | langchain-google-vertexai | GOOGLE_CLOUD_PROJECT | ✅ |
| AWS Bedrock | langchain-aws | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY | ✅ |
| AWS Bedrock Converse | langchain-aws | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY | ✅ |
| Hugging Face | langchain-huggingface | HUGGINGFACEHUB_API_TOKEN | ✅ |
| Ollama | langchain-ollama | Optional | ❌ |
| Groq | langchain-groq | GROQ_API_KEY | ✅ |
| Cohere | langchain-cohere | COHERE_API_KEY | ❌ |
| Fireworks | langchain-fireworks | FIREWORKS_API_KEY | ✅ |
| Together | langchain-together | TOGETHER_API_KEY | ❌ |
| Mistral AI | langchain-mistralai | MISTRAL_API_KEY | ✅ |
| DeepSeek | langchain-deepseek | DEEPSEEK_API_KEY | ✅ |
| IBM (watsonx.ai) | langchain-ibm | WATSONX_APIKEY | ❌ |
| Nvidia | langchain-nvidia-ai-endpoints | NVIDIA_API_KEY | ❌ |
| xAI | langchain-xai | XAI_API_KEY | ✅ |
| Perplexity | langchain-perplexity | PPLX_API_KEY | ✅ |
Switching models
To switch models in the CLI, use one of the following:

1. Use the interactive model switcher with the `/model` command. This displays a hardcoded list of known model profiles sourced from each LangChain provider package. Note that these profiles are not an exhaustive list of available models; if the model you want isn't shown, use option 2 instead (useful for newly released models that haven't been added to the profiles yet).
2. Specify a model name directly as an argument, e.g. `/model openai:gpt-4o`. You can use any model supported by the chosen provider, regardless of whether it appears in the list from option 1. The model name will be passed to the API request.
3. Specify the model at launch via the `--model` flag (see the launch example below).
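Assuming the CLI is started with a `deepagents` command (the executable name and model string below are illustrative):

```bash
# Launch with an explicit model for this session
deepagents --model openai:gpt-4o
```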
Setting a default model
You can set a persistent default model that will be used for all future CLI launches:

- Via model selector: Open `/model`, navigate to the desired model, and press `Ctrl+S` to pin it as the default. Pressing `Ctrl+S` again on the current default clears it.
- Via command: `/model --default provider:model` (e.g., `/model --default anthropic:claude-opus-4-6`)
- Via config file: Set `[models].default` in `~/.deepagents/config.toml` (see Config file).
- From the shell: Edit `~/.deepagents/config.toml` directly and set `[models].default` (see the sketch after this list).

To clear the default:

- From the shell: Delete the `default` key from `~/.deepagents/config.toml`.
- Via command: `/model --default --clear`
- Via model selector: Press `Ctrl+S` on the currently pinned default model.
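For the config-file route, a minimal sketch of `~/.deepagents/config.toml` (the model string is just an example):

```toml
[models]
default = "anthropic:claude-opus-4-6"  # any provider:model string works here
```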
Model resolution order
When the CLI launches, it resolves which model to use in the following order:

1. `--model` flag: always wins when provided.
2. `[models].default` in `~/.deepagents/config.toml`: the user's intentional long-term preference.
3. `[models].recent` in `~/.deepagents/config.toml`: the last model switched to via `/model`. Written automatically; never overwrites `[models].default`.
4. Environment auto-detection: falls back to the first provider with a valid API key, checked in order: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GOOGLE_CLOUD_PROJECT` (Vertex AI).
Config file
The Deep Agents CLI supports extending and modifying individual model and provider configuration via `~/.deepagents/config.toml`.
Each provider is a TOML table under the `[models.providers]` namespace. The following keys are supported:

- `models`: A list of model names to show in the interactive `/model` switcher for this provider. For providers that already ship with model profiles, any names you add here appear alongside the bundled ones, which is useful for newly released models that haven't been added to the package yet. For arbitrary providers, this list is the only source of models in the switcher. This key is optional: you can always pass any model name directly to `/model` or `--model` regardless of whether it appears in the switcher; the provider validates the name at request time.
- API key environment variable: Optionally override the environment variable name checked for credentials.
- `base_url`: Optionally override the base URL used by the provider, if supported. Refer to your provider package's reference docs for more info.
- `params`: Extra keyword arguments forwarded to the model constructor. Flat keys (e.g., `temperature = 0`) apply to every model from this provider. Model-keyed sub-tables (e.g., `[params."gpt-4o"]`) override individual values for that model only; the merge is shallow (the model value wins on conflict).
- `class_path`: Used for arbitrary model providers. An optional fully-qualified Python class in `module.path:ClassName` format. When set, the CLI imports and instantiates this class directly for provider `<name>`. The class must be a `BaseChatModel` subclass.

The default model is also stored in `~/.deepagents/config.toml`; it can be set either by editing the file directly, using `/model --default`, or selecting a default in the interactive model switcher. `[models].default` always takes priority over `[models].recent`. The `/model` command only writes to `[models].recent`, so your configured default is never overwritten by mid-session switches. To remove the default, use `/model --default --clear` or delete the `default` key from the config file.
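Putting these keys together, a provider table might look like the following sketch; the provider name and values are illustrative:

```toml
[models.providers.ollama]
models = ["llama3", "qwen3:4b"]        # shown in the /model switcher
base_url = "http://localhost:11434"    # override the endpoint, if the package supports it

[models.providers.ollama.params]
temperature = 0                        # forwarded to the model constructor for every model
```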
Examples
Model constructor params
Any provider can use the `params` table to pass extra arguments to the model constructor:
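The sketch below assumes an Ollama provider; the values are illustrative:

```toml
# Applies to every model served by this provider
[models.providers.ollama.params]
temperature = 0
num_ctx = 8192
```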
Per-model overrides
If a specific model needs different params, add a model-keyed sub-table under `params` to override individual values without duplicating the entire provider config:
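The following sketch reconstructs a configuration consistent with the results described below:

```toml
[models.providers.ollama.params]
temperature = 0
num_ctx = 8192

# Overrides applied only to the qwen3:4b model; the merge is shallow
[models.providers.ollama.params."qwen3:4b"]
temperature = 0.5
num_ctx = 4000
```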
- `ollama:qwen3:4b` gets `{temperature: 0.5, num_ctx: 4000}`: the model-level overrides win.
- `ollama:llama3` gets `{temperature: 0, num_ctx: 8192}`: no override, so only the provider-level params apply.
CLI overrides with --model-params
For one-off adjustments without editing the config file, pass a JSON object via `--model-params`:
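Assuming a `deepagents` launch command (the executable name and parameter value are illustrative):

```bash
deepagents --model ollama:llama3 --model-params '{"temperature": 0.2}'
```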
Specifying custom base_url
Some provider packages accept a `base_url` to override the default endpoint. For example, `langchain-ollama` defaults to `http://localhost:11434` via the underlying `ollama` client. To point it elsewhere, set `base_url` in your configuration:
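A sketch pointing Ollama at a remote host; the hostname is a placeholder:

```toml
[models.providers.ollama]
base_url = "http://my-ollama-host:11434"
```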
Compatible APIs
Many LLM providers expose APIs that are wire-compatible with OpenAI or Anthropic. You can use these with the existing `langchain-openai` or `langchain-anthropic` packages by pointing `base_url` at the provider's endpoint. Note that any features providers add on top of the spec may not be captured.
For example, to use an OpenAI-compatible provider:
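One possible wiring, reusing the `base_url` key documented above on the built-in OpenAI provider (the endpoint URL is a placeholder):

```toml
[models.providers.openai]
base_url = "https://api.my-compatible-provider.com/v1"
```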
Adding models to the interactive switcher
Some providers (e.g. `langchain-ollama`) don't bundle model profile data (see Provider reference for the full listing). When this is the case, the interactive `/model` switcher won't list models for that provider. You can fill the gap by defining a `models` list in your config file for the provider:
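For example (the model names are illustrative):

```toml
[models.providers.ollama]
models = ["llama3", "qwen3:4b"]
```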
The `/model` switcher will now include an Ollama section with these models listed.
This is entirely optional. You can always switch to any model by specifying its full name directly:
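For instance (the model name is illustrative):

```
/model ollama:qwen3:4b
```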
Arbitrary providers
You can use any LangChain `BaseChatModel` subclass by setting `class_path`. The CLI will import and instantiate the class directly:
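A sketch; the provider name, module path, and class name are placeholders:

```toml
[models.providers.my_custom]
class_path = "my_package.chat_models:MyChatModel"
```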
The class must be importable from the same Python environment that runs deepagents-cli.
When you switch to `my_custom:my-model-v1` (via `/model` or `--model`), the model name (`my-model-v1`) is passed as the `model` kwarg:
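Roughly, the CLI then does the equivalent of the following Python (the class and module are the placeholders from the sketch above):

```python
from my_package.chat_models import MyChatModel

# "my-model-v1" comes from the provider:model string you selected;
# any values from the provider's params table are forwarded as well
model = MyChatModel(model="my-model-v1")
```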
Provider packages can also ship model profiles via a `_PROFILES` dict in `<package>.data._profiles`, in lieu of defining them under the `models` key. See LangChain model profiles for more info.