
# Model providers

> Configure any LangChain-compatible model provider for the Deep Agents CLI

The Deep Agents CLI supports any [chat model provider compatible with LangChain](/oss/javascript/integrations/chat), which means virtually any LLM that supports tool calling can be used. Any service that exposes an OpenAI-compatible or Anthropic-compatible API also works out of the box—see [Compatible APIs](/oss/javascript/deepagents/cli/configuration#compatible-apis).

## Quickstart

The CLI integrates automatically with the [following model providers](#provider-reference): no extra configuration is needed beyond installing the relevant provider package.

1. **Install provider packages**

   Each model provider requires its corresponding LangChain integration package. These are available as optional extras when installing the CLI, which keeps the base installation lightweight:

   ```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
   # Quick install with chosen providers
   # OpenAI, Anthropic, and Gemini are included by default
   DEEPAGENTS_EXTRAS="baseten,groq" curl -LsSf https://langch.in/gh-da-cli | bash

   # Or install directly with uv
   uv tool install 'deepagents-cli[baseten,groq]'

   # Add additional packages at a later date
   uv tool install deepagents-cli --with langchain-ollama

   # All providers
   uv tool install 'deepagents-cli[anthropic,baseten,bedrock,cohere,deepseek,fireworks,google-genai,groq,huggingface,ibm,litellm,mistralai,nvidia,ollama,openai,openrouter,perplexity,vertexai,xai]'
   ```

2. **Set credentials**

   Store API keys in `~/.deepagents/.env` so they're available across all projects, or export them in your shell:

   <Tabs>
     <Tab title="OpenAI">
       <CodeGroup>
         ```bash Add permanently theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
         mkdir -p ~/.deepagents
         echo 'OPENAI_API_KEY=your-api-key' >> ~/.deepagents/.env
         ```

         ```bash Add for current session theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
         export OPENAI_API_KEY="your-api-key"
         ```
       </CodeGroup>
     </Tab>

     <Tab title="Anthropic">
       <CodeGroup>
         ```bash Add permanently theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
         mkdir -p ~/.deepagents
         echo 'ANTHROPIC_API_KEY=your-api-key' >> ~/.deepagents/.env
         ```

         ```bash Add for current session theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
         export ANTHROPIC_API_KEY="your-api-key"
         ```
       </CodeGroup>
     </Tab>

     <Tab title="Google">
       <CodeGroup>
         ```bash Add permanently theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
         mkdir -p ~/.deepagents
         echo 'GOOGLE_API_KEY=your-api-key' >> ~/.deepagents/.env
         ```

         ```bash Add for current session theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
         export GOOGLE_API_KEY="your-api-key"
         ```
       </CodeGroup>
     </Tab>

     <Tab title="Other">
       The CLI works with any LLM that supports tool calling. See [Provider reference](#provider-reference) for the full list of supported providers and their required environment variables.
     </Tab>
   </Tabs>

   To configure model parameters, see [Model parameters](/oss/javascript/deepagents/cli/providers#model-parameters).

   You can also scope credentials to the CLI with the [`DEEPAGENTS_CLI_` prefix](/oss/javascript/deepagents/cli/configuration#deepagents_cli_-prefix).
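
   For example, a CLI-scoped credential might look like this (a sketch assuming the prefix is simply prepended to the standard variable name; the key value is a placeholder):

   ```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
   # Assumed semantics: read by the Deep Agents CLI, ignored by other
   # tools that look for the plain OPENAI_API_KEY
   export DEEPAGENTS_CLI_OPENAI_API_KEY="your-api-key"
   ```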

## Provider reference

The Deep Agents CLI is built in Python; see the [Python provider reference docs](https://docs.langchain.com/oss/python/deepagents/cli/providers#provider-reference) for the full list of supported providers.

### Model routers and proxies

Model routers like [OpenRouter](https://openrouter.ai/) and [LiteLLM](https://docs.litellm.ai/) provide access to models from multiple providers through a single endpoint.

Use the dedicated integration packages for these services:

| Router     | Package                                                                |
| ---------- | ---------------------------------------------------------------------- |
| OpenRouter | [`langchain-openrouter`](/oss/javascript/integrations/chat/openrouter) |

**OpenRouter** is a built-in provider—install the package and use it directly:

```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
uv tool install 'deepagents-cli[openrouter]'
```

**LiteLLM** is also a built-in provider:

```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
uv tool install 'deepagents-cli[litellm]'
```

## Switch models

To switch models in the CLI, use one of the following:

1. **Use the interactive model switcher** with the `/model` command. This displays available models sourced from each installed LangChain provider package's [model profiles](/oss/javascript/langchain/models#model-profiles).

   <Note>
     Not all models appear here. If yours is missing, pass the model name directly (e.g. `/model gpt-5.5`). See [Which models appear in the switcher](#which-models-appear-in-the-switcher) for details.
   </Note>
2. **Specify a model name directly** as an argument, e.g. `/model gpt-5.5`. You can use any model supported by the chosen provider, regardless of whether it appears in the list from option 1. The model name will be passed to the API request.
3. **Specify the model at launch** via `--model`, e.g.

   ```txt theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
   deepagents --model openai:gpt-5.5
   ```

<Accordion title="Model resolution order" icon="list-numbers">
  When the CLI launches, it resolves which model to use in the following order:

  1. **`--model` flag** always wins when provided.
  2. **`[models].default`** in `~/.deepagents/config.toml`—the user's intentional long-term preference.
  3. **`[models].recent`** in `~/.deepagents/config.toml`—the last model switched to via `/model`. Written automatically; never overwrites `[models].default`.
  4. **Environment auto-detection**: falls back to the first available startup credential, checked in order: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GOOGLE_CLOUD_PROJECT` (Vertex AI).

  This startup fallback intentionally checks only those four credentials. Other supported providers (for example, Groq) are still available via `--model`, `/model`, and saved defaults (`[models].default` / `[models].recent`).
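
  As an illustration, the two config keys above might look like this in `~/.deepagents/config.toml` (the model names are placeholders):

  ```toml theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  [models]
  # Intentional long-term preference (step 2 above)
  default = "anthropic:claude-opus-4-7"
  # Written automatically by /model (step 3 above); never overwrites default
  recent = "openai:gpt-5.5"
  ```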
</Accordion>

### Which models appear in the switcher

The `/model` selector dynamically builds its list from installed provider packages. Expand below for the full criteria and troubleshooting.

<Accordion title="How the switcher builds its model list" icon="list-search">
  The interactive `/model` selector builds its list dynamically—it is not a hardcoded list baked into the CLI. A model appears in the switcher when **all** of the following are true:

  1. **The provider package is installed.** Each provider (e.g. `langchain-anthropic`, `langchain-openai`) must be installed alongside `deepagents-cli`—either as an [install extra](/oss/javascript/deepagents/cli/providers#quickstart) (e.g. `uv tool install 'deepagents-cli[ollama]'`) or added later with `uv tool install deepagents-cli --with <package>`. If a package is missing, its entire provider section is absent from the switcher.
  2. **The model has a profile with `tool_calling` enabled.** The CLI requires tool-calling support, so models without `tool_calling: true` in their profile are excluded. This is the most common reason a model is missing from the list. For providers that don't bundle profiles (see the [Provider reference](#provider-reference) table), you can define one in `config.toml`:

     ```toml theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
     [models.providers.ollama.profile."qwen3:4b"]
     tool_calling = true
     max_input_tokens = 32768
     max_output_tokens = 8192
     ```

     This is not strictly required for the model to appear in the switcher — adding it to the [`models` list](/oss/javascript/deepagents/cli/configuration#adding-models-to-the-interactive-switcher) also works and is simpler. A profile is useful when you want the CLI to know the model's context window and capabilities for features like auto-summarization. See [Profile overrides](/oss/javascript/deepagents/cli/configuration#profile-overrides-advanced) for all overridable fields.
  3. **The model accepts and produces text.** Models whose profile explicitly sets `text_inputs` or `text_outputs` to `false` (e.g. embedding or image-generation models) are excluded.

  Models defined in `config.toml` under [`[models.providers.<name>].models`](/oss/javascript/deepagents/cli/configuration#adding-models-to-the-interactive-switcher) bypass the profile filter—they always appear in the switcher regardless of profile metadata. This is the recommended way to add models that are missing from the list.
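
  As a sketch, assuming the `models` key takes a simple list of model names (the entry here is a placeholder), such an override could look like:

  ```toml theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  # Always show this model in the switcher, regardless of profile metadata
  [models.providers.ollama]
  models = ["qwen3:4b"]
  ```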

  <Tip>
    Credential status does **not** affect whether a model is listed. The switcher shows all qualifying models and displays a credential indicator next to each provider header: a checkmark for confirmed credentials, a warning for missing credentials, or a question mark when credential status is unknown. You can still select a model with missing credentials—the provider will report an authentication error at request time.
  </Tip>

  #### Troubleshooting missing models

  | Symptom                                   | Likely cause                                                 | Fix                                                                                                               |
  | ----------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------- |
  | Entire provider missing from switcher     | Provider package not installed                               | Install the package (e.g. `uv tool install deepagents-cli --with langchain-groq`)                                 |
  | Provider shown but specific model missing | Model profile has `tool_calling: false` or no profile exists | Add the model to `[models.providers.<name>].models` in `config.toml`, or use `/model <provider>:<model>` directly |
  | Provider shows ⚠ "missing credentials"    | API key env var not set                                      | Set the credential env var from the [Provider reference](#provider-reference) table                               |
  | Provider shows ? "credentials unknown"    | Provider uses non-standard auth that the CLI can't verify    | Credentials may still work—try switching to the model. If auth fails, check the provider's docs                   |
</Accordion>

### Set a default model

You can set a persistent default model that will be used for all future CLI launches:

* **Via model selector:** Open `/model`, navigate to the desired model, and press `Ctrl+S` to pin it as the default. Pressing `Ctrl+S` again on the current default clears it.
* **Via command:** `/model --default provider:model` (e.g., `/model --default anthropic:claude-opus-4-7`)
* **Via config file:** Set `[models].default` in `~/.deepagents/config.toml` (see [Configuration](/oss/javascript/deepagents/cli/configuration)).
* **From the shell:**

  ```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  deepagents --default-model anthropic:claude-opus-4-7
  ```

To view the current default:

```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
deepagents --default-model
```

To clear the default:

* **From the shell:**

  ```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  deepagents --clear-default-model
  ```

* **Via command:** `/model --default --clear`

* **Via model selector:** Press `Ctrl+S` on the currently pinned default model.

Without a default, the CLI falls back to the most recently used model.

### Model parameters

Pass extra constructor kwargs to the model—sampling controls, reasoning/thinking budgets, context window sizes, request timeouts, and anything else the underlying chat-model class accepts. You can set them in three places, listed in priority order (highest first):

1. **One-off at launch with `--model-params`.** JSON string, session-only:

   ```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
   # OpenAI reasoning effort
   deepagents --model openai:gpt-5.5 --model-params '{"reasoning": {"effort": "high"}}'

   # Anthropic extended thinking
   deepagents --model anthropic:claude-opus-4-7 --model-params '{"thinking": {"type": "enabled", "budget_tokens": 10000}, "max_tokens": 16000}'
   ```

2. **Mid-session via `/model --model-params`.** Same JSON syntax—swaps params (and optionally the model) without restarting:

   ```txt theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
   /model --model-params '{"temperature": 0.7}' anthropic:claude-opus-4-7
   /model --model-params '{"num_ctx": 16384}'           # opens selector, applies params to choice
   ```

3. **Persistent in `config.toml`.** Provider-level defaults (with optional per-model sub-tables) that apply on every launch:

   ```toml theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
   [models.providers.anthropic.params]
   thinking = { type = "enabled", budget_tokens = 10000 }
   max_tokens = 16000

   [models.providers.openai.params]
   reasoning = { effort = "high", summary = "auto" }
   output_version = "responses/v1"

   [models.providers.ollama.params]
   num_ctx = 16384
   temperature = 0

   # Per-model override—wins over provider-level keys
   [models.providers.ollama.params."qwen3:4b"]
   temperature = 0.5
   ```

CLI flags override config-file `params` and are session-only (mid-session changes are not persisted). Per-model sub-tables in `config.toml` override provider-level keys (shallow merge—see [Model constructor params](/oss/javascript/deepagents/cli/configuration#model-constructor-params) for full semantics). `--model-params` cannot be combined with `--default`.

<Tip>
  Any kwarg accepted by the underlying chat-model constructor is valid. Refer to the provider's reference docs for the full list—e.g. [`ChatAnthropic`](https://reference.langchain.com/python/langchain-anthropic/langchain_anthropic/chat_models/ChatAnthropic), [`ChatOpenAI`](https://reference.langchain.com/python/langchain-openai/langchain_openai/chat_models/base/ChatOpenAI), [`ChatOllama`](https://reference.langchain.com/python/langchain-ollama/langchain_ollama/chat_models/ChatOllama). Unknown kwargs are forwarded to the upstream API request, so newly released parameters work without a CLI update.
</Tip>

<Note>
  Don't put credentials (`api_key`) in `params`—use [`api_key_env`](/oss/javascript/deepagents/cli/configuration#provider-configuration) to point at an environment variable instead.
</Note>

To override fields on the model's runtime *profile* (`max_input_tokens`, `tool_calling`, capability flags)—distinct from constructor params—see [Profile overrides](/oss/javascript/deepagents/cli/configuration#profile-overrides-advanced).

## Advanced configuration

For detailed configuration of provider params, profile overrides, custom base URLs, compatible APIs, arbitrary providers, and lifecycle hooks, see [Configuration](/oss/javascript/deepagents/cli/configuration).

