One API for any model
Every LangChain chat model, regardless of provider, implements the same interface. This means you can:
- Swap providers without rewriting application logic
- Compare models side-by-side with identical code
- Use advanced features like tool calling, structured output, and streaming across all providers
What is a provider?
A provider is a company or platform that hosts AI models and exposes them through an API. Examples include OpenAI, Anthropic, Google, and AWS Bedrock. In LangChain, each provider has a dedicated integration package (for example, langchain-openai or langchain-anthropic) that implements the standard LangChain interface for that provider’s models. This means:
- Dedicated packages for each provider with proper versioning and dependency management
- Provider-specific features are available when you need them (for example OpenAI’s Responses API, Anthropic’s extended thinking)
- Automatic API key handling through environment variables
Find model names
Each provider supports specific model names that you pass when initializing a chat model. There are two ways to specify a model: instantiate the provider package's chat model class directly, or use init_chat_model with the provider:model format. With init_chat_model, LangChain automatically resolves the provider and loads the correct integration package. You can also omit the provider prefix if the model name is unambiguous (e.g., "gpt-5.4" resolves to OpenAI).
To find available model names for a provider, refer to the provider’s own documentation.
Use new models immediately
Because LangChain provider packages pass model names directly to the provider’s API, you can use new models the moment a provider releases them, with no LangChain update required. Simply pass the new model name when initializing the model.

Model capabilities
Different providers and models support different features. For a list of the chat model integrations and their capabilities, see the chat models integrations page.

Routers and proxies
Routers (also called proxies or gateways) give you access to models from multiple providers through a single API and credential. They can simplify billing, let you switch between models without changing integrations, and offer features like automatic fallbacks and load balancing.

| Provider | Integration | Description |
|---|---|---|
| OpenRouter | ChatOpenRouter | Unified access to models from OpenAI, Anthropic, Google, Meta, and more |
| LiteLLM | ChatLiteLLM | Unified interface for 100+ providers with routing, fallbacks, and spend tracking |
With a router, you can:
- Access many providers with a single API key and billing account
- Switch models dynamically without managing multiple provider credentials
- Use fallback models that automatically retry with a different model if the primary one fails
OpenAI-compatible endpoints
Many providers offer endpoints compatible with OpenAI’s Chat Completions API. You can connect to these using ChatOpenAI with a custom base_url:
Next steps
Models guide
Learn how to use models: invoke, stream, batch, tool calling, and more.
Chat model integrations
Browse all chat model integrations and their capabilities.
All providers
See the full list of provider packages and integrations.
Agents
Build agents that use models as their reasoning engine.

