This will help you get started with OpenRouter chat models. OpenRouter is a unified API that provides access to models from multiple providers (OpenAI, Anthropic, Google, Meta, and more) through a single endpoint. For a full list of available models, visit the OpenRouter models page.
## Overview
### Integration details
| Class | Package | Serializable | PY support | Downloads | Version |
|---|---|---|---|---|---|
| ChatOpenRouter | @langchain/openrouter | ✅ | ✅ | | |
### Model features
| Tool calling | Structured output | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Setup
To access models via OpenRouter you'll need to create an OpenRouter account, get an API key, and install the `@langchain/openrouter` integration package.
### Credentials
Head to the OpenRouter keys page to sign up and generate an API key. Once you've done this, set the `OPENROUTER_API_KEY` environment variable:
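For example, in a POSIX shell:

```shell
export OPENROUTER_API_KEY="your-api-key"
```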
### Installation
The LangChain OpenRouter integration lives in the `@langchain/openrouter` package:
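Install it alongside `@langchain/core` (the peer dependency LangChain integration packages share):

```shell
npm install @langchain/openrouter @langchain/core
```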
## Instantiation
Now we can instantiate our model object and generate chat completions:
## Invocation
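A minimal sketch of instantiation and a first call, assuming `ChatOpenRouter` follows the standard LangChain chat-model constructor shape (`model`, `temperature`, with the API key read from `OPENROUTER_API_KEY` by default); the model ID shown is just an example from the OpenRouter catalog:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({
  model: "openai/gpt-4o-mini", // any model ID from the OpenRouter models page
  temperature: 0,
  // apiKey: "...", // defaults to process.env.OPENROUTER_API_KEY
});

const aiMsg = await llm.invoke([
  ["system", "You are a helpful assistant that translates English to French."],
  ["human", "I love programming."],
]);
console.log(aiMsg.content);
```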
## Streaming
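Token-level streaming follows the standard LangChain `.stream()` interface, which yields message chunks as they arrive (a sketch, assuming the usual chat-model API):

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({ model: "openai/gpt-4o-mini" });

// .stream() returns an async iterable of AIMessageChunk objects.
const stream = await llm.stream("Write a haiku about request routing.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```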
## Tool calling
OpenRouter uses the OpenAI-compatible tool calling format. You can describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool.
### Bind tools
With `ChatOpenRouter.bindTools`, you can pass in Zod schemas, LangChain tools, or raw function definitions as tools to the model. Under the hood these are converted to OpenAI tool schemas and passed in every model invocation.
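A sketch using a LangChain tool defined with a Zod schema (the tool name and behavior here are hypothetical):

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A LangChain tool; its Zod schema is converted to an OpenAI-style
// tool definition under the hood.
const getWeather = tool(
  async ({ city }) => `It is sunny in ${city}.`,
  {
    name: "get_weather",
    description: "Get the current weather for a city.",
    schema: z.object({ city: z.string().describe("The city name") }),
  }
);

const llm = new ChatOpenRouter({ model: "openai/gpt-4o-mini" });
const llmWithTools = llm.bindTools([getWeather]);

const result = await llmWithTools.invoke("What's the weather in Paris?");
console.log(result.tool_calls);
// e.g. [{ name: "get_weather", args: { city: "Paris" }, ... }]
```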
### Strict mode
Pass `strict: true` to guarantee that model output exactly matches the JSON Schema provided in the tool definition:
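A sketch, assuming `strict` is accepted in the `bindTools` options object as in other OpenAI-compatible LangChain integrations:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical example tool.
const add = tool(async ({ a, b }) => String(a + b), {
  name: "add",
  description: "Add two numbers.",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const llm = new ChatOpenRouter({ model: "openai/gpt-4o-mini" });

// strict: true asks the provider to constrain tool arguments to the
// exact JSON Schema derived from the Zod schema above.
const llmStrict = llm.bindTools([add], { strict: true });
```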
## Structured output
ChatOpenRouter supports structured output via the .withStructuredOutput() method. The extraction strategy is chosen automatically based on model capabilities:
- `jsonSchema`: native JSON Schema response format (used when the model supports it)
- `functionCalling`: wraps the schema as a tool call (default fallback)
- `jsonMode`: asks the model to respond in JSON without strict schema constraints
When multi-model routing is active (a `models` list or `route: "fallback"`), the method always falls back to `functionCalling` because the actual backend model's capabilities are unknown at request time. Pass `strict: true` with the `jsonSchema` and `functionCalling` methods to enforce exact schema adherence:
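A sketch of `.withStructuredOutput()` with an explicit strategy; the `method` and `strict` option names mirror the strategies described above, and the schema is a hypothetical example:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";
import { z } from "zod";

const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline"),
});

const llm = new ChatOpenRouter({ model: "openai/gpt-4o-mini" });

// Omit `method` to let the integration pick a strategy from the
// model's capabilities; force one here for illustration.
const structuredLlm = llm.withStructuredOutput(Joke, {
  method: "jsonSchema",
  strict: true,
});

const joke = await structuredLlm.invoke("Tell me a joke about cats.");
// `joke` is a parsed object: { setup: string, punchline: string }
```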
## Multimodal inputs
OpenRouter supports multimodal inputs for models that accept them. Not all models support all modalities; check the OpenRouter models page for model-specific support.
### Image input
Provide image inputs along with text using a list content format.
## Token usage metadata
After an invocation, token usage information is available on the `usage_metadata` attribute of the response:
- `output_token_details.reasoning`: tokens used for internal chain-of-thought reasoning
- `input_token_details.cache_read`: input tokens served from the prompt cache
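A sketch of reading the fields above from a response (the printed counts are illustrative, not guaranteed values):

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({ model: "openai/gpt-4o-mini" });
const msg = await llm.invoke("Hello!");

// Standard LangChain usage metadata on the AIMessage.
console.log(msg.usage_metadata);
// e.g. { input_tokens: 9, output_tokens: 12, total_tokens: 21, ... }
```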
## Provider routing
Many models on OpenRouter are served by multiple providers. The `provider` parameter gives you control over which providers handle your requests and how they're selected.
### Order and filter providers
Use `order` to set a preferred provider sequence. OpenRouter tries each provider in order and falls back to the next if one is unavailable. To restrict routing to a fixed set of providers, use `only`. To exclude certain providers, use `ignore`:
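A sketch, assuming the integration forwards OpenRouter's provider routing object via a `provider` constructor field (the field names follow the OpenRouter API; the provider slugs are illustrative):

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({
  model: "meta-llama/llama-3.1-70b-instruct",
  provider: {
    order: ["together", "fireworks"], // try Together first, then Fireworks
    ignore: ["deepinfra"],            // never route to DeepInfra
  },
});
```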
### Sort by cost, speed, or latency
By default, OpenRouter load-balances across providers with a preference for lower cost. Use `sort` to change the priority:
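A sketch, again assuming a `provider` constructor field; the `sort` values follow the OpenRouter API:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({
  model: "meta-llama/llama-3.1-70b-instruct",
  // "price" prefers cheapest, "throughput" fastest generation,
  // "latency" lowest time-to-first-token.
  provider: { sort: "throughput" },
});
```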
### Data collection policy
If your use case requires that providers do not store or train on your data, set `data_collection` to `"deny"`:
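A sketch under the same `provider` field assumption:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({
  model: "openai/gpt-4o-mini",
  // Only route to providers whose policy is not to store or train on inputs.
  provider: { data_collection: "deny" },
});
```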
### Filter by quantization
For open-weight models, you can restrict routing to specific precision levels:
### Combine options
Provider options can be composed together:
## Multi-model routing
OpenRouter supports routing requests across multiple models. Pass a `models` array and an optional `route` strategy:
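A sketch, assuming `models` and `route` constructor fields that mirror the OpenRouter API (the primary model is tried first, then the fallbacks in order):

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({
  model: "openai/gpt-4o-mini",
  models: ["anthropic/claude-3.5-sonnet", "google/gemini-flash-1.5"],
  route: "fallback",
});
```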
## Plugins
OpenRouter supports plugins that extend model capabilities. Pass plugin configurations via the `plugins` parameter:
Available plugins include `web` (web search), `file-parser` (PDF parsing), `moderation`, `auto-router`, and `response-healing`.
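A sketch, assuming a `plugins` constructor field that forwards OpenRouter plugin configs; the `web` plugin options shown follow the OpenRouter API:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const llm = new ChatOpenRouter({
  model: "openai/gpt-4o-mini",
  // Enable web search grounding, capped at 3 results per request.
  plugins: [{ id: "web", max_results: 3 }],
});
```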
## App attribution
OpenRouter supports app attribution via HTTP headers. Set these through constructor params:
## API reference
For detailed documentation of all `ChatOpenRouter` features and configurations, head to the ChatOpenRouter API reference.
For more information about OpenRouter’s platform, models, and features, see the OpenRouter documentation.

