Chat models are language models that take a sequence of messages as input and return messages as output.
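For example, here is a minimal sketch of message-based invocation (it uses the OpenAI integration, whose installation is covered below; the message classes come from @langchain/core, a dependency of every integration package):

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// The input is a sequence of messages; the output is an AIMessage.
const response = await model.invoke([
  new SystemMessage("You are a concise assistant."),
  new HumanMessage("What is the capital of France?"),
]);

console.log(response.content); // e.g. "Paris."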

Install and use

OpenAI
Install:
npm i @langchain/openai
Add environment variables:
OPENAI_API_KEY=your-api-key
Instantiate the model:
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
await model.invoke("Hello, world!");
Anthropic
Install:
npm i @langchain/anthropic
Add environment variables:
ANTHROPIC_API_KEY=your-api-key
Instantiate the model:
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
model: "claude-3-sonnet-20240620",
temperature: 0
});
await model.invoke("Hello, world!")
Google Gemini
Install:
npm i @langchain/google-genai
Add environment variables:
GOOGLE_API_KEY=your-api-key
Instantiate the model:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-2.5-flash-lite",
  temperature: 0
});
await model.invoke("Hello, world!");
Google VertexAI
Install:
npm i @langchain/google-vertexai
Add environment variables:
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model:
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
await model.invoke("Hello, world!")
MistralAI
Install:
npm i @langchain/mistralai
Add environment variables:
MISTRAL_API_KEY=your-api-key
Instantiate the model:
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
await model.invoke("Hello, world!")
FireworksAI
Install:
npm i @langchain/community
Add environment variables:
FIREWORKS_API_KEY=your-api-key
Instantiate the model:
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
await model.invoke("Hello, world!")
Groq
Install:
npm i @langchain/groq
Add environment variables:
GROQ_API_KEY=your-api-key
Instantiate the model:
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
model: "llama-3.3-70b-versatile",
temperature: 0
});
await model.invoke("Hello, world!")
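Whichever provider you pick, the instantiated model exposes the same runnable interface, so the calls below work the same way across the integrations above (a sketch written against ChatOpenAI):

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// invoke(): one input in, one AIMessage out.
const message = await model.invoke("Tell me a joke about bears.");
console.log(message.content);

// stream(): an async iterable of AIMessageChunks.
const stream = await model.stream("Tell me a joke about bears.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}

// batch(): several inputs at once, with an optional concurrency cap.
const results = await model.batch(["Hello!", "Bonjour!"], { maxConcurrency: 2 });

Support for features beyond basic invocation varies by provider, as summarized below: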
| Model | Stream | JSON mode | Tool Calling | withStructuredOutput() | Multimodal |
| --- | --- | --- | --- | --- | --- |
| BedrockChat | ✅ | ❌ | 🟡 (Bedrock Anthropic only) | 🟡 (Bedrock Anthropic only) | 🟡 (Bedrock Anthropic only) |
| ChatBedrockConverse | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatAnthropic | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatCloudflareWorkersAI | ✅ | ❌ | ❌ | ❌ | ❌ |
| ChatCohere | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatFireworks | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatGoogleGenerativeAI | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatVertexAI | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatGroq | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatMistralAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatOllama | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatOpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatTogetherAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatXAI | ✅ | ✅ | ✅ | ✅ | ❌ |
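To illustrate the Tool Calling and withStructuredOutput() columns, here is a minimal sketch using ChatOpenAI (the joke schema and add tool are made-up examples; any model with ✅ in those columns supports the same calls):

import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// withStructuredOutput(): constrain the reply to a schema.
const joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});
const structured = await model.withStructuredOutput(joke).invoke("Tell me a joke about cats");
// structured is a parsed object: { setup: "...", punchline: "..." }

// Tool calling: bind tool definitions; the model replies with tool_calls.
const add = tool(({ a, b }) => String(a + b), {
  name: "add",
  description: "Add two numbers.",
  schema: z.object({ a: z.number(), b: z.number() }),
});
const aiMessage = await model.bindTools([add]).invoke("What is 2 + 3?");
console.log(aiMessage.tool_calls); // [{ name: "add", args: { a: 2, b: 3 }, ... }]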

Chat Completions API

Certain model providers offer endpoints that are compatible with OpenAI’s (legacy) Chat Completions API. In those cases, you can use ChatOpenAI with a custom baseURL to connect to the provider’s endpoint.
To use OpenRouter, you will need to sign up for an account and obtain an API key.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "...", // Specify a model available on OpenRouter
  configuration: {
    apiKey: process.env.OPENROUTER_API_KEY, // Your OpenRouter API key
    baseURL: "https://openrouter.ai/api/v1",
  },
});
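Once configured, the model behaves like any other ChatOpenAI instance (a brief sketch continuing the example above):

const response = await model.invoke("Hello, world!");
console.log(response.content);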
Refer to the OpenRouter documentation for more details.

All chat models

If you’d like to contribute an integration, see Contributing integrations.
