LangGraph provides built-in support for chat models (LLMs) via the LangChain library, making it straightforward to plug different model providers into your agents and workflows.

Initialize a model

Use model provider classes to initialize models:
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});
Tool calling support: If you are building an agent or workflow that requires the model to call external tools, ensure that the underlying language model supports tool calling. Compatible models can be found in the LangChain integrations directory.
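At a high level, a tool-calling model emits structured tool-call requests (a tool name plus arguments), and the runtime dispatches each request to the matching function. The sketch below illustrates that dispatch step with hypothetical names (`ToolCall`, `toolRegistry`, `get_weather` are illustrative, not LangChain APIs):

```typescript
// Hypothetical shape of a tool call emitted by a tool-calling model.
type ToolCall = { name: string; args: Record<string, unknown> };

// A registry mapping tool names to plain functions (illustrative only).
const toolRegistry: Record<string, (args: Record<string, unknown>) => string> = {
  get_weather: (args) => `It is sunny in ${args.city}.`,
};

// Dispatch a model-emitted tool call to the matching function.
function dispatchToolCall(call: ToolCall): string {
  const tool = toolRegistry[call.name];
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool(call.args);
}
```

In LangChain, this dispatch is handled for you; the sketch only shows why the model itself must support emitting structured tool calls.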

Use in an agent

When using createReactAgent you can pass the model instance directly:
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

const agent = createReactAgent({
  llm: model,
  tools: tools, // your list of LangChain tools, defined elsewhere
});
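Conceptually, the prebuilt ReAct agent loops: it calls the model, executes any tool calls the model requests, feeds the results back as messages, and stops when the model responds without tool calls. A minimal sketch of that loop, using hypothetical simplified types (not the actual LangGraph internals):

```typescript
// Hypothetical, simplified types for illustration only.
type Message = { role: "user" | "assistant" | "tool"; content: string };
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelOutput = { content: string; toolCalls: ToolCall[] };
type Model = (messages: Message[]) => ModelOutput;
type Tool = (args: Record<string, unknown>) => string;

// A minimal ReAct-style loop: call the model, run any requested tools,
// append the results, and stop when no tool calls remain.
function runAgent(model: Model, tools: Record<string, Tool>, messages: Message[]): Message[] {
  for (;;) {
    const output = model(messages);
    messages = [...messages, { role: "assistant", content: output.content }];
    if (output.toolCalls.length === 0) return messages;
    for (const call of output.toolCalls) {
      messages = [...messages, { role: "tool", content: tools[call.name](call.args) }];
    }
  }
}
```

The real createReactAgent adds state management, streaming, and error handling on top of this basic cycle.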

Advanced model configuration

Disable streaming

To disable streaming of the individual LLM tokens, set streaming: false when initializing the model:
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  streaming: false,
});
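The practical difference is what the consumer receives: with streaming enabled, output arrives token by token; with it disabled, the full response arrives as a single chunk. A toy sketch of that contrast (whitespace tokenization is a simplification; real models stream provider-defined chunks):

```typescript
// Sketch: a streaming model yields tokens one at a time; with streaming
// disabled, the same interface yields one chunk containing the full text.
function* generate(text: string, streaming: boolean): Generator<string> {
  if (!streaming) {
    yield text; // a single chunk with the whole response
    return;
  }
  for (const token of text.split(" ")) yield token + " ";
}
```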

Add model fallbacks

You can add a fallback to a different model or a different LLM provider using model.withFallbacks([...]):
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

const modelWithFallbacks = new ChatOpenAI({
  model: "gpt-4o",
}).withFallbacks([
  new ChatAnthropic({
    model: "claude-3-5-sonnet-20240620",
  }),
]);
See this guide for more information on model fallbacks.

Bring your own model

If your desired LLM isn’t officially supported by LangChain, consider these options:
  1. Implement a custom LangChain chat model: Create a model conforming to the LangChain chat model interface. This enables full compatibility with LangGraph’s agents and workflows but requires understanding of the LangChain framework.
  2. Direct invocation with custom streaming: Use your model directly by adding custom streaming logic with StreamWriter. Refer to the custom streaming documentation for guidance. This approach suits custom workflows where prebuilt agent integration is not necessary.
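For option 1, the essential requirement is a class exposing the invoke-style surface an agent expects. The real LangChain interface (extending BaseChatModel or SimpleChatModel) has more required methods; this hypothetical sketch only shows the basic shape, with a stubbed response in place of a real provider call:

```typescript
// Hypothetical sketch of a custom chat model's shape. The actual LangChain
// chat model interface requires more (e.g. _generate, _llmType); this is
// illustrative only.
type Message = { role: string; content: string };

class MyCustomChatModel {
  constructor(private endpoint: string) {}

  // In a real implementation this would call your provider's HTTP API.
  invoke(messages: Message[]): Message {
    const last = messages[messages.length - 1];
    return { role: "assistant", content: `echo from ${this.endpoint}: ${last.content}` };
  }
}
```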

Additional resources