Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the current LangChain Python or LangChain JavaScript docs.

Overview

LLMs are machine learning models that can interpret and generate text like humans do. They’re versatile enough to write content, translate languages, summarize information, and answer questions without needing special training for each task. In addition to text generation, many models support other interaction patterns, such as:
  • Tool calling - where models can call external tools (like database queries or API calls) and use the results in their responses.
  • Structured output - where the model is constrained to respond with an object that matches a schema.
  • Multimodal - where models can process and return data other than text, such as images, audio, and video.
  • Reasoning - where models are able to perform multi-step reasoning to arrive at a conclusion.
Providers like OpenAI, Anthropic, and Google offer access to a range of LLMs, each with its own strengths and weaknesses. Every provider has its own interaction patterns for accessing and using models, typically through a user interface or an API where messages are exchanged between a user and the model. Because each provider exposes its own opinionated interface, swapping between models and providers can require significant code changes. LangChain simplifies this by providing a chat model abstraction that works across a range of providers and models, letting you focus on the logic of your application rather than the implementation details of the underlying provider API.

Basic usage

The easiest way to get started with a model in LangChain is to use init_chat_model to initialize one from a model provider of your choice.
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5-nano")
response = model.invoke("Why do parrots talk?")
See init_chat_model for more detail.

Key methods

Invoke

The model takes messages as input and returns a message after generating a full response.

Stream

Invoke the model, but stream the response as it is generated in real-time.

Batch

Send multiple requests to a model in a batch for more efficient processing.
In addition to chat models, LangChain provides support for other adjacent technologies, such as embedding models and vector stores. See the integrations page for details.

Parameters

A chat model takes parameters that can be used to configure its behavior. The full set of supported parameters vary by model and provider, but common ones include:
model
string
required
The name or identifier of the specific model you want to use with a provider.
api_key
string
The key required for authenticating with the model’s provider. This is usually issued when you sign up for access to the model. It can often be provided by setting an environment variable.
temperature
number
Controls the randomness of the model’s output. A higher number makes responses more creative, while a lower one makes them more deterministic.
stop
string[]
One or more character sequences that tell the model to stop generating output when encountered. May be a string or a list of strings.
timeout
number
The maximum time (in seconds) to wait for a response from the model before canceling the request.
max_tokens
number
Limits the total number of tokens in the response, effectively controlling how long the output can be.
max_retries
number
The maximum number of attempts the system will make to resend a request if it fails due to issues like network timeouts or rate limits.
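To build intuition for what max_retries does, here is a minimal, framework-free sketch of retry-with-backoff behavior. This is purely illustrative (the function names are hypothetical), not LangChain's actual retry implementation:

```python
import time

def invoke_with_retries(call, max_retries=2, base_delay=0.01):
    """Retry a flaky call up to max_retries times after the first attempt."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

# A fake "model call" that fails twice, then succeeds
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated network timeout")
    return "ok"

result = invoke_with_retries(flaky_call, max_retries=3)
print(result, attempts["n"])  # ok 3
```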
To find all the parameters supported by a given chat model, head to the Reference.

Invocation

A chat model must be invoked to generate an output. There are three main invocation methods, each suited to different use cases.
Each invocation method has an async equivalent, prefixed with the letter 'a'. For example: ainvoke(), astream(), abatch(). A full list of async methods can be found in the reference.
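The async variants let you issue several model calls concurrently from a single event loop. A framework-free sketch of the pattern, with a fake coroutine standing in for a real ainvoke() call:

```python
import asyncio

async def fake_ainvoke(prompt):
    """Stand-in for an async model call such as ainvoke()."""
    await asyncio.sleep(0.01)  # simulate network latency
    return prompt.upper()

async def main():
    # Await several "model calls" concurrently rather than one at a time
    return await asyncio.gather(
        fake_ainvoke("hello"),
        fake_ainvoke("world"),
    )

results = asyncio.run(main())
print(results)  # ['HELLO', 'WORLD']
```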

Invoke

The most straightforward way to call a model is to use invoke() with a single message or a list of messages.
Single message
response = model.invoke("Why do parrots have colorful feathers?")
print(response)
A list of messages can be provided to a model to represent conversation history. Each message has a role that models use to indicate who sent the message in the conversation. See the messages guide for more detail on roles, types, and content.
Conversation history
from langchain.messages import HumanMessage, AIMessage, SystemMessage

conversation = [
    SystemMessage("You are a helpful assistant that translates English to French."),
    HumanMessage("Translate: I love programming."),
    AIMessage("J'adore la programmation."),
    HumanMessage("Translate: I love building applications.")
]

response = model.invoke(conversation)
print(response)  # AIMessage("J'adore créer des applications.")

Stream

Most models can stream their output content while it is being generated. By displaying output progressively, streaming significantly improves user experience, particularly for longer responses. Calling stream() returns an iterator that yields output chunks as they are produced. You can use a loop to process each chunk in real time:
for chunk in model.stream("Why do parrots have colorful feathers?"):
    print(chunk.text, end="|", flush=True)
As opposed to invoke(), which returns a single AIMessage after the model has finished generating its full response, stream() returns multiple AIMessageChunk objects, each containing a portion of the output text. Importantly, each AIMessageChunk in a stream is designed to be gathered into a full message via summation:
Construct AIMessage
full = None  # None | AIMessageChunk
for chunk in model.stream("What color is the sky?"):
    full = chunk if full is None else full + chunk
    print(full.text)

# The
# The sky
# The sky is
# The sky is typically
# The sky is typically blue
# ...

print(full.content_blocks)
# [{"type": "text", "text": "The sky is typically blue..."}]
The resulting full message can be treated the same as a message that was generated with invoke - for example, it can be aggregated into a message history and passed back to the model as conversational context.
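The gather-by-summation behavior can be illustrated with a minimal stand-in class. This is a simplified illustration of the pattern, not the real AIMessageChunk:

```python
class Chunk:
    """Toy message chunk that supports gathering via the + operator."""
    def __init__(self, text):
        self.text = text

    def __add__(self, other):
        # Concatenate partial text, mirroring how chunks merge into a full message
        return Chunk(self.text + other.text)

stream = [Chunk("The "), Chunk("sky "), Chunk("is "), Chunk("blue.")]

full = None
for chunk in stream:
    full = chunk if full is None else full + chunk

print(full.text)  # The sky is blue.
```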
Streaming only works if all steps in the program know how to process a stream of chunks. An application that isn’t streaming-capable is one that needs to store the entire output in memory before it can process it.

Batch

This section describes a chat model method batch(), which parallelizes model calls client-side. It is distinct from batch APIs supported by inference providers.
Batching a collection of independent requests to a model can significantly improve performance, as the processing can be done in parallel:
Batch
responses = model.batch([
    "Why do parrots have colorful feathers?",
    "How do airplanes fly?",
    "What is quantum computing?"
])
for response in responses:
    print(response)
By default, batch() will only return the final output for the entire batch. If you want to receive the output for each individual input as it finishes generating, you can stream results with batch_as_completed():
Yield responses upon completion
for response in model.batch_as_completed([
    "Why do parrots have colorful feathers?",
    "How do airplanes fly?",
    "What is quantum computing?"
]):
    print(response)
When using batch_as_completed(), results may arrive out of order. Each result includes the input index, which can be used to match results back to their inputs and reconstruct the original order if needed.
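The index-based reordering can be sketched with the standard library. This is a conceptual stand-in for batch_as_completed(), not its implementation, and fake_model_call is a hypothetical placeholder:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_model_call(prompt):
    return prompt.upper()  # stand-in for a real model invocation

prompts = ["why do parrots talk?", "how do planes fly?", "what is quantum computing?"]

results = {}
with ThreadPoolExecutor() as executor:
    # Submit each input with its index so completed results can be matched up
    futures = {executor.submit(fake_model_call, p): i for i, p in enumerate(prompts)}
    for future in as_completed(futures):
        index = futures[future]
        results[index] = future.result()  # results may complete in any order

# Reconstruct the original input order using the stored indices
ordered = [results[i] for i in range(len(prompts))]
print(ordered[0])  # WHY DO PARROTS TALK?
```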
When processing a large number of inputs using batch() or batch_as_completed(), you may want to control the maximum number of parallel calls. This can be done by setting the max_concurrency attribute in the RunnableConfig dictionary.
Batch with max concurrency
model.batch(
    list_of_inputs,
    config={
        'max_concurrency': 5,  # Limit to 5 parallel calls
    }
)
See the RunnableConfig reference for a full list of supported attributes.
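Conceptually, max_concurrency caps how many calls run in parallel, much like limiting worker threads. A stdlib illustration of that cap (not LangChain's implementation; fake_model_call is hypothetical):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

active = 0
peak = 0
lock = threading.Lock()

def fake_model_call(prompt):
    """Stand-in model call that tracks how many calls run at once."""
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # simulate network latency
    with lock:
        active -= 1
    return prompt

inputs = [f"question {i}" for i in range(20)]
with ThreadPoolExecutor(max_workers=5) as executor:  # cap at 5 parallel calls
    results = list(executor.map(fake_model_call, inputs))

print(len(results), peak <= 5)  # 20 True
```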
For more details on batching, see the reference.

Context windows

Each model has a context window that determines how much text it can process at once. To learn more, see our guide on context engineering for strategies to effectively manage context.
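A common strategy when conversation history outgrows the context window is to trim the oldest messages until the total fits a token budget. A rough sketch of the idea; the word count here is a naive stand-in for a real tokenizer, and both helper names are hypothetical:

```python
def count_tokens(message):
    return len(message.split())  # naive proxy for a real tokenizer

def trim_to_budget(messages, budget):
    """Drop the oldest messages until the total 'token' count fits the budget."""
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)  # remove the oldest message first
    return trimmed

history = [
    "You are a helpful assistant.",
    "Tell me about parrots in great detail please.",
    "Parrots are colorful birds found in tropical regions.",
    "And why do they talk?",
]

# Only the most recent messages survive trimming
print(trim_to_budget(history, budget=15))
```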

Model capabilities

Tool calling

Models can request to call tools that perform tasks such as fetching data from a database, searching the web, or running code. Tools are pairings of:
  1. A schema, including the name of the tool, a description, and/or argument definitions (often defined with something like JSON schema)
  2. A function or coroutine to execute.
You will sometimes hear the term function calling. We use this term interchangeably with tool calling.
To make tools that you have defined available for use by a model, you must bind them using bind_tools(). In subsequent invocations, the model can choose to call any of the bound tools as needed. Some model providers offer built-in tools that can be enabled via model parameters. Check the respective provider reference for details.
See the tools guide for details and other options for creating tools.
Binding user tools
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return f"It's sunny in {location}."


model_with_tools = model.bind_tools([get_weather])

response = model_with_tools.invoke("What's the weather like in Boston?")
for tool_call in response.tool_calls:
    # View tool calls made by the model
    print(f"Tool: {tool_call['name']}")
    print(f"Args: {tool_call['args']}")
When binding user-defined tools, the model’s response includes a request to execute a tool. It is up to you to execute the requested tool and return the result to the model for use in subsequent reasoning. The LangGraph framework simplifies the orchestration of model and tool calls; see the agent guides to get started.
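To illustrate the execute-and-return loop, here is a framework-free sketch that dispatches a model's tool-call requests to local functions. The dict shapes mimic LangChain tool calls, but everything here is a hypothetical illustration, not the framework's orchestration:

```python
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return f"It's sunny in {location}."

tools = {"get_weather": get_weather}

# Pretend the model responded with this tool-call request
tool_calls = [{"name": "get_weather", "args": {"location": "Boston"}, "id": "call_1"}]

# Execute each requested tool and collect results to send back to the model
tool_results = []
for tool_call in tool_calls:
    tool = tools[tool_call["name"]]
    output = tool(**tool_call["args"])
    tool_results.append({"tool_call_id": tool_call["id"], "content": output})

print(tool_results[0]["content"])  # It's sunny in Boston.
```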

Structured outputs

Models can be requested to provide their response in a format matching a given schema. This is useful for ensuring the output can be easily parsed and used in subsequent processing. LangChain supports multiple schema types and methods for enforcing structured outputs.
Pydantic models provide the richest feature set with field validation, descriptions, and nested structures.
from pydantic import BaseModel, Field

class Movie(BaseModel):
    """A movie with details."""
    title: str = Field(..., description="The title of the movie")
    year: int = Field(..., description="The year the movie was released")
    director: str = Field(..., description="The director of the movie")
    rating: float = Field(..., description="The movie's rating out of 10")

model_with_structure = model.with_structured_output(Movie)
response = model_with_structure.invoke("Provide details about the movie Inception")
print(response)  # Movie(title="Inception", year=2010, director="Christopher Nolan", rating=8.8)
Key considerations for structured outputs:
  • Method parameter: Some providers support different methods ('json_schema', 'function_calling', 'json_mode')
  • Include raw: Use include_raw=True to get both the parsed output and the raw AI message
  • Validation: Pydantic models provide automatic validation, while TypedDict and JSON Schema require manual validation
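To illustrate the validation point: a TypedDict describes a shape for type checkers but performs no runtime checks, so you must validate manually. A stdlib sketch of that manual check (with Pydantic, bad data would raise automatically); validate_movie is a hypothetical helper:

```python
from typing import TypedDict

class Movie(TypedDict):
    title: str
    year: int

def validate_movie(data: dict) -> Movie:
    """Manual runtime validation that Pydantic would otherwise do for us."""
    if not isinstance(data.get("title"), str):
        raise ValueError("title must be a string")
    if not isinstance(data.get("year"), int):
        raise ValueError("year must be an integer")
    return Movie(title=data["title"], year=data["year"])

movie = validate_movie({"title": "Inception", "year": 2010})
print(movie)  # {'title': 'Inception', 'year': 2010}

try:
    validate_movie({"title": "Inception", "year": "2010"})  # year is a string
except ValueError as e:
    print(e)  # year must be an integer
```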

Multimodal

Certain models can process and return non-textual data such as images, audio, and video. You can pass non-textual data to a model by providing content blocks.
All LangChain chat models with underlying multimodal capabilities support:
  1. Data in the cross-provider standard format (shown below)
  2. OpenAI chat completions format
  3. Any format that is native to that specific provider (e.g., Anthropic models accept Anthropic native format)
# From URL
response = model.invoke([
    {"type": "text", "text": "Describe the content of this image."},
    {"type": "image", "url": "https://example.com/path/to/image.jpg"},
])

# From base64 data
response = model.invoke([
    {"type": "text", "text": "Describe the content of this image."},
    {
        "type": "image",
        "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
        "mime_type": "image/jpeg",
    },
])

# From provider-managed File ID
response = model.invoke([
    {"type": "text", "text": "Describe the content of this image."},
    {"type": "image", "file_id": "file-abc123"},
])
Some models can also return multimodal data as part of their response. In such cases, the resulting AIMessage will have content blocks with multimodal types.
Multimodal output
response = model.invoke("Create a picture of a cat")
print(response.content_blocks)
# [
#     {"type": "text", "text": "Here's a picture of a cat"},
#     {"type": "image", "base64": "...", "mime_type": "image/jpeg"},
# ]
See the integrations page for details on specific providers.

Reasoning

Newer models are capable of performing multi-step reasoning to arrive at a conclusion. This involves breaking down complex problems into smaller, more manageable steps. If supported by the underlying model, you can surface this reasoning process to better understand how the model arrived at its final answer.
for chunk in model.stream("Why do parrots have colorful feathers?"):
    reasoning_steps = [r for r in chunk.content_blocks if r["type"] == "reasoning"]
    print(reasoning_steps if reasoning_steps else chunk.text)
Depending on the model, you can sometimes specify the level of effort it should put into reasoning. Alternatively, you can request that the model turn off reasoning entirely. This may take the form of categorical “tiers” of reasoning (e.g., 'low' or 'high') or integer token budgets. For details, see the relevant chat model in the integrations page.

Supported models

LangChain supports all major model providers, including OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more. Each provider offers a variety of models with different capabilities. For a full list of supported models in LangChain, see the integrations page.

Advanced configuration

Local models

LangChain supports running models locally on your own hardware. This is useful for scenarios where data privacy is critical, or when you want to avoid the cost of using a cloud-based model. Ollama is one of the easiest ways to run models locally. See the full list of local integrations on the integrations page.

Caching

Chat model APIs can be slow and expensive to call. To help mitigate this, LangChain provides an optional caching layer for chat model integrations.
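The idea behind a chat-model cache is simple: key responses by the exact prompt and reuse them, skipping a repeated slow and billable call. A minimal sketch of the concept, not LangChain's cache API (all names here are hypothetical):

```python
calls = {"count": 0}

def expensive_model_call(prompt):
    calls["count"] += 1  # stands in for a slow, billable API request
    return f"answer to: {prompt}"

cache = {}

def cached_invoke(prompt):
    """Return a cached response if we have seen this exact prompt before."""
    if prompt not in cache:
        cache[prompt] = expensive_model_call(prompt)
    return cache[prompt]

cached_invoke("Why do parrots talk?")  # miss: hits the model
cached_invoke("Why do parrots talk?")  # hit: served from the cache
print(calls["count"])  # 1
```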

Rate limiting

Many chat model providers impose a limit on the number of invocations that can be made in a given time period. If you hit a rate limit, you will typically receive a rate limit error response from the provider, and will need to wait before making more requests. To help manage rate limits, chat model integrations accept a rate_limiter parameter that can be provided during initialization to control the rate at which requests are made.
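A rate limiter of the kind accepted by rate_limiter is often a token bucket: each request consumes a token, and tokens refill at a fixed rate. A sketch with an injectable clock so the behavior is deterministic; this is illustrative only, not LangChain's implementation:

```python
class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity, clock):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock          # injected for deterministic testing
        self.tokens = capacity
        self.last = clock()

    def try_acquire(self):
        now = self.clock()
        # Refill tokens based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should wait and retry

fake_time = {"now": 0.0}
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: fake_time["now"])

print(bucket.try_acquire())  # True  (burst token 1)
print(bucket.try_acquire())  # True  (burst token 2)
print(bucket.try_acquire())  # False (bucket empty)
fake_time["now"] += 1.0      # one second passes: one token refills
print(bucket.try_acquire())  # True
```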

Base URL or proxy

For many chat model integrations, you can configure the base URL for API requests, which allows you to use model providers that have OpenAI-compatible APIs or to use a proxy server.

Log probabilities

Certain models can be configured to return token-level log probabilities representing the likelihood of a given token. Accessing them is as simple as setting the logprobs parameter when initializing a model:
Log probs
model = init_chat_model(
    model="gpt-4o", 
    model_provider="openai"
).bind(logprobs=True)

response = model.invoke("Why do parrots talk?")
print(response.response_metadata["logprobs"])

Token usage

A number of model providers return token usage information as part of the invocation response. When available, this information will be included on the AIMessage objects produced by the corresponding model. For more details, see the messages guide.
Some provider APIs, notably OpenAI and Azure OpenAI chat completions, require users opt-in to receiving token usage data in streaming contexts. See this section of the integration guide for details.
You can track aggregate token counts across models in an application using either a callback or context manager, as shown below:
Callback handler
from langchain.chat_models import init_chat_model
from langchain_core.callbacks import UsageMetadataCallbackHandler

llm_1 = init_chat_model(model="openai:gpt-4o-mini")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")

callback = UsageMetadataCallbackHandler()
result_1 = llm_1.invoke("Hello", config={"callbacks": [callback]})
result_2 = llm_2.invoke("Hello", config={"callbacks": [callback]})
callback.usage_metadata
{
    'gpt-4o-mini-2024-07-18': {
        'input_tokens': 8,
        'output_tokens': 10,
        'total_tokens': 18,
        'input_token_details': {'audio': 0, 'cache_read': 0},
        'output_token_details': {'audio': 0, 'reasoning': 0},
    },
    'claude-3-5-haiku-20241022': {
        'input_tokens': 8,
        'output_tokens': 21,
        'total_tokens': 29,
        'input_token_details': {'cache_read': 0, 'cache_creation': 0},
    },
}

Invocation config

When invoking a model, you can pass additional configuration through the config parameter using a RunnableConfig dictionary. This provides run-time control over execution behavior, callbacks, and metadata tracking. Common configuration options include:
Invocation with config
response = model.invoke(
    "Tell me a joke",
    config={
        "run_name": "joke_generation",      # Custom name for this run
        "tags": ["humor", "demo"],          # Tags for categorization
        "metadata": {"user_id": "123"},     # Custom metadata
        "callbacks": [my_callback_handler], # Callback handlers
    }
)
These configuration values are particularly useful when:
  • Debugging with LangSmith tracing
  • Implementing custom logging or monitoring
  • Controlling resource usage in production
  • Tracking invocations across complex pipelines
For more information on all supported RunnableConfig attributes, see the RunnableConfig reference.

Configurable models

You can also create a runtime-configurable model by specifying configurable_fields. If you don’t specify a model value, then 'model' and 'model_provider' will be configurable by default.
from langchain.chat_models import init_chat_model

configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "gpt-5-nano"}},  # Run with GPT-5-Nano
)
configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "claude-3-5-sonnet-latest"}},  # Run with Claude
)