This guide outlines the major changes between LangChain v1 and previous versions.

Migrate to create_agent

In v1, the agent prebuilt is now in the langchain package. The following outlines what functionality has changed:
  • Import path: package moved from langgraph.prebuilt to langchain.agents
  • Prompts: parameter renamed to system_prompt; dynamic prompts use middleware
  • Pre-model hook: replaced by middleware with a before_model method
  • Post-model hook: replaced by middleware with an after_model method
  • Custom state: defined in middleware; TypedDict only
  • Model: dynamic selection via middleware; pre-bound models not supported
  • Tools: tool error handling moved to middleware with wrap_tool_call
  • Structured output: prompted output removed; use ToolStrategy or ProviderStrategy
  • Streaming node name: node name changed from "agent" to "model"
  • Runtime context: dependency injection via the context argument instead of config["configurable"]
  • Namespace: streamlined to focus on agent building blocks; legacy code moved to langchain-classic

Import path

The import path for the agent prebuilt has changed from langgraph.prebuilt to langchain.agents. The name of the function has changed from create_react_agent to create_agent:
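# v0 (old)
# from langgraph.prebuilt import create_react_agent

# v1 (new)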
from langchain.agents import create_agent
For more information, see Agents.

Prompts

Static prompt rename

The prompt parameter has been renamed to system_prompt:
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    system_prompt="You are a helpful assistant"
)

SystemMessage to string

If you were passing SystemMessage objects as the prompt, extract and pass their string content instead:
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    # Previously: prompt=SystemMessage(content="You are a helpful assistant")
    system_prompt="You are a helpful assistant"
)

Dynamic prompts

Dynamic prompts are a core context engineering pattern: they adapt what you tell the model based on the current conversation state. To do this, use the @dynamic_prompt decorator:
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest

@dataclass
class Context:  
    user_role: str = "user"

@dynamic_prompt
def user_role_prompt(request: ModelRequest) -> str:
    user_role = request.runtime.context.user_role
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        prompt = (
            f"{base_prompt} Provide detailed technical responses."
        )
    elif user_role == "beginner":
        prompt = (
            f"{base_prompt} Explain concepts simply and avoid jargon."
        )
    else:
        prompt = base_prompt

    return prompt  

agent = create_agent(
    model="openai:gpt-4o",
    tools=tools,
    middleware=[user_role_prompt],
    context_schema=Context
)

# Use with context
agent.invoke(
    {"messages": [{"role": "user", "content": "Explain async programming"}]},
    context=Context(user_role="expert")
)

Pre-model hook

Pre-model hooks are now implemented as middleware with the before_model method. This pattern is more extensible: you can define multiple middleware to run before the model is called and reuse common patterns across agents. Common use cases include:
  • Summarizing conversation history
  • Trimming messages
  • Input guardrails, like PII redaction
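If you previously used a custom pre-model hook, the equivalent is a middleware class that overrides before_model. Below is a minimal input-guardrail sketch; the exact hook signature and the SSN pattern are assumptions here, so check the middleware reference for the precise types:
import re
from typing import Any

from langchain.agents.middleware import AgentMiddleware, AgentState
from langgraph.runtime import Runtime

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class PIIGuardrailMiddleware(AgentMiddleware):
    """Illustrative input guardrail: refuse to call the model on SSN-like input."""

    def before_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        last_message = state["messages"][-1]
        if isinstance(last_message.content, str) and SSN_PATTERN.search(last_message.content):
            raise ValueError("Input appears to contain an SSN; refusing to proceed.")
        return None  # no state update
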
v1 now has summarization middleware built in:
from langchain.agents import create_agent
from langchain.agents.middleware import SummarizationMiddleware

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=tools,
    middleware=[
        SummarizationMiddleware(  
            model="anthropic:claude-sonnet-4-5-20250929",  
            max_tokens_before_summary=1000
        )  
    ]  
)

Post-model hook

Post-model hooks are now implemented as middleware with the after_model method. This pattern is more extensible: you can define multiple middleware to run after the model is called and reuse common patterns across agents. Common use cases include:
  • Human in the loop
  • Output guardrails
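Likewise, a custom post-model hook becomes a middleware class overriding after_model. A minimal output-guardrail sketch (the banned phrase and hook signature are illustrative assumptions):
from typing import Any

from langchain.agents.middleware import AgentMiddleware, AgentState
from langgraph.runtime import Runtime

class OutputGuardrailMiddleware(AgentMiddleware):
    """Illustrative output guardrail: block responses containing a banned phrase."""

    def after_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        last_message = state["messages"][-1]
        if isinstance(last_message.content, str) and "confidential" in last_message.content.lower():
            raise ValueError("Model response blocked by output guardrail.")
        return None
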
v1 has built-in middleware for human-in-the-loop approval of tool calls:
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[read_email, send_email],
    middleware=[HumanInTheLoopMiddleware(
        interrupt_on={
            "send_email": {
                "description": "Please review this email before sending",
            },
        },
    )]
)

Custom state

Custom state is now defined in middleware using the state_schema attribute:
from typing import Annotated
from langchain.tools import tool, InjectedState
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware, AgentState

# Define custom state extending AgentState
class CustomState(AgentState):
    user_name: str

# Create middleware that manages custom state
class UserStateMiddleware(AgentMiddleware[CustomState]):  
    state_schema = CustomState  

@tool
def greet(
    state: Annotated[CustomState, InjectedState]
) -> str:
    """Use this to greet the user by name."""
    user_name = state.get("user_name", "Unknown")  
    return f"Hello {user_name}!"

agent = create_agent(  
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[greet],
    middleware=[UserStateMiddleware()]  
)
Custom state is defined by creating a class that extends AgentState and assigning it to the middleware’s state_schema attribute.
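With the schema registered through middleware, the custom key can be supplied as part of the input; a usage sketch (the user_name value is illustrative):
result = agent.invoke({
    "messages": [{"role": "user", "content": "Greet me!"}],
    "user_name": "Alice",
})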

State type restrictions

create_agent now only supports TypedDict for state schemas. Pydantic models and dataclasses are no longer supported.
from langchain.agents import AgentState
from langchain.agents.middleware import AgentMiddleware

# AgentState is a TypedDict
class CustomAgentState(AgentState):  
    user_id: str

class CustomAgentMiddleware(AgentMiddleware[CustomAgentState]):
    state_schema = CustomAgentState

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=tools,
    middleware=[CustomAgentMiddleware()]
)
Inherit from langchain.agents.AgentState (a TypedDict) instead of subclassing BaseModel or applying the dataclass decorator. If you need to perform validation, handle it in middleware hooks instead.

Model

Dynamic model selection lets you choose different models based on runtime context (e.g., task complexity, cost constraints, or user preferences). The create_react_agent released in langgraph-prebuilt v0.6 supported dynamic model and tool selection via a callable passed to the model parameter; in v1, this functionality has moved to the middleware interface.

Dynamic model selection

from langchain.agents import create_agent
from langchain.agents.middleware import (
    AgentMiddleware, ModelRequest, ModelRequestHandler
)
from langchain.messages import AIMessage
from langchain_openai import ChatOpenAI

basic_model = ChatOpenAI(model="gpt-5-nano")
advanced_model = ChatOpenAI(model="gpt-5")

class DynamicModelMiddleware(AgentMiddleware):

    def __init__(self, messages_threshold: int) -> None:
        self.messages_threshold = messages_threshold

    def wrap_model_call(self, request: ModelRequest, handler: ModelRequestHandler) -> AIMessage:
        # Route long conversations to the more capable model
        if len(request.state["messages"]) > self.messages_threshold:
            model = advanced_model
        else:
            model = basic_model

        return handler(request.replace(model=model))

agent = create_agent(
    model=basic_model,
    tools=tools,
    middleware=[DynamicModelMiddleware(messages_threshold=10)]
)

Pre-bound models

To better support structured output, create_agent no longer accepts pre-bound models with tools or configuration:
# No longer supported
model_with_tools = ChatOpenAI().bind_tools([some_tool])
agent = create_agent(model_with_tools, tools=[])

# Use instead
agent = create_agent("openai:gpt-4o-mini", tools=[some_tool])
Middleware that selects models dynamically can still return pre-bound models, provided structured output is not used.

Tools

The tools argument to create_agent accepts a list of:
  • LangChain BaseTool instances (e.g., functions decorated with @tool)
  • Callables (plain functions) with proper type hints and a docstring
  • Dicts representing built-in provider tools (see the example below)
It no longer accepts ToolNode instances.
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather, search_web]
)
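Built-in provider tools are passed as plain dicts. For example, with OpenAI's web search built-in (assuming your model and provider support it; the tool spec is provider-specific):
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[{"type": "web_search_preview"}],  # provider-native tool spec
)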

Handling tool errors

Tool error handling is now configured with middleware implementing the wrap_tool_call method.
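A minimal sketch of such middleware, which converts tool exceptions into ToolMessages the model can react to (the request/handler types are elided and the request.tool_call access is an assumption; consult the middleware reference for exact signatures):
from langchain.agents.middleware import AgentMiddleware
from langchain.messages import ToolMessage

class ToolErrorMiddleware(AgentMiddleware):
    """Illustrative sketch: surface tool failures to the model instead of raising."""

    def wrap_tool_call(self, request, handler):
        try:
            return handler(request)
        except Exception as exc:  # in practice, catch narrower exception types
            return ToolMessage(
                content=f"Tool call failed: {exc}. Adjust the arguments and try again.",
                tool_call_id=request.tool_call["id"],  # assumed request attribute
            )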

Structured output

Node changes

Structured output used to be generated in a separate node from the main agent. This is no longer the case: structured output is now generated in the main loop, reducing cost and latency.

Tool and provider strategies

In v1, there are two new structured output strategies:
  • ToolStrategy uses artificial tool calling to generate structured output
  • ProviderStrategy uses provider-native structured output generation
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy, ProviderStrategy
from pydantic import BaseModel

class OutputSchema(BaseModel):
    summary: str
    sentiment: str

# Using ToolStrategy (artificial tool calling)
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    response_format=ToolStrategy(OutputSchema)
)

# Or, with providers that support native structured output:
# response_format=ProviderStrategy(OutputSchema)

Prompted output removed

Prompted output is no longer supported via the response_format argument. Compared to artificial tool calling and provider-native structured output, prompted output has not proven reliable.

Streaming node name rename

When streaming events from agents, the node name has changed from "agent" to "model" to better reflect the node’s purpose.
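For example, when streaming with stream_mode="updates", updates that previously arrived under the "agent" key now arrive under "model" (this sketch assumes an agent built with create_agent):
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in SF?"}]},
    stream_mode="updates",
):
    for node_name, update in chunk.items():
        print(node_name)  # "model" (formerly "agent"), "tools", ...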

Runtime context

When you invoke an agent, it’s often the case that you want to pass two types of data:
  • Dynamic state that changes throughout the conversation (e.g., message history)
  • Static context that doesn’t change during the conversation (e.g., user metadata)
In v1, static context is supported by setting the context parameter to invoke and stream.
from dataclasses import dataclass

from langchain.agents import create_agent

@dataclass
class Context:
    user_id: str
    session_id: str

agent = create_agent(
    model=model,
    tools=tools,
    context_schema=Context
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Hello"}]},
    context=Context(user_id="123", session_id="abc")  
)
The old config["configurable"] pattern still works for backward compatibility, but using the new context parameter is recommended for new applications or applications migrating to v1.

Simplified package

The langchain package namespace has been significantly reduced in v1 to focus on essential building blocks for agents. The streamlined package makes it easier to discover and use the core functionality.

Namespace

  • langchain.agents: create_agent, AgentState (core agent creation functionality)
  • langchain.messages: message types, content blocks, trim_messages (re-exported from langchain-core)
  • langchain.tools: tool, BaseTool, injection helpers (re-exported from langchain-core)
  • langchain.chat_models: init_chat_model, BaseChatModel (unified model initialization)
  • langchain.embeddings: Embeddings, init_embeddings (embedding models)

langchain-classic

If you were using any of the following from the langchain package, you’ll need to install langchain-classic and update your imports:
  • Legacy chains (LLMChain, ConversationChain, etc.)
  • The indexing API
  • langchain-community re-exports
  • Other deprecated functionality
# For legacy chains
from langchain_classic.chains import LLMChain

# For indexing
from langchain_classic.indexes import ...
Installation:
uv pip install langchain-classic

Breaking changes

Dropped Python 3.9 support

All LangChain packages now require Python 3.10 or higher. Python 3.9 reaches end of life in October 2025.

Updated return type for chat models

The return type signature for chat model invocation has been narrowed from BaseMessage to AIMessage. Custom chat models implementing bind_tools should update their return signature accordingly:
Runnable[LanguageModelInput, AIMessage]
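As a minimal sketch of the updated annotation on a custom chat model (the rest of the class body is elided; only the return type changes):
from collections.abc import Sequence
from typing import Any

from langchain_core.language_models import BaseChatModel, LanguageModelInput
from langchain_core.messages import AIMessage
from langchain_core.runnables import Runnable

class MyChatModel(BaseChatModel):
    # ... _generate, _llm_type, etc. elided ...

    def bind_tools(
        self, tools: Sequence[Any], **kwargs: Any
    ) -> Runnable[LanguageModelInput, AIMessage]:  # was BaseMessage before v1
        return super().bind_tools(tools, **kwargs)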

Default message format for OpenAI Responses API

When interacting with the Responses API, langchain-openai now defaults to storing response items in message content. To restore previous behavior, set the LC_OUTPUT_VERSION environment variable to v0, or specify output_version="v0" when instantiating ChatOpenAI.
# Enforce previous behavior with output_version flag
model = ChatOpenAI(model="gpt-4o-mini", output_version="v0")

Default max_tokens in langchain-anthropic

The max_tokens parameter now defaults to higher values based on the model chosen, rather than the previous default of 1024. If you relied on the old default, explicitly set max_tokens=1024.
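For example, to pin the previous default explicitly (the model name shown is illustrative):
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929", max_tokens=1024)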

Legacy code moved to langchain-classic

Existing functionality outside the focus of standard interfaces and agents has been moved to the langchain-classic package. See the Simplified package section for details on what’s available in the core langchain package and what moved to langchain-classic.

Removal of deprecated APIs

Methods, functions, and other objects that were already deprecated and slated for removal in 1.0 have been deleted. Check the deprecation notices from previous versions for replacement APIs.

.text() is now a property

Use of the .text() method on message objects should drop the parentheses:
# New: property access
text = response.text

# Old: deprecated method call
text = response.text()
Existing usage patterns (i.e., .text()) will continue to function but now emit a warning.