Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
1.0 Alpha releases are available for the following packages:
- `langchain`
- `langchain-core`
- `langchain-anthropic`
- `langchain-openai`
New features
LangChain 1.0 introduces new features:

- A new `.content_blocks` property on message objects. This property provides a fully typed view of message content and standardizes modern LLM features across providers, including reasoning, citations, server-side tool calls, and more. There are no breaking changes associated with the new message content. Refer to the message content docs for more info; a short sketch follows this list.
- New prebuilt `langgraph` chains and agents in `langchain`. The surface area of the `langchain` package has been reduced to focus on popular and essential abstractions. A new `langchain-legacy` package is available for backward compatibility. Refer to the new agents docs and to the release notes for more detail.
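As an illustration, the sketch below reads standardized content blocks off a chat model response. The model string is a placeholder, and the exact block fields shown (e.g., `"text"`, `"reasoning"`) are assumptions based on the standard content-block types and may differ by provider.

```python
from langchain.chat_models import init_chat_model

# Any chat model works here; the model string is a placeholder.
model = init_chat_model("anthropic:claude-sonnet-4-5")

response = model.invoke("Briefly explain what a vector store is.")

# .content_blocks exposes a provider-agnostic, typed view of the content,
# including reasoning, citations, and server-side tool calls where present.
for block in response.content_blocks:
    if block["type"] == "text":
        print(block["text"])
    elif block["type"] == "reasoning":
        print("[reasoning]", block.get("reasoning"))
```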
Breaking changes
Dropped Python 3.9 support
Python 3.9 reaches end of life in October 2025. Consequently, all LangChain packages now require Python 3.10 or higher.
Some legacy code moved to `langchain-legacy`
The new `langchain` package features a reduced surface area that focuses on standard interfaces for LangChain components (e.g., `init_chat_model` and `init_embeddings`) as well as prebuilt chains and agents backed by the `langgraph` runtime.

Existing functionality outside this focus, such as the indexing API and exports of `langchain-community` features, has been moved to the `langchain-legacy` package.

To restore the previous behavior, update package installs of `langchain` to `langchain-legacy` and replace imports:
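A minimal sketch of the migration, using the indexing API as an example; the `langchain_legacy.indexes` path is an assumption that mirrors the old `langchain.indexes` layout.

```python
# Before (langchain 0.x):
#   pip install langchain
# from langchain.indexes import SQLRecordManager, index

# After (langchain 1.0):
#   pip install langchain-legacy
from langchain_legacy.indexes import SQLRecordManager, index
```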
Updated return type for chat models
The return type signature for chat model invocation has been narrowed from `BaseMessage` to the more specific `AIMessage`. Custom chat models implementing `bind_tools` should update their return signature to avoid type checker errors:
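A minimal sketch of the updated override; everything except the `bind_tools` signature is omitted and the tool typing is simplified.

```python
from collections.abc import Sequence
from typing import Any

from langchain_core.language_models import BaseChatModel, LanguageModelInput
from langchain_core.messages import AIMessage
from langchain_core.runnables import Runnable


class MyChatModel(BaseChatModel):
    # Before: -> Runnable[LanguageModelInput, BaseMessage]
    # After:  -> Runnable[LanguageModelInput, AIMessage]
    def bind_tools(
        self,
        tools: Sequence[Any],
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, AIMessage]:
        ...  # implementation unchanged; only the annotation needs updating
```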
Default message format for OpenAI Responses API
When interacting with the Responses API, `langchain-openai` now defaults to storing response items in message content. This behavior was previously opt-in, enabled by specifying `output_version="responses/v1"` when instantiating `ChatOpenAI`. The change resolves `BadRequestError`s that can arise in some multi-turn contexts.

To restore the previous behavior, set the `LC_OUTPUT_VERSION` environment variable to `v0`, or specify `output_version="v0"` when instantiating `ChatOpenAI`:
Default `max_tokens` in `langchain-anthropic`
The `max_tokens` parameter in `ChatAnthropic` now defaults to values higher than the previous default of `1024`. The new default varies based on the model chosen.
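If you depended on the previous cap, you can pin it explicitly; the model name below is a placeholder.

```python
from langchain_anthropic import ChatAnthropic

# Pin max_tokens to the previous default rather than relying on the new per-model default
model = ChatAnthropic(model="claude-sonnet-4-5", max_tokens=1024)
```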
Removal of deprecated objects
Methods, functions, and other objects that were already deprecated and slated for removal in 1.0 have been deleted.
Deprecations
`.text()` is now a property
Use of the `.text()` method on message objects should be updated to drop the parentheses, as shown in the sketch below. Existing usage patterns (i.e., `.text()`) will continue to function but now emit a warning.
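A minimal before/after sketch; the model choice is a placeholder.

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-4o-mini")  # placeholder model
response = model.invoke("Hello!")

# Before: method call (still works, but emits a deprecation warning)
text = response.text()

# After: property access
text = response.text
```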
Prebuilt agents

The `langchain` release focuses on reducing LangChain's surface area and narrowing in on popular and essential abstractions.
ReAct agent migration
`create_react_agent` has moved from `langgraph.prebuilt` to `langchain.agents` with significant enhancements, described in the sections below.
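A sketch of the import change, assuming the function keeps its name under `langchain.agents` as described above; the commented usage line is a placeholder.

```python
# Before (pre-1.0):
# from langgraph.prebuilt import create_react_agent

# After (1.0):
from langchain.agents import create_react_agent

# agent = create_react_agent(model="openai:gpt-4o-mini", tools=[...])  # placeholder usage
```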
Enhanced structured output
`create_agent` has improved coercion of outputs to structured data types (a usage sketch follows this list):

- Main loop integration: Structured output is now generated in the main loop instead of requiring an additional LLM call
- Tool/output choice: Models can choose between calling tools, generating structured output, or both
- Cost reduction: Eliminates extra expense from additional LLM calls
- Artificial tool calling (default for most models):
  - LangChain generates tools matching your response format schema
  - The model calls these tools, and LangChain coerces the args to the desired format
  - Configure with the `ToolStrategy` hint
- Provider implementations:
  - Uses native structured output support when available
  - Configure with the `ProviderStrategy` hint
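A usage sketch, assuming `ToolStrategy` is importable from `langchain.agents.structured_output` and that the agent result exposes a `structured_response` key; the model string and tool are placeholders.

```python
from pydantic import BaseModel

from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy  # assumed import path
from langchain_core.tools import tool


class Weather(BaseModel):
    temperature_c: float
    conditions: str


@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"22 C and sunny in {city}"


agent = create_agent(
    model="openai:gpt-4o-mini",             # placeholder model
    tools=[get_weather],
    response_format=ToolStrategy(Weather),  # or ProviderStrategy(Weather) for native support
)

result = agent.invoke({"messages": [{"role": "user", "content": "Weather in Paris?"}]})
structured = result["structured_response"]  # assumed result key
```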
Prompted output is no longer supported via the `response_format` argument.

Error handling
Structured output errors: Control error handling via the `handle_errors` arg to `ToolStrategy`:

- Parsing errors: Model generates data that doesn't match the desired structure
- Multiple tool calls: Model generates 2+ tool calls for structured output schemas

Tool errors: Control error handling via the `handle_tool_errors` arg to `ToolNode`:

- Invocation failure: Agent returns an artificial `ToolMessage` asking the model to retry (unchanged)
- Execution failure: Agent now raises `ToolException` by default instead of retrying (prevents infinite loops)
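A sketch of the two knobs. The `handle_errors` value shown (a retry-hint string) and the `ToolStrategy` import path are assumptions; `ToolNode` is imported from its current home in `langgraph` and may be re-exported elsewhere in 1.0.

```python
from pydantic import BaseModel

from langchain.agents.structured_output import ToolStrategy  # assumed import path
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode


class Weather(BaseModel):
    temperature_c: float
    conditions: str


@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"22 C and sunny in {city}"


# Structured output errors: the handle_errors value (a retry hint) is an assumption.
strategy = ToolStrategy(Weather, handle_errors="Respond again, matching the Weather schema.")

# Tool errors: passing a string returns execution failures as an error ToolMessage
# with this content instead of raising ToolException.
tool_node = ToolNode([get_weather], handle_tool_errors="Tool call failed; please try again.")
```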
Breaking changes
Pre-bound models: To better support structured output, `create_agent` no longer supports pre-bound models with tools or configuration (see the sketch below). Dynamic model functions can still return pre-bound models if structured output is not used.
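A sketch of the change; the model name and tool are placeholders.

```python
from langchain.agents import create_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"22 C and sunny in {city}"


model = ChatOpenAI(model="gpt-4o-mini")  # placeholder model

# No longer supported: passing a model that already has tools bound
# agent = create_agent(model.bind_tools([get_weather]), tools=[get_weather])

# Supported: pass the bare model and let create_agent handle tool binding
agent = create_agent(model, tools=[get_weather])
```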
Reporting issues
Please report any issues discovered with 1.0 on GitHub using the `'v1'` label.
See also
- Versioning - Understanding version numbers
- Release policy - Detailed release policies