You are viewing the v1 docs for LangChain, which is currently under active development.
1.0 Alpha releases are available for the following packages:
  • langchain
  • @langchain/core
  • @langchain/anthropic
  • @langchain/openai
Broader support will be rolled out during the alpha period.

New features

Core exports from langchain

The langchain package now exports key primitives like tool, message types, ToolNode, createAgent, and more directly from the root package.
import { tool, HumanMessage, createAgent } from "langchain";

createAgent in core langchain

The ReAct-style agent is now part of the core langchain package. Import it directly from langchain:
import { createAgent, HumanMessage, tool } from "langchain";
import { z } from "zod";

const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "getWeather",
  description: "Get current weather by city",
  schema: z.object({ city: z.string() }),
});

const agent = await createAgent({
  // New: pass a model by name
  model: "openai:gpt-4o-mini",
  tools: [getWeather],
  responseFormat: z.object({ answer: z.string() }),
});

const res = await agent.invoke({
  messages: [new HumanMessage("Weather in SF?")],
});
console.log(res.structuredResponse.answer);
You can now pass the model as a provider-prefixed string instead of a model instance. This requires the corresponding model provider package to be installed (e.g. @langchain/openai for openai:gpt-4o-mini).
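As a rough illustration of the string format, the provider prefix is separated from the model name by the first colon. The helper below is hypothetical and only sketches the convention; LangChain's actual resolution logic lives inside createAgent and may differ:

```typescript
// Hypothetical sketch of splitting a "provider:model" string.
// Not LangChain's internal resolver - shown only to illustrate the format.
function parseModelString(spec: string): { provider: string; model: string } {
  const idx = spec.indexOf(":");
  if (idx === -1) {
    throw new Error(`Expected "provider:model", got "${spec}"`);
  }
  // Split on the first colon only, since model names may contain colons.
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}
```

For example, `parseModelString("openai:gpt-4o-mini")` yields the provider `openai`, which is why the `@langchain/openai` package must be installed.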

ToolNode exported from langchain

Build agent graphs that execute tools as a node. This makes tool execution composable within graph workflows.
import { StateGraph } from "@langchain/langgraph";
import { ToolNode, tool } from "langchain";
import { z } from "zod";

const tools = [tool(async ({ query }) => `Results for: ${query}`, {
  name: "search", schema: z.object({ query: z.string() })
})];

const graph = new StateGraph({
  channels: {
    messages: { reducer: (a: any[], b: any[]) => a.concat(b), default: () => [] },
  },
})
  .addNode("tools", new ToolNode(tools))
  .addEdge("__start__", "tools");

const result = await graph.compile().invoke({
  messages: [/* tool call messages */]
});
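Conceptually, ToolNode looks up each tool call by name and appends a tool result for it. The framework-free sketch below illustrates that dispatch loop under simplified types (the real ToolNode also handles message formats, streaming, and the error policies described in the next section):

```typescript
// Simplified sketch of ToolNode-style dispatch; not LangChain's implementation.
type ToolCall = { name: string; args: Record<string, unknown>; id: string };
type ToolFn = (args: Record<string, unknown>) => Promise<string>;

async function runToolCalls(
  calls: ToolCall[],
  tools: Map<string, ToolFn>
): Promise<{ tool_call_id: string; content: string }[]> {
  const results: { tool_call_id: string; content: string }[] = [];
  for (const call of calls) {
    const fn = tools.get(call.name);
    if (!fn) throw new Error(`Unknown tool: ${call.name}`);
    // Each result is tagged with the originating call id, so the model
    // can match tool output back to the request it made.
    results.push({ tool_call_id: call.id, content: await fn(call.args) });
  }
  return results;
}
```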

Default tool-error handling in agents

When handleToolErrors is true (default), tool exceptions are caught and converted into a ToolMessage so the agent can recover. Set it to false to surface raw errors for strict workflows.
import { ToolNode, ToolMessage, tool } from "langchain";

const tools = [
  tool(/* ... */),
  // ...
];

// default: handleToolErrors: true → returns a ToolMessage with error text
const forgiving = new ToolNode(tools);

// strict: throw on tool error
const strict = new ToolNode(tools, { handleToolErrors: false });

// dynamic: custom error handling
const dynamic = new ToolNode(tools, {
  handleToolErrors: (error, toolCall) => {
    if (error instanceof Error && error.message.includes("Fetch Failed")) {
      return new ToolMessage({
        content: "Fetch Failed. Please try again.",
        tool_call_id: toolCall.id!,
      });
    }

    throw error;
  },
});

Standard typed message content

@langchain/core features standard, typed message content. This includes standard types for reasoning, citations, server-side tool calls, and other modern LLM features. There are no breaking changes to existing message content. The standard content can be lazily parsed from existing v0 messages via the contentBlocks property:
import { AIMessage } from "@langchain/core/messages";

new AIMessage("Hello, world").contentBlocks;
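The value of typed content is that each block can be narrowed by its type tag. The union below is illustrative only (the real block shapes ship with @langchain/core and their field names may differ); it shows the narrowing pattern such a union enables:

```typescript
// Illustrative sketch only - the actual content-block types are defined in
// @langchain/core and may use different fields. Demonstrates type narrowing.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "reasoning"; reasoning: string }
  | { type: "citation"; url: string };

function extractText(blocks: ContentBlock[]): string {
  return blocks
    // The type predicate narrows each matching block to the "text" variant.
    .filter((b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text")
    .map((b) => b.text)
    .join("");
}
```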

Breaking changes

Migration guide

Follow these steps to migrate your JavaScript/TypeScript code to LangChain v1.0:

1. Node version

Set engines.node to >=20 and update CI runners:
// package.json
{ "engines": { "node": ">=20" } }

2. Import path updates

All langchain/schema/* exports removed:
- import { PromptTemplate } from "langchain/schema/prompt_template";
+ import { PromptTemplate } from "langchain/prompts";

- import type { AttributeInfo } from "langchain/schema/query_constructor";
+ import type { AttributeInfo } from "langchain/chains/query_constructor";
Remove unsupported imports:
- import { ... } from "langchain/runnables/remote";  // No longer exported
- import { ... } from "langchain/smith";             // Use separate langsmith package
- import { ... } from "langchain/callbacks";         // Use LCEL observability instead
- import { ... } from "langchain/agents";            // Use createAgent instead
Azure OpenAI package removed:
- import { AzureChatOpenAI } from "@langchain/azure-openai";
+ import { AzureChatOpenAI } from "@langchain/openai";
// Configure with Azure endpoints in ChatOpenAI constructor

3. Agent migration

Replace legacy agent imports with createAgent:
- import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
- import { createReactAgent } from "@langchain/langgraph/prebuilt";
+ import { createAgent } from "langchain";
Migrate from legacy agent patterns:
// Old pattern
- const agent = createOpenAIFunctionsAgent({ llm, tools, prompt });
- const agentExecutor = new AgentExecutor({ agent, tools });
- const result = await agentExecutor.invoke({ input: "Hello" });

// New pattern
+ const agent = createAgent({
+   model: "openai:gpt-4o-mini",
+   tools,
+   responseFormat: z.object({ answer: z.string() })
+ });
+ const result = await agent.invoke({
+   messages: [new HumanMessage("Hello")]
+ });
You can now pass model as a model name string to createAgent. This requires you to have the specific model provider package installed (e.g. @langchain/openai for openai:gpt-4o-mini).

Use ToolNode to encapsulate tool execution in graphs:
import { ToolNode } from "langchain";

const toolNode = new ToolNode([/* tools */]);

4. Error handling configuration

Decide on error policy for ToolNode:
// Default: soft-handling (converts errors to ToolMessage)
const forgiving = new ToolNode([/* tools */], { handleToolErrors: true });

// Strict: throw on tool error
const strict = new ToolNode([/* tools */], { handleToolErrors: false });
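The two policies differ only in what happens when a tool throws. In spirit (a framework-free sketch of the semantics, not ToolNode's source), the soft policy converts the exception into content the model can read, while the strict policy rethrows:

```typescript
// Sketch of the soft vs. strict error policies; not ToolNode's implementation.
async function callTool(
  fn: () => Promise<string>,
  handleToolErrors: boolean
): Promise<{ ok: boolean; content: string }> {
  try {
    return { ok: true, content: await fn() };
  } catch (err) {
    if (!handleToolErrors) throw err; // strict: surface the raw error
    // soft: turn the failure into a message the agent can see and recover from
    return { ok: false, content: `Tool error: ${(err as Error).message}` };
  }
}
```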

Reporting issues

Please report any issues discovered with 1.0 on GitHub using the 'v1' label.
