LangChain v1 is a focused, production-ready foundation for building agents. We’ve streamlined the framework around three core improvements, detailed in the sections below. To upgrade:
npm install langchain @langchain/core
For a complete list of changes, see the migration guide.

createAgent

createAgent is the standard way to build agents in LangChain 1.0. It provides a simpler interface than the prebuilt createReactAgent exported from LangGraph, while offering greater customization through middleware.
import { createAgent } from "langchain";

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [getWeather], // a weather tool defined elsewhere
  systemPrompt: "You are a helpful assistant.",
});

const result = await agent.invoke({
  messages: [
    { role: "user", content: "What is the weather in Tokyo?" },
  ],
});

console.log(result.messages.at(-1)?.content);
Under the hood, createAgent is built on the basic agent loop — calling a model, letting it choose tools to execute, and then finishing when it calls no more tools:
Core agent loop diagram
For more information, see Agents.
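Conceptually, the loop can be sketched in plain TypeScript with a mocked model and tool (the names below are illustrative, not LangChain APIs):

```typescript
// A minimal sketch of the core agent loop with a mocked model and tools.
// All names here are illustrative, not LangChain APIs.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelResponse = { content: string; toolCalls: ToolCall[] };

const tools: Record<string, (args: any) => string> = {
  get_weather: ({ city }) => `Sunny in ${city}`,
};

// Mock model: asks for the weather tool once, then answers.
function mockModel(history: string[]): ModelResponse {
  const sawToolResult = history.some((m) => m.startsWith("tool:"));
  return sawToolResult
    ? { content: "It is sunny in Tokyo.", toolCalls: [] }
    : { content: "", toolCalls: [{ name: "get_weather", args: { city: "Tokyo" } }] };
}

function runAgent(userMessage: string): string {
  const history = [`user:${userMessage}`];
  while (true) {
    const response = mockModel(history);
    if (response.toolCalls.length === 0) return response.content; // no more tools: done
    for (const call of response.toolCalls) {
      history.push(`tool:${tools[call.name](call.args)}`); // execute each requested tool
    }
  }
}

console.log(runAgent("What is the weather in Tokyo?")); // → "It is sunny in Tokyo."
```

The real loop tracks structured messages and streaming, but the control flow is the same: model call, tool execution, repeat until the model stops requesting tools.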

Middleware

Middleware is the defining feature of createAgent. It makes createAgent highly customizable, raising the ceiling for what you can build. Great agents require context engineering: getting the right information to the model at the right time. Middleware helps you control dynamic prompts, conversation summarization, selective tool access, state management, and guardrails through a composable abstraction.

Prebuilt middleware

LangChain provides a few prebuilt middlewares for common patterns, including:
  • summarizationMiddleware: Condense conversation history when it gets too long
  • humanInTheLoopMiddleware: Require approval for sensitive tool calls
  • piiRedactionMiddleware: Redact sensitive information before sending to the model
import {
  createAgent,
  summarizationMiddleware,
  humanInTheLoopMiddleware,
  piiRedactionMiddleware,
} from "langchain";

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [readEmail, sendEmail],
  middleware: [
    piiRedactionMiddleware({ patterns: ["email", "phone", "ssn"] }),
    summarizationMiddleware({
      model: "anthropic:claude-sonnet-4-5",
      maxTokensBeforeSummary: 500,
    }),
    humanInTheLoopMiddleware({
      interruptOn: {
        sendEmail: {
          allowedDecisions: ["approve", "edit", "reject"],
        },
      },
    }),
  ] as const,
});

Custom middleware

You can also build custom middleware to fit your specific needs by implementing any of the following hooks with the createMiddleware function:
Hook          | When it runs             | Use cases
beforeAgent   | Before calling the agent | Load memory, validate input
beforeModel   | Before each LLM call     | Update prompts, trim messages
wrapModelCall | Around each LLM call     | Intercept and modify requests/responses
wrapToolCall  | Around each tool call    | Intercept and modify tool execution
afterModel    | After each LLM response  | Validate output, apply guardrails
afterAgent    | After agent completes    | Save results, cleanup
Middleware flow diagram
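The two wrap hooks compose like an onion: each middleware receives the request plus a handler that invokes the next layer. A dependency-free sketch of that composition (illustrative, not the LangChain API):

```typescript
// Dependency-free sketch of wrap-style middleware composition.
// Requests are plain strings here; in LangChain they are structured objects.
type Handler = (request: string) => string;
type Middleware = (request: string, next: Handler) => string;

const logging: Middleware = (req, next) => {
  const result = next(req); // call the inner layer first
  return `[logged] ${result}`;
};

const redaction: Middleware = (req, next) =>
  next(req.replace(/\d{3}-\d{2}-\d{4}/g, "[SSN]")); // scrub before the model sees it

// Compose outermost-first: the first middleware wraps all the others.
function compose(middleware: Middleware[], core: Handler): Handler {
  return middleware.reduceRight<Handler>(
    (next, mw) => (req) => mw(req, next),
    core
  );
}

const model: Handler = (req) => `model saw: ${req}`;
const pipeline = compose([logging, redaction], model);
console.log(pipeline("SSN is 123-45-6789"));
// → "[logged] model saw: SSN is [SSN]"
```

Ordering matters: here redaction runs on the way in before the model, while logging wraps the final result on the way out.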
Example custom middleware:
import { createMiddleware } from "langchain";
import * as z from "zod";

const contextSchema = z.object({
  userExpertise: z.enum(["beginner", "expert"]).default("beginner"),
});

const expertiseBasedToolMiddleware = createMiddleware({
  wrapModelCall: async (request, handler) => {
    const userLevel = request.runtime.context.userExpertise;
    if (userLevel === "expert") {
      // Experts get a more capable model and the advanced tools
      const tools = [advancedSearch, dataAnalysis];
      return handler(request.replace("openai:gpt-5", tools));
    }
    // Everyone else gets a cheaper model and the simpler tools
    const tools = [simpleSearch, basicCalculator];
    return handler(request.replace("openai:gpt-5-nano", tools));
  },
});

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [simpleSearch, advancedSearch, basicCalculator, dataAnalysis],
  middleware: [expertiseBasedToolMiddleware],
  contextSchema,
});
For more information, see the complete middleware guide.

Built on LangGraph

Because createAgent is built on LangGraph, you automatically get built-in support for long-running, reliable agents via:

Persistence

Conversations automatically persist across sessions with built-in checkpointing

Streaming

Stream tokens, tool calls, and reasoning traces in real-time

Human-in-the-loop

Pause agent execution for human approval before sensitive actions

Time travel

Rewind conversations to any point and explore alternate paths and prompts
You don’t need to learn LangGraph to use these features—they work out of the box.
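Conceptually, checkpointing keys saved state by a thread id, so a new invocation with the same thread id resumes the prior conversation. A dependency-free sketch of the idea (not LangGraph's actual implementation):

```typescript
// Conceptual sketch of thread-scoped checkpointing; LangGraph's real
// checkpointers also version state and support durable backends.
type Message = { role: "user" | "assistant"; content: string };

class InMemoryCheckpointer {
  private threads = new Map<string, Message[]>();

  load(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }

  save(threadId: string, messages: Message[]): void {
    this.threads.set(threadId, messages); // overwrite with the latest state
  }
}

const saver = new InMemoryCheckpointer();

// Turn 1: state is persisted under the thread id.
const turn1: Message[] = [...saver.load("thread-1"), { role: "user", content: "Hi" }];
saver.save("thread-1", turn1);

// Turn 2: loading the same thread id restores the prior conversation;
// a different thread id starts fresh.
console.log(saver.load("thread-1").length); // → 1
console.log(saver.load("thread-2").length); // → 0
```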

Structured output

createAgent has improved structured output generation:
  • Main loop integration: Structured output is now generated in the main loop instead of requiring an additional LLM call
  • Structured output strategy: Models can choose between calling tools or using provider-side structured output generation
  • Cost reduction: Eliminates extra expense from additional LLM calls
import { createAgent } from "langchain";
import * as z from "zod";

const weatherSchema = z.object({
  temperature: z.number(),
  condition: z.string(),
});

const agent = createAgent({
  model: "openai:gpt-4o-mini",
  tools: [getWeather],
  responseFormat: weatherSchema,
});

const result = await agent.invoke({
  messages: [
    { role: "user", content: "What is the weather in Tokyo?" },
  ],
});

console.log(result.structuredResponse);
Error handling: Control error handling via the handleErrors parameter of ToolStrategy, which covers:
  • Parsing errors: The model generates data that doesn’t match the desired structure
  • Multiple tool calls: The model generates two or more tool calls for structured output schemas
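A parsing error simply means the model's raw output fails schema validation. A dependency-free sketch of that check (hand-rolled in place of zod for illustration):

```typescript
// Hand-rolled validation standing in for schema parsing (zod omitted
// to keep this sketch dependency-free).
type Weather = { temperature: number; condition: string };

function parseWeather(raw: unknown): Weather | null {
  if (typeof raw !== "object" || raw === null) return null;
  const obj = raw as Record<string, unknown>;
  if (typeof obj.temperature !== "number" || typeof obj.condition !== "string") {
    return null; // the "parsing error" case that handleErrors deals with
  }
  return { temperature: obj.temperature, condition: obj.condition };
}

console.log(parseWeather({ temperature: 18, condition: "cloudy" }));
console.log(parseWeather({ temperature: "18°C" })); // → null
```

On a validation failure like the second call, the agent can surface the error, retry, or feed it back to the model, depending on how handleErrors is configured.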

Standard content blocks

1.0 releases are available for most packages. Only the following currently support new content blocks:
  • langchain
  • @langchain/core
  • @langchain/anthropic
  • @langchain/openai
Broader support for content blocks is planned.

Benefits

  • Provider agnostic: Access reasoning traces, citations, built-in tools (web search, code interpreters, etc.), and other features using the same API regardless of provider
  • Type safe: Full type hints for all content block types
  • Backward compatible: Standard content can be loaded lazily, so there are no associated breaking changes
For more information, see our guide on content blocks.
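Content blocks behave like a discriminated union over a type field, which is what makes the provider-agnostic, type-safe access possible. A sketch of consuming them (block shapes below are assumptions for illustration, not the exact LangChain types):

```typescript
// Illustrative content-block shapes; field names are assumptions,
// not the exact LangChain type definitions.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "reasoning"; reasoning: string }
  | { type: "citation"; url: string };

// The switch is exhaustive over the union, so the compiler flags
// any block type this code forgets to handle.
function renderBlocks(blocks: ContentBlock[]): string {
  return blocks
    .map((block) => {
      switch (block.type) {
        case "text":
          return block.text;
        case "reasoning":
          return `(thinking: ${block.reasoning})`;
        case "citation":
          return `[source: ${block.url}]`;
      }
    })
    .join(" ");
}

console.log(
  renderBlocks([
    { type: "reasoning", reasoning: "check the forecast" },
    { type: "text", text: "It is sunny." },
  ])
);
// → "(thinking: check the forecast) It is sunny."
```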

Simplified package

LangChain v1 streamlines the langchain package namespace to focus on the essential building blocks for agents. The package exposes only the most useful and relevant functionality, much of it re-exported from @langchain/core for convenience, giving you a focused API surface for building agents.

@langchain/classic

Legacy functionality has moved to @langchain/classic to keep the core package lean and focused.

What’s in @langchain/classic

  • Legacy chains and chain implementations
  • Retrievers
  • The indexing API
  • @langchain/community exports
  • Other deprecated functionality
If you use any of this functionality, install @langchain/classic:
npm install @langchain/classic
Then update your imports:
// Before
import { ... } from "langchain";
// After
import { ... } from "@langchain/classic";

// Before
import { ... } from "langchain/chains";
// After
import { ... } from "@langchain/classic/chains";

Reporting issues

Please report any issues discovered with 1.0 on GitHub using the 'v1' label.
