Tools encapsulate a callable function and its input schema. These can be passed to compatible chat models, allowing the model to decide whether to invoke a tool and determine the appropriate arguments. You can define your own tools or use prebuilt tools.

Define a tool

Define a basic tool with the tool function:
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

Run a tool

Tools conform to the Runnable interface, which means you can run a tool using the invoke method:
await multiply.invoke({ a: 6, b: 7 }); // returns 42
If the tool is invoked with a ToolCall (an object with type: "tool_call"), it will return a ToolMessage:
const toolCall = {
  type: "tool_call",
  id: "1",
  name: "multiply",
  args: { a: 42, b: 7 },
};
await multiply.invoke(toolCall); // returns a ToolMessage object
Output:
ToolMessage {
  content: "294",
  name: "multiply",
  tool_call_id: "1"
}

Use in an agent

To create a tool-calling agent, you can use the prebuilt createReactAgent:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";
// highlight-next-line
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

// highlight-next-line
const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [multiply],
});

await agent.invoke({
  messages: [{ role: "user", content: "what's 42 x 7?" }],
});

Use in a workflow

If you are writing a custom workflow, you will need to:
  1. register the tools with the chat model
  2. call the tool if the model decides to use it
Use model.bindTools() to register the tools with the model.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

// highlight-next-line
const modelWithTools = model.bindTools([multiply]);
The model then decides whether a tool invocation is necessary and generates the appropriate arguments; actually executing the tool (step 2) is your workflow's responsibility.
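A minimal sketch of step 2, reusing the multiply tool from above: inspect the tool_calls on the model's response and run each one.

const response = await modelWithTools.invoke("What is 6 times 7?");

// Each tool call can be passed straight to the matching tool, which
// returns a ToolMessage (see "Run a tool" above).
const toolMessages = await Promise.all(
  (response.tool_calls ?? []).map((toolCall) => multiply.invoke(toolCall))
);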

ToolNode

To execute tools in custom workflows, use the prebuilt ToolNode or implement your own custom node. ToolNode is a specialized node for executing tools in a workflow. It provides the following features:
  • Supports both synchronous and asynchronous tools.
  • Executes multiple tools concurrently.
  • Handles errors during tool execution (handleToolErrors: true, enabled by default). See handling tool errors for more details.
  • Input: MessagesZodState, where the last message is an AIMessage containing the tool_calls parameter.
  • Output: MessagesZodState updated with the resulting ToolMessage from executed tools.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// highlight-next-line
import { ToolNode } from "@langchain/langgraph/prebuilt";

const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

const getCoolestCities = tool(
  () => {
    return "nyc, sf";
  },
  {
    name: "get_coolest_cities",
    description: "Get a list of coolest cities",
    schema: z.object({
      noOp: z.string().optional().describe("No-op parameter."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather, getCoolestCities]);
await toolNode.invoke({ messages: [...] });
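For example, given an AIMessage that carries two tool calls, ToolNode executes both concurrently and returns one ToolMessage per call:

import { AIMessage } from "@langchain/core/messages";

const messageWithToolCalls = new AIMessage({
  content: "",
  tool_calls: [
    { name: "get_coolest_cities", args: {}, id: "1", type: "tool_call" },
    { name: "get_weather", args: { location: "sf" }, id: "2", type: "tool_call" },
  ],
});

// Both tools run concurrently; the result holds one ToolMessage per call.
await toolNode.invoke({ messages: [messageWithToolCalls] });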

Tool customization

For more control over tool behavior, use the configuration options of the tool function.

Parameter descriptions

Describe each parameter in the input schema with Zod's .describe(); these descriptions are passed to the model along with the tool:
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

Tool name

Override the default tool name via the name property:
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply_tool", // Custom name
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

Context management

Tools within LangGraph sometimes require context data, such as runtime-only arguments (e.g., user IDs or session details), that should not be controlled by the model. LangGraph provides three methods for managing such context:
| Type              | Usage scenario                           | Mutable | Lifetime                 |
| ----------------- | ---------------------------------------- | ------- | ------------------------ |
| Configuration     | Static, immutable runtime data           | No      | Single invocation        |
| Short-term memory | Dynamic, changing data during invocation | Yes     | Single invocation        |
| Long-term memory  | Persistent, cross-session data           | Yes     | Across multiple sessions |

Configuration

Use configuration when you have immutable runtime data that tools require, such as user identifiers. You pass these arguments via LangGraphRunnableConfig at invocation and access them in the tool:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserInfo = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    const userId = config?.configurable?.user_id;
    return userId === "user_123" ? "User is John Smith" : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Retrieve user information based on user ID.",
    schema: z.object({}),
  }
);

// Invocation example with an agent
await agent.invoke(
  { messages: [{ role: "user", content: "look up user info" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);

Short-term memory

Short-term memory maintains dynamic state that changes during a single execution. To read the graph state inside a tool, use the getContextVariable function:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { getContextVariable } from "@langchain/core/context";
import { MessagesZodState } from "@langchain/langgraph";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserName = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    // highlight-next-line
    const currentState = getContextVariable("currentState") as z.infer<
      typeof MessagesZodState
    > & { userName?: string };
    return currentState?.userName || "Unknown user";
  },
  {
    name: "get_user_name",
    description: "Retrieve the current user name from state.",
    schema: z.object({}),
  }
);
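For getContextVariable to find the state, something must set it first. A minimal sketch of a model-calling node that exposes its state via setContextVariable (the modelWithTools binding is an assumption carried over from earlier):

import { setContextVariable } from "@langchain/core/context";

const agentNode = async (state: z.infer<typeof MessagesZodState>) => {
  // Expose the current graph state so tools can read it with getContextVariable.
  // highlight-next-line
  setContextVariable("currentState", state);
  const response = await modelWithTools.invoke(state.messages);
  return { messages: [response] };
};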
To update short-term memory, you can use tools that return a Command to update state:
import { Command } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const updateUserName = tool(
  async (input) => {
    // highlight-next-line
    return new Command({
      update: {
        userName: input.newName,
        messages: [
          {
            role: "assistant",
            content: `Updated user name to ${input.newName}`,
          },
        ],
      },
    });
  },
  {
    name: "update_user_name",
    description: "Update user name in short-term memory.",
    schema: z.object({
      newName: z.string().describe("The new user name"),
    }),
  }
);
If you want to use tools that return Command and update graph state, you can either use prebuilt createReactAgent / ToolNode components, or implement your own tool-executing node that collects Command objects returned by the tools and returns a list of them, e.g.:
import { AIMessage } from "@langchain/core/messages";

// toolsByName maps each tool's name to the tool instance.
const callTools = async (state: z.infer<typeof MessagesZodState>) => {
  const lastMessage = state.messages.at(-1) as AIMessage;
  const toolCalls = lastMessage.tool_calls ?? [];
  const commands = await Promise.all(
    toolCalls.map((toolCall) => toolsByName[toolCall.name].invoke(toolCall))
  );
  return commands;
};

Long-term memory

Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you want to remember user preferences or other information. To use long-term memory, you need to:
  1. Configure a store to persist data across invocations (a minimal setup is sketched after this list).
  2. Access the store from within tools.
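A minimal setup for step 1, using the in-memory store that ships with LangGraph and the two tools defined below (swap in a persistent store for production):

import { InMemoryStore } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// highlight-next-line
const store = new InMemoryStore();

const agent = createReactAgent({
  llm: model,
  tools: [getUserInfo, saveUserInfo],
  // highlight-next-line
  store,
});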
To access information in the store:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserInfo = tool(
  async (_, config: LangGraphRunnableConfig) => {
    // Same as that provided to `builder.compile({ store })`
    // or `createReactAgent`
    // highlight-next-line
    const store = config.store;
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    const userInfo = await store.get(["users"], userId);
    return userInfo?.value ? JSON.stringify(userInfo.value) : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);
To update information in the store:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const saveUserInfo = tool(
  async (input, config: LangGraphRunnableConfig) => {
    // Same as that provided to `builder.compile({ store })`
    // or `createReactAgent`
    // highlight-next-line
    const store = config.store;
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    await store.put(["users"], userId, input.userInfo);
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: z.object({
      userInfo: z.string().describe("User information to save"),
    }),
  }
);

Advanced tool features

Immediate return

Use returnDirect: true to immediately return a tool’s result without executing additional logic. This is useful for tools that should not trigger further processing or tool calls, allowing you to return results directly to the user.
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const add = tool(
  (input) => {
    return input.a + input.b;
  },
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    // highlight-next-line
    returnDirect: true,
  }
);
Using without prebuilt components: If you are building a custom workflow and are not relying on createReactAgent or ToolNode, you will also need to implement the control flow that handles returnDirect: true (a sketch follows).
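A hypothetical sketch of that control flow, assuming a toolsByName map and a "model" node: after the tools run, route to END when the executed tool is marked returnDirect.

import { END } from "@langchain/langgraph";

const routeAfterTools = (state: z.infer<typeof MessagesZodState>) => {
  const lastMessage = state.messages.at(-1);
  // If the tool that just ran is marked returnDirect, end the run and
  // surface its ToolMessage to the user instead of calling the model again.
  if (lastMessage?.name && toolsByName[lastMessage.name]?.returnDirect) {
    return END;
  }
  return "model";
};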

Force tool use

If you need to force a specific tool to be used, configure this at the model level via the tool_choice option of the bindTools method:
const greet = tool(
  (input) => {
    return `Hello ${input.userName}!`;
  },
  {
    name: "greet",
    description: "Greet user.",
    schema: z.object({
      userName: z.string(),
    }),
    returnDirect: true,
  }
);

const tools = [greet];

const configuredModel = model.bindTools(
  tools,
  // Force the use of the 'greet' tool
  // highlight-next-line
  { tool_choice: { type: "tool", name: "greet" } }
);
Avoid infinite loops: Forcing tool usage without stopping conditions can create infinite loops. Use one of the following safeguards:
  • Pair the forced tool with returnDirect: true (as in the greet example above) so the run ends once the tool executes.
  • Set a recursionLimit to bound the number of execution steps.
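For the second safeguard, the recursion limit goes in the invocation config. A minimal sketch, assuming an agent like the one built in "Use in an agent":

await agent.invoke(
  { messages: [{ role: "user", content: "hello!" }] },
  // highlight-next-line
  { recursionLimit: 10 } // stop after at most 10 steps
);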
Tool choice configuration: The tool_choice parameter configures which tool the model should call when it uses a tool. This is useful when you want to guarantee that a specific tool is invoked for a particular task, or to override the model's default tool-selection behavior. Note that not all models support this feature, and the exact configuration may vary depending on the model you are using.

Disable parallel calls

For supported providers, you can disable parallel tool calling by setting parallel_tool_calls: false via the model.bindTools() method:
model.bindTools(
  tools,
  // highlight-next-line
  { parallel_tool_calls: false }
);

Handle errors

LangGraph provides built-in error handling for tool execution through the prebuilt ToolNode component, which is used both on its own and inside prebuilt agents. By default, ToolNode catches exceptions raised during tool execution and returns them as ToolMessage objects whose status field indicates an error.
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const multiply = tool(
  (input) => {
    if (input.a === 42) {
      throw new Error("The ultimate error");
    }
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

// Default error handling (enabled by default)
const toolNode = new ToolNode([multiply]);

const message = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "multiply",
      args: { a: 42, b: 7 },
      id: "tool_call_id",
      type: "tool_call",
    },
  ],
});

const result = await toolNode.invoke({ messages: [message] });
Output:
{ messages: [
  ToolMessage {
    content: "Error: The ultimate error\n Please fix your mistakes.",
    name: "multiply",
    tool_call_id: "tool_call_id",
    status: "error"
  }
]}

Disable error handling

To propagate exceptions directly, disable error handling:
const toolNode = new ToolNode([multiply], { handleToolErrors: false });
With error handling disabled, exceptions raised by tools will propagate up, requiring explicit management.
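For example, you might wrap the invocation yourself (reusing the message from the example above):

try {
  await toolNode.invoke({ messages: [message] });
} catch (error) {
  // The tool's exception surfaces here; log it, retry, or convert it
  // into a custom message as your workflow requires.
  console.error(error);
}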

Custom error messages

Provide a custom error message by setting handleToolErrors to a string:
const toolNode = new ToolNode([multiply], {
  handleToolErrors:
    "Can't use 42 as the first operand, please switch operands!",
});
Example output:
{ messages: [
  ToolMessage {
    content: "Can't use 42 as the first operand, please switch operands!",
    name: "multiply",
    tool_call_id: "tool_call_id",
    status: "error"
  }
]}

Error handling in agents

Error handling in prebuilt agents (createReactAgent) leverages ToolNode:
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [multiply],
});

// Default error handling
await agent.invoke({
  messages: [{ role: "user", content: "what's 42 x 7?" }],
});
To disable or customize error handling in prebuilt agents, explicitly pass a configured ToolNode:
const customToolNode = new ToolNode([multiply], {
  handleToolErrors: "Cannot use 42 as a first operand!",
});

const agentCustom = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: customToolNode,
});

await agentCustom.invoke({
  messages: [{ role: "user", content: "what's 42 x 7?" }],
});

Handle large numbers of tools

As the number of available tools grows, you may want to limit the scope of the LLM’s selection, to decrease token consumption and to help manage sources of error in LLM reasoning. To address this, you can dynamically adjust the tools available to a model by retrieving relevant tools at runtime using semantic search. See langgraph-bigtool prebuilt library for a ready-to-use implementation.
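If you want to sketch this yourself, one hypothetical approach (allTools, the embedding model, and the query are assumptions): index each tool's description in a vector store, then bind only the top matches for each query.

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

// Index each tool's description once.
const toolStore = await MemoryVectorStore.fromDocuments(
  allTools.map(
    (t) => new Document({ pageContent: t.description, metadata: { name: t.name } })
  ),
  new OpenAIEmbeddings()
);

// Per query, retrieve the most relevant tools and bind only those.
const selectTools = async (query: string, k = 5) => {
  const hits = await toolStore.similaritySearch(query, k);
  const names = new Set(hits.map((d) => d.metadata.name));
  return allTools.filter((t) => names.has(t.name));
};

const relevantTools = await selectTools("what's the weather in sf?");
const modelWithRelevantTools = model.bindTools(relevantTools);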

Prebuilt tools

LLM provider tools

You can use prebuilt tools from model providers by passing an object with the tool spec to the tools parameter of createReactAgent. For example, to use the web_search_preview tool from OpenAI:
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [{ type: "web_search_preview" }],
});

const response = await agent.invoke({
  messages: [
    { role: "user", content: "What was a positive news story from today?" },
  ],
});
Please consult the documentation for the specific model you are using to see which tools are available and how to use them.

LangChain tools

Additionally, LangChain supports a wide range of prebuilt tool integrations for interacting with APIs, databases, file systems, web data, and more. These tools extend the functionality of agents and enable rapid development. You can browse the full list of available integrations in the LangChain integrations directory. Some commonly used tool categories include:
  • Search: Tavily, SerpAPI
  • Code interpreters: Code sandboxes, calculators
  • Databases: SQL and vector databases
  • Web data: Web browsers, scraping tools
  • APIs: Various API integrations
These integrations can be configured and added to your agents using the same tools parameter shown in the examples above.