You are viewing the v1 docs for LangChain, which is currently under active development.

Overview

Tools are components that Agents call to perform actions. They extend model capabilities by letting them interact with the world through well-defined inputs and outputs.

Creating tools

Basic tool definition

The simplest way to create a tool is with the @tool decorator. By default, the function’s docstring becomes the tool’s description, which helps the model understand when to use it:
from langchain_core.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query.

    Args:
        query: Search terms to look for
        limit: Maximum number of results to return
    """
    return f"Found {limit} results for '{query}'"
Type hints are required as they define the tool’s input schema. The docstring should be informative and concise to help the model understand the tool’s purpose.
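To see why type hints matter, here is a minimal sketch of how a decorator could derive an input schema from a function's annotations and defaults. This is an illustration of the principle only, not LangChain's actual implementation (`derive_schema` is a hypothetical helper):

```python
import inspect
from typing import get_type_hints

def derive_schema(fn):
    """Sketch: derive a simple input schema from type hints and defaults
    (hypothetical helper, not LangChain's internals)."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters matter for the input schema
    schema = {}
    for name, param in inspect.signature(fn).parameters.items():
        schema[name] = {
            "type": hints[name].__name__,
            # parameters without a default are required
            "required": param.default is inspect.Parameter.empty,
        }
    return schema

def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query."""
    return f"Found {limit} results for '{query}'"

print(derive_schema(search_database))
# {'query': {'type': 'str', 'required': True}, 'limit': {'type': 'int', 'required': False}}
```

Without type hints, there is nothing to derive the schema from, which is why they are mandatory.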

Customizing tool properties

Custom tool name

By default, the tool name comes from the function name. Override it when you need something more descriptive:
@tool("web_search")  # Custom name
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

print(search.name)  # web_search

Custom tool description

Override the auto-generated tool description for clearer model guidance:
@tool("calculator", description="Performs arithmetic calculations. Use this for any math problems.")
def calc(expression: str) -> str:
    """Evaluate mathematical expressions."""
    return str(eval(expression))
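The example above uses eval for brevity, but evaluating model-supplied strings with eval is unsafe in practice. A restricted evaluator that walks the AST and only permits basic arithmetic is one safer alternative; the sketch below (`safe_eval` is a hypothetical helper, not part of LangChain) shows the idea:

```python
import ast
import operator

# Whitelist of permitted binary operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expression: str) -> float:
    """Evaluate basic arithmetic without eval(), by walking the AST."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_eval("2 + 3 * 4"))  # 14
```

Names, attribute access, and function calls never match the whitelisted node types, so inputs like `__import__('os')` raise ValueError instead of executing.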

Advanced schema definition

Define complex inputs with Pydantic models or JSON schemas:
from pydantic import BaseModel, Field
from typing import Literal

class WeatherInput(BaseModel):
    """Input for weather queries."""
    location: str = Field(description="City name or coordinates")
    units: Literal["celsius", "fahrenheit"] = Field(
        default="celsius",
        description="Temperature unit preference"
    )
    include_forecast: bool = Field(
        default=False,
        description="Include 5-day forecast"
    )

@tool(args_schema=WeatherInput)
def get_weather(location: str, units: str = "celsius", include_forecast: bool = False) -> str:
    """Get current weather and optional forecast."""
    temp = 22 if units == "celsius" else 72
    result = f"Current weather in {location}: {temp} degrees {units[0].upper()}"
    if include_forecast:
        result += "\nNext 5 days: Sunny"
    return result

Using tools with agents

Agents go beyond simple tool binding by adding reasoning loops, state management, and multi-step execution.
To see examples of how to use tools with agents, see Agents.

Advanced tool patterns

ToolNode

ToolNode is a prebuilt LangGraph component that handles tool calls within an agent’s workflow. It works seamlessly with create_agent(), offering advanced tool execution control, built-in parallelism, and error handling.

Configuration options

ToolNode accepts the following parameters:
from langchain.agents import ToolNode

tool_node = ToolNode(
    tools=[...],              # List of tools or callables
    handle_tool_errors=True,  # Error handling configuration
    ...
)
tools (required)
A list of tools that this node can execute. Can include:
  • LangChain @tool-decorated functions
  • Callable objects (e.g. functions) with proper type hints and a docstring
handle_tool_errors (default: the internal _default_handle_tool_errors callable)
Controls how tool execution failures are handled. Can be:
  • bool
  • str
  • Callable[..., str]
  • type[Exception]
  • tuple[type[Exception], ...]

Error handling strategies

ToolNode provides built-in error handling for tool execution through its handle_tool_errors property. To customize the behavior, set handle_tool_errors to a boolean, a string, a callable, an exception type, or a tuple of exception types:
  • True: Catch all errors and return a ToolMessage with the default error template containing the exception details.
  • str: Catch all errors and return a ToolMessage with this custom error message string.
  • type[Exception]: Only catch exceptions with the specified type and return the default error message for it.
  • tuple[type[Exception], ...]: Only catch exceptions with the specified types and return default error messages for them.
  • Callable[..., str]: Catch exceptions matching the callable’s signature and return the string result of calling it with the exception.
  • False: Disable error handling entirely, allowing exceptions to propagate.
handle_tool_errors defaults to a callable, _default_handle_tool_errors, that:
  • catches tool invocation errors (ToolInvocationError, raised when the model supplies invalid arguments) and returns a descriptive ToolMessage built from the template string TOOL_CALL_ERROR_TEMPLATE = "Error: {error}\n Please fix your mistakes."
  • lets other tool execution errors propagate (they are re-raised rather than converted into messages)
Examples of how to use the different error handling strategies:
# Catch all exceptions and return the default error template to the model
tool_node = ToolNode(tools=[my_tool], handle_tool_errors=True)

# Catch all exceptions and return a custom message string
tool_node = ToolNode(
    tools=[my_tool],
    handle_tool_errors="I encountered an issue. Please try rephrasing your request."
)

# Catch only ValueError and return a custom message; other exceptions propagate
def handle_errors(e: ValueError) -> str:
    return "Invalid input provided"

tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)

# Catch ValueError and KeyError with the default error template; other exceptions propagate
tool_node = ToolNode(
    tools=[my_tool],
    handle_tool_errors=(ValueError, KeyError)
)

Using with create_agent()

We recommend that you familiarize yourself with create_agent() before reading this section. Read more about agents.
Pass a configured ToolNode directly to create_agent():
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import ToolNode, create_agent
import random

@tool
def fetch_user_data(user_id: str) -> str:
    """Fetch user data from database."""
    if random.random() > 0.7:
        raise ConnectionError("Database connection timeout")
    return f"User {user_id}: John Doe, john@example.com, Active"

@tool
def process_transaction(amount: float, user_id: str) -> str:
    """Process a financial transaction."""
    if amount > 10000:
        raise ValueError(f"Amount {amount} exceeds maximum limit of 10000")
    return f"Processed ${amount} for user {user_id}"

def handle_errors(e: Exception) -> str:
    if isinstance(e, ConnectionError):
        return "The database is currently overloaded, but it is safe to retry. Please try again with the same parameters."
    elif isinstance(e, ValueError):
        return f"Error: {e}. Try to process the transaction in smaller amounts."
    return f"Error: {e}. Please try again."

tool_node = ToolNode(
    tools=[fetch_user_data, process_transaction],
    handle_tool_errors=handle_errors
)

agent = create_agent(
    model=ChatOpenAI(model="gpt-4o"),
    tools=tool_node,
    prompt="You are a financial assistant."
)

agent.invoke({
    "messages": [{"role": "user", "content": "Process a payment of 15000 dollars for user123. Generate a receipt email and address it to the user."}]
})
When you pass a ToolNode to create_agent(), the agent uses your exact configuration including error handling, custom names, and tags. This is useful when you need fine-grained control over tool execution behavior.

State, context, and memory