Tools extend what agents can do—letting them fetch real-time data, execute code, query external databases, and take actions in the world. Under the hood, tools are callable functions with well-defined inputs and outputs that get passed to a chat model. The model decides when to invoke a tool based on the conversation context, and what input arguments to provide.
Create tools
Basic tool definition
The simplest way to create a tool is with the `@tool` decorator. By default, the function’s docstring becomes the tool’s description, which helps the model understand when to use it:
Server-side tool use: Some chat models feature built-in tools (web search, code interpreters) that are executed server-side. See Server-side tool use for details.
Customize tool properties
Custom tool name
By default, the tool name comes from the function name. Override it when you need something more descriptive:
Custom tool description

Override the auto-generated tool description for clearer model guidance:
Advanced schema definition

Define complex inputs with Pydantic models or JSON schemas:
Reserved argument names

The following parameter names are reserved and cannot be used as tool arguments. Using these names will cause runtime errors.

| Parameter name | Purpose |
|---|---|
| `config` | Reserved for passing `RunnableConfig` to tools internally |
| `runtime` | Reserved for the `ToolRuntime` parameter (accessing state, context, store) |

Use the `ToolRuntime` parameter instead of naming your own arguments `config` or `runtime`.
Access context
Tools are most powerful when they can access runtime information like conversation history, user data, and persistent memory. This section covers how to access and update this information from within your tools. Tools can access runtime information through the `ToolRuntime` parameter, which provides:
| Component | Description | Use case |
|---|---|---|
| State | Short-term memory - mutable data that exists for the current conversation (messages, counters, custom fields) | Access conversation history, track tool call counts |
| Context | Immutable configuration passed at invocation time (user IDs, session info) | Personalize responses based on user identity |
| Store | Long-term memory - persistent data that survives across conversations | Save user preferences, maintain knowledge base |
| Stream Writer | Emit real-time updates during tool execution | Show progress for long-running operations |
| Execution Info | Identity and retry information for the current execution (thread ID, run ID, attempt number) | Access thread/run IDs, adjust behavior based on retry state |
| Server Info | Server-specific metadata when running on LangGraph Server (assistant ID, graph ID, authenticated user) | Access assistant ID, graph ID, or authenticated user info |
| Config | RunnableConfig for the execution | Access callbacks, tags, and metadata |
| Tool Call ID | Unique identifier for the current tool invocation | Correlate tool calls for logs and model invocations |
Short-term memory (State)
State represents short-term memory that exists for the duration of a conversation. It includes the message history and any custom fields you define in your graph state.

Add `runtime: ToolRuntime` to your tool signature to access state. This parameter is automatically injected and hidden from the LLM - it won’t appear in the tool’s schema.
Access state

Tools can access the current conversation state using `runtime.state`:
Update state
Use `Command` to update the agent’s state. This is useful for tools that need to update custom state fields.
Include a ToolMessage in the update so the model can see the result of the tool call:
Context
Context provides immutable configuration data that is passed at invocation time. Use it for user IDs, session details, or application-specific settings that shouldn’t change during a conversation. Access context through `runtime.context`:
Long-term memory (Store)
The `BaseStore` provides persistent storage that survives across conversations. Unlike state (short-term memory), data saved to the store remains available in future sessions.
Access the store through runtime.store. The store uses a namespace/key pattern to organize data:
Stream writer
Stream real-time updates from tools during execution. This is useful for providing progress feedback to users during long-running operations. Use `runtime.stream_writer` to emit custom updates:
If you use `runtime.stream_writer` inside your tool, the tool must be invoked within a LangGraph execution context. See Streaming for more details.
Execution info

Access thread ID, run ID, and retry state from within a tool via `runtime.execution_info`:
Requires `deepagents>=0.5.0` (or `langgraph>=1.1.5`).
Server info

When your tool runs on LangGraph Server, access the assistant ID, graph ID, and authenticated user via `runtime.server_info`:
`server_info` is `None` when the tool is not running on LangGraph Server (e.g., during local development or testing).
Requires `deepagents>=0.5.0` (or `langgraph>=1.1.5`).
Tool execution

In LangChain, tools are used by agents (for example via `create_agent`) and tool error handling is configured through middleware.
For LangGraph workflows, tool execution is handled by ToolNode. See ToolNode.
Tool return values
You can choose different return values for your tools:

- Return a string for human-readable results.
- Return an object for structured results the model should parse.
- Return a `Command` with an optional message when you need to write to state.
Return a string
Return a string when the tool should provide plain text for the model to read and use in its next response.

- The return value is converted to a `ToolMessage`.
- The model sees that text and decides what to do next.
- No agent state fields are changed unless the model or another tool does so later.
Return an object
Return an object (for example, a `dict`) when your tool produces structured data that the model should inspect.
- The object is serialized and sent back as tool output.
- The model can read specific fields and reason over them.
- Like string returns, this does not directly update graph state.
Return a Command
Return a `Command` when the tool needs to update graph state (for example, setting user preferences or app state).
You can return a Command with or without including a ToolMessage.
If the model needs to see that the tool succeeded (for example, to confirm a preference change), include a ToolMessage in the update, using runtime.tool_call_id for the tool_call_id parameter.
- The command updates state using `update`.
- Updated state is available to subsequent steps in the same run.
- Use reducers for fields that may be updated by parallel tool calls.
Error handling
Handle tool errors using LangChain agent middleware to retry failed tool calls or return custom error messages:
State injection

Tools can access the current graph state through `ToolRuntime`:
Prebuilt tools
LangChain provides a large collection of prebuilt tools and toolkits for common tasks like web search, code interpretation, database access, and more. These ready-to-use tools can be directly integrated into your agents without writing custom code. See the tools and toolkits integration page for a complete list of available tools organized by category.
Server-side tool use

Some chat models feature built-in tools that are executed server-side by the model provider. These include capabilities like web search and code interpreters that don’t require you to define or host the tool logic. Refer to the individual chat model integration pages and the tool calling documentation for details on enabling and using these built-in tools.

