`createReactAgent()` provides a production-ready ReAct (Reasoning + Acting) agent implementation based on the paper *ReAct: Synergizing Reasoning and Acting in Language Models*.
The agent runs in a loop of `thought` -> `action` -> `observation` steps, where the model writes out its reasoning, picks a tool, sees the tool's result, and then repeats. ReAct reduces hallucinations and makes the decision process auditable: the agent can form hypotheses (`thought`), test them with tools (`action`), and update its plan based on feedback (`observation`). A ReAct loop runs until a stop condition is met, i.e. when the model emits a final answer or a max-iterations limit is reached.

`create_react_agent()` builds a graph-based agent runtime. A graph consists of nodes (steps) and edges (connections) that define how your agent processes information. The agent moves through this graph, executing nodes such as the model node (which calls the model), the tools node (which executes tools), or pre/post model hook nodes. Learn more about the graph API.

Model strings follow the `provider:model` format (e.g. `"openai:gpt-5"`) and support automatic inference (e.g. `"gpt-5"` will be inferred as `"openai:gpt-5"`).
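The `thought` -> `action` -> `observation` loop can be sketched in plain Python. The model and tool below are hypothetical stand-ins for illustration, not the real API:

```python
# Minimal ReAct loop sketch: a scripted "model" alternates between
# requesting a tool call and emitting a final answer.
def fake_model(history):
    # Hypothetical stand-in for an LLM: it asks for a tool call until
    # it has seen at least one observation, then answers.
    if not any(step.startswith("observation:") for step in history):
        return {"thought": "I should look this up.",
                "action": ("search", "ReAct paper")}
    return {"thought": "I have enough information.",
            "final_answer": "ReAct interleaves reasoning and acting."}

def search(query):
    # Hypothetical tool.
    return f"Top result for {query!r}"

def react_loop(question, max_iterations=5):
    history = [f"question: {question}"]
    for _ in range(max_iterations):          # stop condition 2: iteration cap
        step = fake_model(history)
        history.append(f"thought: {step['thought']}")
        if "final_answer" in step:           # stop condition 1: final answer
            return step["final_answer"], history
        tool_name, tool_arg = step["action"]
        observation = {"search": search}[tool_name](tool_arg)
        history.append(f"observation: {observation}")
    return None, history

answer, trace = react_loop("What is ReAct?")
```

The trace accumulates every thought and observation, which is what makes the decision process auditable.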
For more control over the model configuration, you can initialize a model instance directly and pass it to the agent in place of a model string.
A dynamic model function receives two arguments:

- `runtime`: the execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent's execution (e.g. user IDs, session details, or application-specific configuration).
- `state`: the data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g. user preferences or tool usage stats).

It must return a `BaseChatModel` with the tools bound to it using `.bind_tools(tools)`, where `tools` is a subset of the `tools` parameter.
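The shape of such a function can be sketched with stand-ins (the real `BaseChatModel` and `.bind_tools` come from LangChain; the class, model names, and selection logic below are made up for illustration):

```python
# Sketch: choose a model based on state, then bind a subset of tools.
class FakeChatModel:
    # Hypothetical stand-in for a BaseChatModel subclass.
    def __init__(self, name):
        self.name = name
        self.tools = []

    def bind_tools(self, tools):
        bound = FakeChatModel(self.name)
        bound.tools = list(tools)
        return bound

def search(query): ...
def calculator(expression): ...

TOOLS = [search, calculator]  # the agent's full `tools` parameter

def select_model(state, runtime):
    # Route long conversations to a larger-context model
    # (threshold and names are illustrative).
    name = "big-context-model" if len(state["messages"]) > 20 else "fast-model"
    model = FakeChatModel(name)
    # Bind only a subset of TOOLS appropriate for this model.
    subset = TOOLS if name == "big-context-model" else [search]
    return model.bind_tools(subset)

model = select_model({"messages": ["hi"]}, runtime=None)
```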
You can pass tools as a simple list of functions (e.g. created with the `tool` function); the agent will wrap them in a `ToolNode` under the hood. This is the simplest way to set up a tool-calling agent.
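What the tools node does under the hood can be sketched in plain Python (the real `ToolNode` lives in LangGraph; the tools and dispatch below are stand-ins): it looks up each requested tool call by name and returns the results as observations.

```python
# Sketch of a tools node: dispatch the model's tool calls by name.
def get_weather(city: str) -> str:
    # Hypothetical tool.
    return f"It is sunny in {city}."

def get_time(city: str) -> str:
    # Hypothetical tool.
    return f"It is noon in {city}."

TOOLS_BY_NAME = {fn.__name__: fn for fn in [get_weather, get_time]}

def tools_node(tool_calls):
    # Each tool call is {"name": ..., "args": {...}}, mirroring the
    # shape of tool calls emitted by chat models.
    results = []
    for call in tool_calls:
        fn = TOOLS_BY_NAME[call["name"]]
        results.append({"tool": call["name"], "content": fn(**call["args"])})
    return results

observations = tools_node([{"name": "get_weather", "args": {"city": "Paris"}}])
```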
Alternatively, you can create a `ToolNode` directly and pass it to the agent. This allows you to customize the tool node's behavior, such as handling tool errors. For more customization options, see the `ToolNode` reference.

For example, an agent might first call `search_products("wireless headphones")` and then `check_inventory("WH-1000XM5")` based on the first call's results.
The `prompt` parameter can be provided in several forms. If no `prompt` is provided, the agent will infer its task from the messages directly.
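Two common forms are a static string and a callable that builds the message list from state. The dispatch below is an illustrative sketch, not the library's implementation:

```python
# Sketch: a prompt may be a static string or a callable over state.
def resolve_prompt(prompt, state):
    # Static string: used verbatim as the system message.
    if isinstance(prompt, str):
        return [{"role": "system", "content": prompt}, *state["messages"]]
    # Callable: computes the full message list from state.
    if callable(prompt):
        return prompt(state)
    # No prompt: the agent infers its task from the messages directly.
    return state["messages"]

state = {"messages": [{"role": "user", "content": "hi"}]}
static = resolve_prompt("You are a helpful assistant.", state)
dynamic = resolve_prompt(
    lambda s: [{"role": "system", "content": f"{len(s['messages'])} msgs"},
               *s["messages"]],
    state,
)
```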
To return structured output, provide the `responseFormat` parameter.
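The idea can be sketched with a hypothetical schema and hand-parsed model output; in practice you pass a schema and the library handles parsing and validation:

```python
import json
from dataclasses import dataclass

# Hypothetical schema for the agent's final answer.
@dataclass
class WeatherReport:
    city: str
    temperature_c: float

def parse_structured_output(raw: str) -> WeatherReport:
    # Sketch: coerce the model's JSON output into the schema.
    data = json.loads(raw)
    return WeatherReport(city=data["city"],
                         temperature_c=float(data["temperature_c"]))

report = parse_structured_output('{"city": "Paris", "temperature_c": 21}')
```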
`messages` must be provided and will be used as an input to the `agent` node (i.e., the node that calls the LLM). The rest of the keys will be added to the graph state.

To update `messages` in the pre-model hook, you should OVERWRITE the `messages` key by doing the following:
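The overwrite-versus-append distinction can be sketched with plain dictionaries (the hook below and its trimming policy are illustrative stand-ins, not the real API):

```python
# Sketch: a pre-model hook that trims history and OVERWRITES the
# "messages" key rather than appending to it.
def pre_model_hook(state):
    messages = state["messages"]
    # Keep the first (system) message plus the last three turns;
    # the trimming policy here is made up for illustration.
    if len(messages) > 4:
        messages = messages[:1] + messages[-3:]
    # Returning the "messages" key replaces the stored history outright.
    return {"messages": messages}

state = {"messages": [f"msg{i}" for i in range(6)]}
update = pre_model_hook(state)
```

If the hook merely appended, the stale middle of the history would keep growing on every turn; overwriting keeps the stored history exactly what the model should see next.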
Calling `.invoke` returns a final response. If the agent executes multiple steps, this may take a while. To show intermediate progress, we can stream back messages as they occur.
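The difference can be sketched with a generator standing in for the streaming interface (the canned trace is illustrative):

```python
# Sketch: invoke waits for the final answer; stream yields each step.
STEPS = ["thought: search for it", "observation: found it",
         "final: here is the answer"]  # canned agent trace for illustration

def stream():
    # Yield intermediate messages as they occur.
    for step in STEPS:
        yield step

def invoke():
    # Drain the stream and return only the final response.
    return list(stream())[-1]

final = invoke()
progress = list(stream())
```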
This example combines several features: a `store` to persist state across sessions, a `postModelHook` to track tool usage, and a custom `stateSchema` to store additional state.
Use the same `thread_id` to invoke a follow-up session where the agent maintains access to work performed in the previous session:
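How a checkpointer keyed by `thread_id` gives a follow-up invocation access to earlier work can be sketched with a plain dict standing in for the real checkpointer:

```python
# Sketch: persistence keyed by thread_id, stubbed with a dict.
CHECKPOINTS = {}  # thread_id -> saved message history

def invoke(message, thread_id):
    # Load any history checkpointed for this thread.
    history = CHECKPOINTS.get(thread_id, [])
    history = history + [message, f"reply to: {message}"]
    CHECKPOINTS[thread_id] = history  # checkpoint after the turn
    return history

first = invoke("summarise the report", thread_id="thread-1")
# Same thread_id: the agent sees the earlier turns.
followup = invoke("now translate it", thread_id="thread-1")
# Different thread_id: a fresh session with no prior context.
fresh = invoke("hello", thread_id="thread-2")
```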