# createAgent()

`createAgent()` provides a production-ready ReAct (Reasoning + Acting) agent implementation based on the paper ReAct: Synergizing Reasoning and Acting in Language Models.

A ReAct agent cycles through thought -> action -> observation steps: the model writes out its reasoning (thought), picks a tool (action), sees the tool's result (observation), and then repeats. ReAct reduces hallucinations and makes the decision process auditable, because the agent can form hypotheses (thought), test them with tools (action), and update its plan based on feedback (observation). The loop runs until a stop condition is met, i.e. until the model emits a final answer or a max-iterations limit is reached.

`createAgent()` builds a graph-based agent runtime using LangGraph. A graph consists of nodes (steps) and edges (connections) that define how your agent processes information. The agent moves through this graph, executing nodes such as the model node (which calls the model), the tools node (which executes tools), or pre/post model hook nodes. Learn more about the graph API.
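As a rough picture of that runtime, the agent steps through named nodes connected by edges until it reaches an end state. The sketch below is purely illustrative (it is not LangGraph's API): a `model` node that decides whether to call a tool, and a `tools` node that routes back to the model.

```typescript
type State = { steps: string[]; pendingToolCall: boolean };

// Each node transforms the state and names the next node to visit (an "edge").
const nodes: Record<string, (s: State) => { state: State; next: string }> = {
  model: (s) => {
    const wantsTool = !s.steps.includes("tools"); // call the tool once, then finish
    return {
      state: { ...s, steps: [...s.steps, "model"], pendingToolCall: wantsTool },
      next: wantsTool ? "tools" : "end",
    };
  },
  tools: (s) => ({
    state: { ...s, steps: [...s.steps, "tools"], pendingToolCall: false },
    next: "model",
  }),
};

// Walk the graph from the model node until the stop condition ("end").
let state: State = { steps: [], pendingToolCall: false };
let current = "model";
while (current !== "end") {
  const out = nodes[current](state);
  state = out.state;
  current = out.next;
}

console.log(state.steps.join(" -> ")); // model -> tools -> model
```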
The model can be specified as a `provider:model` string (e.g. `"openai:gpt-5"`). If you want more control over the model configuration, you can initialize a model instance directly using the provider package.
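For example, with the OpenAI provider package a direct initialization might look like this (a sketch; the available options depend on the provider package you use):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createAgent } from "langchain";

// Initialize the model directly instead of passing a "provider:model" string.
const model = new ChatOpenAI({
  model: "gpt-5",
  temperature: 0, // provider-specific configuration goes here
});

const agent = createAgent({ model, tools: [] });
```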
`state`: The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g. user preferences or tool usage stats).

The model must be a `BaseChatModel` instance with the tools bound to it using `.bindTools(tools)`, where `tools` is a subset of the `tools` parameter.
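As an illustration, agent state can be thought of as a plain object keyed by `messages` plus any custom fields. The shape below is hypothetical (the field names besides `messages` are invented for this sketch, and these are not the library's actual types):

```typescript
// Illustrative shape of agent state, not the library's real type definitions.
type Message = { role: "user" | "assistant" | "tool"; content: string };

interface AgentState {
  messages: Message[];                      // conversation history
  userPreferences?: Record<string, string>; // custom tracked field (hypothetical)
  toolCallCount?: number;                   // e.g. tool usage stats (hypothetical)
}

const state: AgentState = {
  messages: [{ role: "user", content: "Find me wireless headphones" }],
  userPreferences: { budget: "300 USD" },
  toolCallCount: 0,
};

console.log(state.messages.length); // 1
```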
The `tools` parameter accepts a list of tools (each created with the `tool` function, or an object that represents a built-in provider tool). `createAgent()` executes tool calls with a `ToolNode` under the hood. Passing a plain list of tools is the simplest way to set up a tool-calling agent.
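A minimal sketch of that setup, assuming LangChain's `tool` helper and `zod` for the input schema (the tool name and behavior here are illustrative):

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { createAgent } from "langchain";

// A hypothetical product-search tool.
const searchProducts = tool(
  async ({ query }) => `Results for: ${query}`,
  {
    name: "search_products",
    description: "Search the product catalog.",
    schema: z.object({ query: z.string() }),
  }
);

const agent = createAgent({
  model: "openai:gpt-5",
  tools: [searchProducts],
});
```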
Alternatively, you can create a `ToolNode` directly and pass it to the agent. This lets you customize the tool node's behavior, such as how tool errors are handled.
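For instance, a sketch along these lines (we are assuming the `handleToolErrors` constructor option from LangGraph's prebuilt `ToolNode`; verify the exact option names against the ToolNode reference):

```typescript
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { createAgent } from "langchain";

// Assumes `searchProducts` is a tool defined as in the previous example.
const toolNode = new ToolNode([searchProducts], {
  // Return tool exceptions to the model as messages instead of throwing,
  // so the agent can see the error and recover.
  handleToolErrors: true,
});

const agent = createAgent({ model: "openai:gpt-5", tools: toolNode });
```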
For more information on `ToolNode`, see the ToolNode reference. As a concrete example, an agent asked about wireless headphones might call `search_products("wireless headphones")`, inspect the results, and then call `check_inventory("WH-1000XM5")` before composing its final answer.
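The thought -> action -> observation cycle behind that example can be sketched in a self-contained way with stubbed tools (the scripted "model" and the tool outputs below are illustrative stand-ins, not the library):

```typescript
// Stub tools standing in for real integrations.
const tools: Record<string, (arg: string) => string> = {
  search_products: (q) => `Found: WH-1000XM5 (matches "${q}")`,
  check_inventory: (sku) => `${sku}: 12 units in stock`,
};

// A scripted "model": each step is a thought plus either an action or a final answer.
type Step = { thought: string; action?: { tool: string; input: string }; final?: string };
const script: Step[] = [
  { thought: "I should search the catalog first.", action: { tool: "search_products", input: "wireless headphones" } },
  { thought: "A match was found; check stock.", action: { tool: "check_inventory", input: "WH-1000XM5" } },
  { thought: "In stock, so I can answer.", final: "The WH-1000XM5 is available (12 units)." },
];

const observations: string[] = [];
let answer = "";
for (const step of script) {
  if (step.final) { answer = step.final; break; }      // stop condition: final answer
  const obs = tools[step.action!.tool](step.action!.input); // act, then observe
  observations.push(obs);
}

console.log(observations.length, answer); // 2 The WH-1000XM5 is available (12 units).
```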
The `prompt` parameter can be provided in several forms.
If no `prompt` is provided, the agent will infer its task from the messages directly.
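For example, the prompt can be a static string; the other forms follow the same pattern. A sketch, assuming the `prompt` parameter as described above:

```typescript
import { createAgent } from "langchain";

// Static system prompt guiding the agent's behavior.
const agent = createAgent({
  model: "openai:gpt-5",
  tools: [],
  prompt: "You are a helpful product-search assistant.",
});
```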
You can configure structured output with the `responseFormat` parameter.
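A sketch of structured output with a `zod` schema (we are assuming the result exposes the parsed object, e.g. as `structuredResponse`; check the structured-output reference for the exact field name):

```typescript
import { z } from "zod";
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-5",
  tools: [],
  responseFormat: z.object({
    product: z.string(),
    inStock: z.boolean(),
  }),
});

// const result = await agent.invoke({ messages: [{ role: "user", content: "..." }] });
// result.structuredResponse would then match the schema above.
```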
When invoking the agent, `messages` must be provided and will be used as the input to the `agent` node (i.e., the node that calls the LLM). The rest of the keys will be added to the graph state.

If you want to modify `messages` in the pre-model hook, you should OVERWRITE the `messages` key rather than simply append to it.
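The reason is that `messages` is typically merged by an appending reducer, so returning a partial list adds to the history instead of replacing it. Here is a self-contained illustration of append vs. overwrite semantics; the reducer and its `REMOVE_ALL` marker are stand-ins for this sketch, not the library's implementation:

```typescript
type Msg = { id: string; content: string };

// Stand-in for a messages reducer: appends by default, but a special
// "remove all" marker (hypothetical) clears history first, i.e. overwrites.
const REMOVE_ALL = "__remove_all__";
function mergeMessages(current: Msg[], update: Msg[]): Msg[] {
  if (update[0]?.id === REMOVE_ALL) return update.slice(1); // overwrite
  return [...current, ...update];                           // append (default)
}

const history: Msg[] = [
  { id: "1", content: "hi" },
  { id: "2", content: "hello!" },
];

// Appending: the history grows.
const appended = mergeMessages(history, [{ id: "3", content: "trimmed?" }]);

// Overwriting: prefix the update with the remove-all marker.
const overwritten = mergeMessages(history, [
  { id: REMOVE_ALL, content: "" },
  { id: "2", content: "hello!" },
]);

console.log(appended.length, overwritten.length); // 3 1
```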
We call `.invoke` to get a final response. If the agent executes multiple steps, this may take a while. To show intermediate progress, we can stream back messages as they occur.
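A streaming sketch, assuming the agent exposes LangGraph's `.stream()` method (the `streamMode` values are documented in the streaming reference):

```typescript
import { createAgent } from "langchain";

const agent = createAgent({ model: "openai:gpt-5", tools: [] });

// Stream the state after each step instead of waiting for .invoke to finish.
const stream = await agent.stream(
  { messages: [{ role: "user", content: "Find me wireless headphones" }] },
  { streamMode: "values" } // emit the full state after each node executes
);

for await (const chunk of stream) {
  const last = chunk.messages[chunk.messages.length - 1];
  console.log(last?.content); // surface progress as each message arrives
}
```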