First, let's install langgraph:
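```bash
pip install -U langgraph
```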
The state of a graph can be defined as a TypedDict, Pydantic model, or dataclass. Below we will use TypedDict. See this section for detail on using Pydantic.
By default, graphs will have the same input and output schema, and the state determines that schema. See this section for how to define distinct input and output schemas.
Let’s consider a simple example using messages. This represents a versatile formulation of state for many LLM applications. See our concepts page for more detail.
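For illustration, here is a minimal sketch of such a graph (field names like extra_field and the node logic are illustrative):

```python
from langchain_core.messages import AIMessage, AnyMessage, HumanMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START


class State(TypedDict):
    messages: list[AnyMessage]
    extra_field: int


def node(state: State) -> dict:
    # Return a state update; by default this overwrites the "messages" key.
    new_message = AIMessage("Hello!")
    return {"messages": state["messages"] + [new_message], "extra_field": 10}


builder = StateGraph(State)
builder.add_node(node)
builder.add_edge(START, "node")
graph = builder.compile()

result = graph.invoke({"messages": [HumanMessage("Hi")], "extra_field": 0})
```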
When using TypedDict state schemas, we can define reducers by annotating the corresponding field of the state with a reducer function.
In the earlier example, our node updated the "messages" key in the state by appending a message to it. Below, we add a reducer to this key, such that updates are automatically appended:
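A sketch using list concatenation (operator.add) as the reducer:

```python
from operator import add
from typing_extensions import Annotated, TypedDict

from langchain_core.messages import AIMessage, AnyMessage
from langgraph.graph import StateGraph, START


class State(TypedDict):
    # Annotating the key with a reducer (list concatenation via operator.add)
    # means updates are appended rather than overwriting the existing value.
    messages: Annotated[list[AnyMessage], add]
    extra_field: int


def node(state: State) -> dict:
    # With the reducer in place, we return only the new message.
    return {"messages": [AIMessage("Hello!")], "extra_field": 10}


builder = StateGraph(State)
builder.add_node(node)
builder.add_edge(START, "node")
graph = builder.compile()
```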
For more complex updates (e.g., appending new messages while also updating existing messages by ID), LangGraph includes a built-in reducer add_messages that handles these considerations:
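```python
from typing_extensions import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends new messages and updates existing messages
    # in place when an incoming message carries a matching ID.
    messages: Annotated[list[AnyMessage], add_messages]
    extra_field: int
```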
Since having a list of messages in state is so common, LangGraph also provides a prebuilt MessagesState for convenience, so that we can have:
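```python
from langgraph.graph import MessagesState


class State(MessagesState):
    # MessagesState predefines a "messages" key with the add_messages reducer;
    # we only declare our additional fields.
    extra_field: int
```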
StateGraph operates with a single schema, and all nodes are expected to communicate using that schema. However, it's also possible to define distinct input and output schemas for a graph.
When distinct schemas are specified, an internal schema will still be used for communication between nodes. The input schema ensures that the provided input matches the expected structure, while the output schema filters the internal data to return only the relevant information according to the defined output schema.
Below, we'll see how to define distinct input and output schemas.
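A sketch, assuming a recent langgraph release (the keyword arguments are input_schema and output_schema; older releases used input and output):

```python
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END


class InputState(TypedDict):
    question: str


class OutputState(TypedDict):
    answer: str


class OverallState(InputState, OutputState):
    pass


def answer_node(state: InputState) -> dict:
    # Writes to the internal (overall) state
    return {"answer": "bye", "question": state["question"]}


builder = StateGraph(OverallState, input_schema=InputState, output_schema=OutputState)
builder.add_node(answer_node)
builder.add_edge(START, "answer_node")
builder.add_edge("answer_node", END)
graph = builder.compile()

print(graph.invoke({"question": "hi"}))  # {'answer': 'bye'} -- filtered by OutputState
```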
A StateGraph accepts a state_schema argument on initialization that specifies the "shape" of the state that the nodes in the graph can access and update.
In our examples, we typically use a Python-native TypedDict or dataclass for state_schema, but state_schema can be any type.
Here, we'll see how a Pydantic BaseModel can be used for state_schema to add run-time validation on inputs.
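A minimal sketch (the field and values are illustrative):

```python
from pydantic import BaseModel

from langgraph.graph import StateGraph, START, END


class OverallState(BaseModel):
    a: str


def node(state: OverallState) -> dict:
    return {"a": "goodbye"}


builder = StateGraph(OverallState)
builder.add_node(node)
builder.add_edge(START, "node")
builder.add_edge("node", END)
graph = builder.compile()

graph.invoke({"a": "hello"})  # OK: "a" is a string

try:
    graph.invoke({"a": 123})  # Invalid: "a" must be a string
except Exception as e:
    print("Validation failed:", e)
```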
If you do not need run-time validation, you can use a TypedDict or dataclass instead.
Serialization Behavior
Runtime Type Coercion
Working with Message Models
When using message objects in a Pydantic state schema, use AnyMessage (rather than BaseMessage) for proper serialization/deserialization when using message objects over the wire.
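For example, a sketch of a Pydantic state schema using AnyMessage:

```python
from pydantic import BaseModel

from langchain_core.messages import AnyMessage


class ChatState(BaseModel):
    # AnyMessage is a union over the concrete message types, so subtype
    # information survives serialization round-trips.
    messages: list[AnyMessage]
```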
Extended example: specifying LLM at runtime
Extended example: specifying model and system message at runtime
To add retry policies, pass the retry_policy parameter to add_node. The retry_policy parameter takes in a RetryPolicy named tuple object. Below we instantiate a RetryPolicy object with the default parameters and associate it with a node:
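A sketch, with a hypothetical flaky API call as the node (the URL is a placeholder):

```python
import requests
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import RetryPolicy


class State(TypedDict):
    result: str


def query_api(state: State) -> dict:
    # A network call worth retrying (placeholder endpoint)
    response = requests.get("https://example.com/api")
    response.raise_for_status()
    return {"result": response.text}


builder = StateGraph(State)
# RetryPolicy() with no arguments uses the default settings.
# Note: some older langgraph releases name this parameter `retry`.
builder.add_node("query_api", query_api, retry_policy=RetryPolicy())
builder.add_edge(START, "query_api")
builder.add_edge("query_api", END)
graph = builder.compile()
```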
By default, the retry_on parameter uses the default_retry_on function, which retries on any exception except for the following:
ValueError
TypeError
ArithmeticError
ImportError
LookupError
NameError
SyntaxError
RuntimeError
ReferenceError
StopIteration
StopAsyncIteration
OSError
In addition, for exceptions raised by popular HTTP request libraries such as requests and httpx, it only retries on 5xx status codes.
Extended example: customizing retry policies
To add node caching, pass the cache_policy parameter to the add_node function. In the following example, a CachePolicy object is instantiated with a time-to-live of 120 seconds and the default key_func generator, and is then associated with a node:
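A sketch (the node's work is a stand-in for any expensive computation):

```python
import time
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import CachePolicy


class State(TypedDict):
    x: int
    result: int


def expensive_node(state: State) -> dict:
    time.sleep(2)  # stand-in for an expensive computation
    return {"result": state["x"] * 2}


builder = StateGraph(State)
# Cached results for this node expire after 120 seconds; the default
# key_func derives the cache key from the node's input state.
builder.add_node("expensive_node", expensive_node, cache_policy=CachePolicy(ttl=120))
builder.add_edge(START, "expensive_node")
builder.add_edge("expensive_node", END)
```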
Then, specify a cache backend via the cache argument when compiling the graph. The example below uses InMemoryCache to set up a graph with an in-memory cache, but SqliteCache is also available.
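Continuing the sketch above:

```python
from langgraph.cache.memory import InMemoryCache

graph = builder.compile(cache=InMemoryCache())

print(graph.invoke({"x": 5}))  # slow: executes the node
print(graph.invoke({"x": 5}))  # fast: served from the cache within the TTL
```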
To add a sequence of nodes, we use the .add_node and .add_edge methods of our graph; alternatively, the built-in shorthand .add_sequence is available (demonstrated further below):
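A sketch of a three-step sequence (the step logic is illustrative):

```python
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START


class State(TypedDict):
    value: str


def step_1(state: State) -> dict:
    return {"value": state["value"] + " -> step_1"}


def step_2(state: State) -> dict:
    return {"value": state["value"] + " -> step_2"}


def step_3(state: State) -> dict:
    return {"value": state["value"] + " -> step_3"}


builder = StateGraph(State)
builder.add_node(step_1)
builder.add_node(step_2)
builder.add_node(step_3)
builder.add_edge(START, "step_1")   # node names default to the function names
builder.add_edge("step_1", "step_2")
builder.add_edge("step_2", "step_3")
graph = builder.compile()
```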
Why split application steps into a sequence with LangGraph?
In the example above, we added each function as a node with .add_node, then connected the nodes in order with .add_edge. Note that .add_edge takes the names of nodes, which for functions defaults to node.__name__.
langgraph>=0.2.46 includes a built-in short-hand add_sequence for adding node sequences. You can compile the same graph as follows:
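Reusing the state and step functions from above:

```python
builder = StateGraph(State).add_sequence([step_1, step_2, step_3])
builder.add_edge(START, "step_1")
graph = builder.compile()
```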
Below, we fan out from node A to B and C and then fan in to D. With our state, we specify the reducer add operation. This will combine or accumulate values for the specific key in the state, rather than simply overwriting the existing value. For lists, this means concatenating the new list with the existing list. See the above section on state reducers for more detail on updating state with reducers.
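A sketch of the fan-out/fan-in graph (node names and values are illustrative):

```python
import operator
from typing_extensions import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    # operator.add concatenates lists, accumulating updates from parallel nodes.
    aggregate: Annotated[list, operator.add]


def a(state: State) -> dict:
    return {"aggregate": ["A"]}


def b(state: State) -> dict:
    return {"aggregate": ["B"]}


def c(state: State) -> dict:
    return {"aggregate": ["C"]}


def d(state: State) -> dict:
    return {"aggregate": ["D"]}


builder = StateGraph(State)
builder.add_node(a)
builder.add_node(b)
builder.add_node(c)
builder.add_node(d)
builder.add_edge(START, "a")
builder.add_edge("a", "b")  # fan out: a -> b and a -> c
builder.add_edge("a", "c")
builder.add_edge("b", "d")  # fan in: b and c -> d
builder.add_edge("c", "d")
builder.add_edge("d", END)
graph = builder.compile()

graph.invoke({"aggregate": []})  # aggregate == ["A", "B", "C", "D"]
```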
"b"
and "c"
are executed concurrently in the same superstep. Because they are in the same step, node "d"
executes after both "b"
and "c"
are finished.Importantly, updates from a parallel superstep may not be ordered consistently. If you need a consistent, predetermined ordering of updates from a parallel superstep, you should write the outputs to a separate field in the state together with a value with which to order them.Exception handling?
"b_2"
in the "b"
branch:
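A sketch, reusing State and the nodes a, b, c, and d from the previous example (note that defer requires a recent langgraph version):

```python
def b_2(state: State) -> dict:
    return {"aggregate": ["B_2"]}


builder = StateGraph(State)
builder.add_node(a)
builder.add_node(b)
builder.add_node(b_2)
builder.add_node(c)
# defer=True delays "d" until all pending tasks in the fan-out have finished.
builder.add_node(d, defer=True)
builder.add_edge(START, "a")
builder.add_edge("a", "b")
builder.add_edge("a", "c")
builder.add_edge("b", "b_2")
builder.add_edge("b_2", "d")
builder.add_edge("c", "d")
builder.add_edge("d", END)
graph = builder.compile()
```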
"b"
and "c"
are executed concurrently in the same superstep. We set defer=True
on node d
so it will not execute until all pending tasks are finished. In this case, this means that "d"
waits to execute until the entire "b"
branch is finished.
Conditional edges let the fan-out vary at runtime based on the graph state. Below, node "a" generates a state update that determines the following node.
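A sketch (the routing key "which" and the node logic are illustrative):

```python
import operator
from typing_extensions import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    aggregate: Annotated[list, operator.add]
    which: str  # set by node "a" to choose the next node


def a(state: State) -> dict:
    return {"aggregate": ["A"], "which": "c"}


def b(state: State) -> dict:
    return {"aggregate": ["B"]}


def c(state: State) -> dict:
    return {"aggregate": ["C"]}


def conditional_edge(state: State) -> str:
    # Return the name of the next node
    return state["which"]


builder = StateGraph(State)
builder.add_node(a)
builder.add_node(b)
builder.add_node(c)
builder.add_edge(START, "a")
builder.add_conditional_edges("a", conditional_edge)
builder.add_edge("b", END)
builder.add_edge("c", END)
graph = builder.compile()

graph.invoke({"aggregate": []})  # visits "a" then "c"
```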
"recursionLimit"
in the config. This will raise a GraphRecursionError
, which you can catch and handle:
"a"
is a tool-calling model, and node "b"
represents the tools.
In our route conditional edge, we specify that we should end after the "aggregate" list in the state passes a threshold length.
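A sketch of the loop (the nodes are stand-ins for a model and its tools, and the threshold of 7 is illustrative):

```python
import operator
from typing_extensions import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    aggregate: Annotated[list, operator.add]


def a(state: State) -> dict:
    # Stand-in for a tool-calling model
    return {"aggregate": ["A"]}


def b(state: State) -> dict:
    # Stand-in for the tools
    return {"aggregate": ["B"]}


def route(state: State) -> str:
    # End once the "aggregate" list passes a threshold length
    if len(state["aggregate"]) < 7:
        return "b"
    return END


builder = StateGraph(State)
builder.add_node(a)
builder.add_node(b)
builder.add_edge(START, "a")
builder.add_conditional_edges("a", route)
builder.add_edge("b", "a")
graph = builder.compile()
```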
Invoking the graph, we see that we alternate between nodes "a" and "b" before terminating once we reach the termination condition.
Instead of a conditional edge, we can also set a recursion limit: the graph will then raise a GraphRecursionError after a given number of supersteps. We can then catch and handle this exception:
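Continuing the loop example above:

```python
from langgraph.errors import GraphRecursionError

try:
    # Cap the number of supersteps via the config
    graph.invoke({"aggregate": []}, {"recursion_limit": 4})
except GraphRecursionError:
    print("Recursion limit reached.")
```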
Extended example: return state on hitting recursion limit
Instead of raising GraphRecursionError, we can introduce a new key to the state that keeps track of the number of steps remaining until the recursion limit is reached. We can then use this key to determine if we should end the run. LangGraph implements a special RemainingSteps annotation. Under the hood, it creates a ManagedValue channel: a state channel that will exist for the duration of our graph run and no longer.
Extended example: loops with branches
To convert a sync implementation of the graph to an async implementation, you will need to:
Update nodes to use async def instead of def.
Update the code inside to use await appropriately.
Invoke the graph with .ainvoke or .astream as desired.
Because many LangChain objects implement the Runnable Protocol, which has async variants of all the sync methods, it is typically fairly quick to upgrade a sync graph to an async graph.
See example below. To demonstrate async invocations of underlying LLMs, we will include a chat model:
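A sketch, assuming credentials for your model provider are configured (any LangChain chat model works; gpt-4o-mini is just an example):

```python
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, START, END, MessagesState

llm = init_chat_model("gpt-4o-mini", model_provider="openai")


async def call_model(state: MessagesState) -> dict:
    # Await the async variant of the model call
    response = await llm.ainvoke(state["messages"])
    return {"messages": [response]}


builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

# Then invoke with the async API, e.g. inside an async function:
# result = await graph.ainvoke({"messages": [{"role": "user", "content": "hi"}]})
```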
It can be useful to combine control flow and state updates in a single node. LangGraph provides a way to do so by returning a Command object from node functions:
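A sketch of such nodes (the random routing and the "foo" key are illustrative):

```python
import random
from typing_extensions import Literal, TypedDict

from langgraph.types import Command


class State(TypedDict):
    foo: str


def node_a(state: State) -> Command[Literal["node_b", "node_c"]]:
    # Decide where to go next...
    goto = random.choice(["node_b", "node_c"])
    # ...and update state, all in one return value
    return Command(update={"foo": "a"}, goto=goto)


def node_b(state: State) -> dict:
    return {"foo": state["foo"] + "b"}


def node_c(state: State) -> dict:
    return {"foo": state["foo"] + "c"}
```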
We can now create the StateGraph with the above nodes. Notice that the graph doesn't have conditional edges for routing! This is because control flow is defined with Command inside node_a.
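Continuing the sketch:

```python
from langgraph.graph import StateGraph, START

builder = StateGraph(State)
builder.add_edge(START, "node_a")
builder.add_node(node_a)
builder.add_node(node_b)
builder.add_node(node_c)
# No edges out of node_a are declared: routing happens via Command.
graph = builder.compile()

graph.invoke({"foo": ""})
```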
Note that we use Command as a return type annotation, e.g. Command[Literal["node_b", "node_c"]]. This is necessary for the graph rendering and tells LangGraph that node_a can navigate to node_b and node_c.
If you are using subgraphs, you might want to navigate from a node within a subgraph to a different node in the parent graph. To do so, specify graph=Command.PARENT in Command:
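A sketch, reusing State, Command, and Literal from above (the target node name is illustrative):

```python
def subgraph_node(state: State) -> Command[Literal["node_b", "node_c"]]:
    return Command(
        update={"foo": "a"},
        goto="node_b",          # name of a node in the PARENT graph
        graph=Command.PARENT,   # route in the parent graph, not this subgraph
    )
```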
Let's demonstrate this by changing node_a in the above example into a single-node graph that we'll add as a subgraph to our parent graph.
When you use graph=Command.PARENT to send updates from a subgraph node to a parent graph node for a key that's shared by both parent and subgraph state schemas, you must define a reducer for the key you're updating in the parent graph state. See the example below.
A common use case is updating graph state from inside a tool. To do so, you can return Command(update={"my_custom_key": "foo", "messages": [...]}) from the tool:
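A sketch of such a tool (the lookup logic is a placeholder, and the "user_info" key assumes a corresponding field in your state schema):

```python
from typing_extensions import Annotated

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command


@tool
def lookup_user_info(
    user_id: str,
    tool_call_id: Annotated[str, InjectedToolCallId],
) -> Command:
    """Look up user info."""
    user_info = {"user_id": user_id, "name": "Ada"}  # placeholder lookup result
    return Command(
        update={
            # Update a custom state key...
            "user_info": user_info,
            # ...and append the required ToolMessage to the history.
            "messages": [
                ToolMessage("Successfully looked up user info", tool_call_id=tool_call_id)
            ],
        }
    )
```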
You MUST include messages (or any state key used for the message history) in Command.update when returning Command from a tool, and the list of messages in messages MUST contain a ToolMessage. This is necessary for the resulting message history to be valid (LLM providers require AI messages with tool calls to be followed by the tool result messages).
If you are using tools that update state via Command, we recommend using the prebuilt ToolNode, which automatically handles tools returning Command objects and propagates them to the graph state. If you're writing a custom node that calls tools, you would need to manually propagate Command objects returned by the tools as the update from the node.
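For example, reusing the tool from the sketch above:

```python
from langgraph.prebuilt import ToolNode

# ToolNode recognizes tools that return Command objects and propagates
# their updates to the graph state.
tool_node = ToolNode([lookup_user_info])
```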
We can also render the graph as a .png. Here we could use three options:
Mermaid.Ink API (does not require additional packages)
Mermaid + Pyppeteer (requires pip install pyppeteer)
Graphviz (requires pip install graphviz)
By default, draw_mermaid_png() uses Mermaid.Ink's API to generate the diagram.
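For example, in a notebook, for any compiled graph:

```python
from IPython.display import Image, display

# Renders the compiled graph via Mermaid.Ink (requires internet access)
display(Image(graph.get_graph().draw_mermaid_png()))
```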