Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the current LangGraph Python or LangGraph JavaScript docs.
To review, edit, and approve tool calls in an agent or workflow, use interrupts to pause a graph and wait for human input. Interrupts use LangGraph’s persistence layer, which saves the graph state, to indefinitely pause graph execution until you resume.
Dynamic interrupts (also known as dynamic breakpoints) are triggered based on the current state of the graph. You can set a dynamic interrupt by calling the interrupt function at the appropriate place in a node. The graph pauses, allowing for human intervention, and then resumes with the human's input. Dynamic interrupts are useful for tasks like approvals, edits, or gathering additional context.
As of v1.0, interrupt is the recommended way to pause a graph. NodeInterrupt is deprecated and will be removed in v2.0.
```python
from langgraph.types import interrupt, Command

def human_node(state: State):
    value = interrupt(  # (1)!
        {
            "text_to_revise": state["some_text"]  # (2)!
        }
    )
    return {
        "some_text": value  # (3)!
    }

graph = graph_builder.compile(checkpointer=checkpointer)  # (4)!

# Run the graph until the interrupt is hit.
config = {"configurable": {"thread_id": "some_id"}}
result = graph.invoke({"some_text": "original text"}, config=config)  # (5)!

print(result['__interrupt__'])  # (6)!
# > [
# >     Interrupt(
# >         value={'text_to_revise': 'original text'},
# >         resumable=True,
# >         ns=['human_node:6ce9e64f-edef-fe5d-f7dc-511fa9526960']
# >     )
# > ]

print(graph.invoke(Command(resume="Edited text"), config=config))  # (7)!
# > {'some_text': 'Edited text'}
```
interrupt(...) pauses execution at human_node, surfacing the given payload to a human.
Any JSON serializable value can be passed to the interrupt function. Here, a dict containing the text to revise.
Once resumed, the return value of interrupt(...) is the human-provided input, which is used to update the state.
A checkpointer is required to persist graph state. In production, this should be durable (e.g., backed by a database); see the sketch after these notes.
The graph is invoked with some initial state.
When the graph hits the interrupt, it returns an Interrupt object with the payload and metadata.
The graph is resumed with a Command(resume=...), injecting the human’s input and continuing execution.
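As a sketch of the durability note in (4): in production you might swap InMemorySaver for a database-backed checkpointer. This assumes the langgraph-checkpoint-postgres package and a reachable Postgres instance; the connection string below is hypothetical.

```python
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:password@localhost:5432/db"  # hypothetical connection string

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first use
    graph = graph_builder.compile(checkpointer=checkpointer)
```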
Extended example: using `interrupt`
```python
from typing import TypedDict
import uuid

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.constants import START
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command

class State(TypedDict):
    some_text: str

def human_node(state: State):
    value = interrupt(  # (1)!
        {
            "text_to_revise": state["some_text"]  # (2)!
        }
    )
    return {
        "some_text": value  # (3)!
    }

# Build the graph
graph_builder = StateGraph(State)
graph_builder.add_node("human_node", human_node)
graph_builder.add_edge(START, "human_node")

checkpointer = InMemorySaver()  # (4)!

graph = graph_builder.compile(checkpointer=checkpointer)

# Pass a thread ID to the graph to run it.
config = {"configurable": {"thread_id": uuid.uuid4()}}

# Run the graph until the interrupt is hit.
result = graph.invoke({"some_text": "original text"}, config=config)  # (5)!

print(result['__interrupt__'])  # (6)!
# > [
# >     Interrupt(
# >         value={'text_to_revise': 'original text'},
# >         resumable=True,
# >         ns=['human_node:6ce9e64f-edef-fe5d-f7dc-511fa9526960']
# >     )
# > ]

print(graph.invoke(Command(resume="Edited text"), config=config))  # (7)!
# > {'some_text': 'Edited text'}
```
interrupt(...) pauses execution at human_node, surfacing the given payload to a human.
Any JSON serializable value can be passed to the interrupt function. Here, a dict containing the text to revise.
Once resumed, the return value of interrupt(...) is the human-provided input, which is used to update the state.
A checkpointer is required to persist graph state. In production, this should be durable (e.g., backed by a database).
The graph is invoked with some initial state.
When the graph hits the interrupt, it returns an Interrupt object with the payload and metadata.
The graph is resumed with a Command(resume=...), injecting the human’s input and continuing execution.
Interrupts resemble Python’s input() function in terms of developer experience, but they do not automatically resume execution from the interruption point. Instead, they rerun the entire node where the interrupt was used. For this reason, interrupts are typically best placed at the start of a node or in a dedicated node.
Resuming from an interrupt is different from Python’s input() function, where execution resumes from the exact point where the input() function was called.
When the interrupt function is used within a graph, execution pauses at that point and awaits user input. To resume execution, use the Command primitive, which can be supplied via the invoke or stream methods. The graph resumes execution from the beginning of the node where interrupt(...) was initially called. This time, the interrupt function will return the value provided in Command(resume=value) rather than pausing again. All code from the beginning of the node to the interrupt will be re-executed.
```python
# Resume graph execution by providing the user's input.
graph.invoke(Command(resume={"age": "25"}), config=thread_config)
```
When nodes with interrupt conditions are run in parallel, it’s possible to have multiple interrupts in the task queue.
For example, the following graph runs two nodes in parallel, each of which requires human input. Once the graph has been interrupted and is paused, you can resume all of the interrupts at once by passing Command(resume=...) a dictionary that maps interrupt IDs to resume values.
```python
from typing import TypedDict
import uuid

from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.constants import START
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command

class State(TypedDict):
    text_1: str
    text_2: str

def human_node_1(state: State):
    value = interrupt({"text_to_revise": state["text_1"]})
    return {"text_1": value}

def human_node_2(state: State):
    value = interrupt({"text_to_revise": state["text_2"]})
    return {"text_2": value}

graph_builder = StateGraph(State)
graph_builder.add_node("human_node_1", human_node_1)
graph_builder.add_node("human_node_2", human_node_2)

# Add both nodes in parallel from START
graph_builder.add_edge(START, "human_node_1")
graph_builder.add_edge(START, "human_node_2")

checkpointer = InMemorySaver()
graph = graph_builder.compile(checkpointer=checkpointer)

thread_id = str(uuid.uuid4())
config: RunnableConfig = {"configurable": {"thread_id": thread_id}}

result = graph.invoke(
    {"text_1": "original text 1", "text_2": "original text 2"}, config=config
)

# Resume with a mapping of interrupt IDs to resume values
resume_map = {
    i.id: f"edited text for {i.value['text_to_revise']}"
    for i in graph.get_state(config).interrupts
}

print(graph.invoke(Command(resume=resume_map), config=config))
# > {'text_1': 'edited text for original text 1', 'text_2': 'edited text for original text 2'}
```
There are four typical design patterns that you can implement using interrupt and Command:
Approve or reject: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involves routing the graph based on the human’s input.
Edit graph state: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves updating the state with the human’s input.
Review tool calls: Pause the graph to review and edit tool calls requested by the LLM before tool execution.
Validate human input: Pause the graph to validate human input before proceeding with the next step.
The sections below show how each of these patterns can be implemented using interrupt and Command.
Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.
```python
from typing import Literal
from langgraph.types import interrupt, Command

def human_approval(state: State) -> Command[Literal["some_node", "another_node"]]:
    is_approved = interrupt(
        {
            "question": "Is this correct?",
            # Surface the output that should be
            # reviewed and approved by the human.
            "llm_output": state["llm_output"]
        }
    )
    if is_approved:
        return Command(goto="some_node")
    else:
        return Command(goto="another_node")

# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_approval", human_approval)
graph = graph_builder.compile(checkpointer=checkpointer)

# After running the graph and hitting the interrupt, the graph will pause.
# Resume it with either an approval or rejection.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(Command(resume=True), config=thread_config)
```
Extended example: approve or reject with interrupt
```python
from typing import Literal, TypedDict
import uuid

from langgraph.constants import START, END
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

# Define the shared graph state
class State(TypedDict):
    llm_output: str
    decision: str

# Simulate an LLM output node
def generate_llm_output(state: State) -> State:
    return {"llm_output": "This is the generated output."}

# Human approval node
def human_approval(state: State) -> Command[Literal["approved_path", "rejected_path"]]:
    decision = interrupt({
        "question": "Do you approve the following output?",
        "llm_output": state["llm_output"]
    })

    if decision == "approve":
        return Command(goto="approved_path", update={"decision": "approved"})
    else:
        return Command(goto="rejected_path", update={"decision": "rejected"})

# Next steps after approval
def approved_node(state: State) -> State:
    print("✅ Approved path taken.")
    return state

# Alternative path after rejection
def rejected_node(state: State) -> State:
    print("❌ Rejected path taken.")
    return state

# Build the graph
builder = StateGraph(State)
builder.add_node("generate_llm_output", generate_llm_output)
builder.add_node("human_approval", human_approval)
builder.add_node("approved_path", approved_node)
builder.add_node("rejected_path", rejected_node)

builder.set_entry_point("generate_llm_output")
builder.add_edge("generate_llm_output", "human_approval")
builder.add_edge("approved_path", END)
builder.add_edge("rejected_path", END)

checkpointer = InMemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# Run until the interrupt
config = {"configurable": {"thread_id": uuid.uuid4()}}
result = graph.invoke({}, config=config)
print(result["__interrupt__"])
# Output:
# Interrupt(value={'question': 'Do you approve the following output?', 'llm_output': 'This is the generated output.'}, ...)

# Simulate resuming with human input
# To test rejection, replace resume="approve" with resume="reject"
final_result = graph.invoke(Command(resume="approve"), config=config)
print(final_result)
```
```python
from langgraph.types import interrupt

def human_editing(state: State):
    ...
    result = interrupt(
        # Interrupt information to surface to the client.
        # Can be any JSON serializable value.
        {
            "task": "Review the output from the LLM and make any necessary edits.",
            "llm_generated_summary": state["llm_generated_summary"]
        }
    )

    # Update the state with the edited text
    return {
        "llm_generated_summary": result["edited_text"]
    }

# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_editing", human_editing)
graph = graph_builder.compile(checkpointer=checkpointer)

...

# After running the graph and hitting the interrupt, the graph will pause.
# Resume it with the edited text.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(
    Command(resume={"edited_text": "The edited text"}),
    config=thread_config
)
```
Extended example: edit state with interrupt
```python
from typing import TypedDict
import uuid

from langgraph.constants import START, END
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

# Define the graph state
class State(TypedDict):
    summary: str

# Simulate an LLM summary generation
def generate_summary(state: State) -> State:
    return {
        "summary": "The cat sat on the mat and looked at the stars."
    }

# Human editing node
def human_review_edit(state: State) -> State:
    result = interrupt({
        "task": "Please review and edit the generated summary if necessary.",
        "generated_summary": state["summary"]
    })
    return {
        "summary": result["edited_summary"]
    }

# Simulate downstream use of the edited summary
def downstream_use(state: State) -> State:
    print(f"✅ Using edited summary: {state['summary']}")
    return state

# Build the graph
builder = StateGraph(State)
builder.add_node("generate_summary", generate_summary)
builder.add_node("human_review_edit", human_review_edit)
builder.add_node("downstream_use", downstream_use)

builder.set_entry_point("generate_summary")
builder.add_edge("generate_summary", "human_review_edit")
builder.add_edge("human_review_edit", "downstream_use")
builder.add_edge("downstream_use", END)

# Set up in-memory checkpointing for interrupt support
checkpointer = InMemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# Invoke the graph until it hits the interrupt
config = {"configurable": {"thread_id": uuid.uuid4()}}
result = graph.invoke({}, config=config)

# Output interrupt payload
print(result["__interrupt__"])
# Example output:
# > [
# >     Interrupt(
# >         value={
# >             'task': 'Please review and edit the generated summary if necessary.',
# >             'generated_summary': 'The cat sat on the mat and looked at the stars.'
# >         },
# >         id='...'
# >     )
# > ]

# Resume the graph with human-edited input
edited_summary = "The cat lay on the rug, gazing peacefully at the night sky."
resumed_result = graph.invoke(
    Command(resume={"edited_summary": edited_summary}),
    config=config
)
print(resumed_result)
```
To review tool calls before they are executed, call interrupt() inside the tool itself. The graph pauses at the tool node, and you resume it with a Command carrying the human's decision.
```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import interrupt
from langgraph.prebuilt import create_react_agent

# An example of a sensitive tool that requires human review / approval
def book_hotel(hotel_name: str):
    """Book a hotel"""
    response = interrupt(  # (1)!
        f"Trying to call `book_hotel` with args {{'hotel_name': {hotel_name}}}. "
        "Please approve or suggest edits."
    )
    if response["type"] == "accept":
        pass
    elif response["type"] == "edit":
        hotel_name = response["args"]["hotel_name"]
    else:
        raise ValueError(f"Unknown response type: {response['type']}")

    return f"Successfully booked a stay at {hotel_name}."

checkpointer = InMemorySaver()  # (2)!

agent = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[book_hotel],
    checkpointer=checkpointer,  # (3)!
)
```
The interrupt function pauses the agent graph at a specific node. In this case, we call interrupt() at the beginning of the tool function, which pauses the graph at the node that executes the tool. The information inside interrupt() (e.g., tool calls) can be presented to a human, and the graph can be resumed with the user input (tool call approval, edit or feedback).
The InMemorySaver stores the agent state at every step of the tool-calling loop, which enables short-term memory and human-in-the-loop capabilities. This example keeps state in memory; in a production application, the agent state should be stored in a database.
Initialize the agent with the checkpointer.
Run the agent with the stream() method, passing the config object to specify the thread ID. This allows the agent to resume the same conversation on future invocations.
```python
config = {
    "configurable": {
        "thread_id": "1"
    }
}

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "book a stay at McKittrick hotel"}]},
    config
):
    print(chunk)
    print("\n")
```
You should see that the agent runs until it reaches the interrupt() call, at which point it pauses and waits for human input.
Resume the agent with a Command to continue based on human input. A minimal sketch, using the resume payload shape that the book_hotel tool above expects:
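```python
from langgraph.types import Command

# `agent` and `config` are the ones defined in the previous snippet.
for chunk in agent.stream(
    Command(resume={"type": "accept"}),
    # To edit the tool call instead:
    # Command(resume={"type": "edit", "args": {"hotel_name": "McKittrick Hotel"}}),
    config
):
    print(chunk)
    print("\n")
```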
You can create a wrapper to add interrupts to any tool. The example below provides a reference implementation compatible with Agent Inbox UI and Agent Chat UI.
Wrapper that adds human-in-the-loop to any tool
```python
from typing import Callable

from langchain_core.tools import BaseTool, tool as create_tool
from langchain_core.runnables import RunnableConfig
from langgraph.types import interrupt
from langgraph.prebuilt.interrupt import HumanInterruptConfig, HumanInterrupt

def add_human_in_the_loop(
    tool: Callable | BaseTool,
    *,
    interrupt_config: HumanInterruptConfig | None = None,
) -> BaseTool:
    """Wrap a tool to support human-in-the-loop review."""
    if not isinstance(tool, BaseTool):
        tool = create_tool(tool)

    if interrupt_config is None:
        interrupt_config = {
            "allow_accept": True,
            "allow_edit": True,
            "allow_respond": True,
        }

    @create_tool(  # (1)!
        tool.name,
        description=tool.description,
        args_schema=tool.args_schema
    )
    def call_tool_with_interrupt(config: RunnableConfig, **tool_input):
        request: HumanInterrupt = {
            "action_request": {
                "action": tool.name,
                "args": tool_input
            },
            "config": interrupt_config,
            "description": "Please review the tool call"
        }
        response = interrupt([request])[0]  # (2)!
        # approve the tool call
        if response["type"] == "accept":
            tool_response = tool.invoke(tool_input, config)
        # update tool call args
        elif response["type"] == "edit":
            tool_input = response["args"]["args"]
            tool_response = tool.invoke(tool_input, config)
        # respond to the LLM with user feedback
        elif response["type"] == "response":
            user_feedback = response["args"]
            tool_response = user_feedback
        else:
            raise ValueError(f"Unsupported interrupt response type: {response['type']}")

        return tool_response

    return call_tool_with_interrupt
```
This wrapper creates a new tool that calls interrupt() before executing the wrapped tool.
interrupt() uses the input and output format expected by the Agent Inbox UI:
- A list of HumanInterrupt objects is sent to Agent Inbox to render the interrupt information to the end user.
- The resume value is provided by Agent Inbox as a list (i.e., Command(resume=[...])).
You can use the wrapper to add interrupt() to any tool without having to add it inside the tool:
```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent

checkpointer = InMemorySaver()

def book_hotel(hotel_name: str):
    """Book a hotel"""
    return f"Successfully booked a stay at {hotel_name}."

agent = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[
        add_human_in_the_loop(book_hotel),  # (1)!
    ],
    checkpointer=checkpointer,
)

config = {"configurable": {"thread_id": "1"}}

# Run the agent
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "book a stay at McKittrick hotel"}]},
    config
):
    print(chunk)
    print("\n")
```
The add_human_in_the_loop wrapper is used to add interrupt() to the tool. This allows the agent to pause execution and wait for human input before proceeding with the tool call.
You should see that the agent runs until it reaches the interrupt() call, at which point it pauses and waits for human input.
Resume the agent with a Command to continue based on human input.
```python
from langgraph.types import Command

for chunk in agent.stream(
    Command(resume=[{"type": "accept"}]),
    # Command(resume=[{"type": "edit", "args": {"args": {"hotel_name": "McKittrick Hotel"}}}]),
    config
):
    print(chunk)
    print("\n")
```
If you need to validate the input provided by the human within the graph itself (rather than on the client side), you can achieve this by using multiple interrupt calls within a single node.
```python
from langgraph.types import interrupt

def human_node(state: State):
    """Human node with validation."""
    question = "What is your age?"

    while True:
        answer = interrupt(question)

        # Validate the answer; if it isn't valid, ask for input again.
        if not isinstance(answer, int) or answer < 0:
            question = f"'{answer}' is not a valid age. What is your age?"
            answer = None
            continue
        else:
            # If the answer is valid, we can proceed.
            break

    print(f"The human in the loop is {answer} years old.")
    return {
        "age": answer
    }
```
Extended example: validating user input
```python
from typing import TypedDict
import uuid

from langgraph.constants import START, END
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

# Define graph state
class State(TypedDict):
    age: int

# Node that asks for human input and validates it
def get_valid_age(state: State) -> State:
    prompt = "Please enter your age (must be a non-negative integer)."

    while True:
        user_input = interrupt(prompt)

        # Validate the input
        try:
            age = int(user_input)
            if age < 0:
                raise ValueError("Age must be non-negative.")
            break  # Valid input received
        except (ValueError, TypeError):
            prompt = f"'{user_input}' is not valid. Please enter a non-negative integer for age."

    return {"age": age}

# Node that uses the valid input
def report_age(state: State) -> State:
    print(f"✅ Human is {state['age']} years old.")
    return state

# Build the graph
builder = StateGraph(State)
builder.add_node("get_valid_age", get_valid_age)
builder.add_node("report_age", report_age)
builder.set_entry_point("get_valid_age")
builder.add_edge("get_valid_age", "report_age")
builder.add_edge("report_age", END)

# Create the graph with a memory checkpointer
checkpointer = InMemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# Run the graph until the first interrupt
config = {"configurable": {"thread_id": uuid.uuid4()}}
result = graph.invoke({}, config=config)
print(result["__interrupt__"])  # First prompt: "Please enter your age..."

# Simulate an invalid input (e.g., string instead of integer)
result = graph.invoke(Command(resume="not a number"), config=config)
print(result["__interrupt__"])  # Follow-up prompt with validation message

# Simulate a second invalid input (e.g., negative number)
result = graph.invoke(Command(resume="-10"), config=config)
print(result["__interrupt__"])  # Another retry

# Provide valid input
final_result = graph.invoke(Command(resume="25"), config=config)
print(final_result)  # Should include the valid age
```
To debug and test a graph, use static interrupts (also known as static breakpoints) to step through the graph execution one node at a time or to pause the graph execution at specific nodes. Static interrupts are triggered at defined points either before or after a node executes. You can set static interrupts by specifying interrupt_before and interrupt_after at compile time or run time.
Static interrupts are not recommended for human-in-the-loop workflows. Use dynamic interrupts instead.
```python
graph = graph_builder.compile(  # (1)!
    interrupt_before=["node_a"],  # (2)!
    interrupt_after=["node_b", "node_c"],  # (3)!
    checkpointer=checkpointer,  # (4)!
)

thread_config = {
    "configurable": {
        "thread_id": "some_thread"
    }
}

# Run the graph until the breakpoint
graph.invoke(inputs, config=thread_config)  # (5)!

# Resume the graph
graph.invoke(None, config=thread_config)  # (6)!
```
The breakpoints are set at compile time.
interrupt_before specifies the nodes where execution should pause before the node is executed.
interrupt_after specifies the nodes where execution should pause after the node is executed.
A checkpointer is required to enable breakpoints.
The graph is run until the first breakpoint is hit.
The graph is resumed by passing in None for the input. This will run the graph until the next breakpoint is hit.
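As noted above, static interrupts can also be supplied at run time instead of compile time. A minimal sketch, assuming the same graph and thread_config as above:

```python
# Pass the breakpoints per invocation instead of baking them in at compile time.
graph.invoke(
    inputs,
    config=thread_config,
    interrupt_before=["node_a"],
    interrupt_after=["node_b", "node_c"],
)
```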
Setting static breakpoints
```python
from IPython.display import Image, display
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    input: str

def step_1(state):
    print("---Step 1---")

def step_2(state):
    print("---Step 2---")

def step_3(state):
    print("---Step 3---")

builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("step_2", step_2)
builder.add_node("step_3", step_3)
builder.add_edge(START, "step_1")
builder.add_edge("step_1", "step_2")
builder.add_edge("step_2", "step_3")
builder.add_edge("step_3", END)

# Set up a checkpointer
checkpointer = InMemorySaver()  # (1)!

graph = builder.compile(
    checkpointer=checkpointer,  # (2)!
    interrupt_before=["step_3"]  # (3)!
)

# View
display(Image(graph.get_graph().draw_mermaid_png()))

# Input
initial_input = {"input": "hello world"}

# Thread
thread = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="values"):
    print(event)

# This will run until the breakpoint.
# You can get the state of the graph at this point.
print(graph.get_state(thread))

# You can continue the graph execution by passing in `None` for the input.
for event in graph.stream(None, thread, stream_mode="values"):
    print(event)
```
You can use LangGraph Studio to debug your graph. You can set static breakpoints in the UI, run the graph, and inspect the graph state at any point in the execution. LangGraph Studio is free with locally deployed applications using langgraph dev.
Place code with side effects, such as API calls, after the interrupt or in a separate node to avoid duplication, since all code before the interrupt is re-executed every time the node resumes.
```python
from langgraph.types import interrupt

def human_node(state: State):
    """Human node with validation."""
    answer = interrupt(question)
    api_call(answer)  # OK as it's after the interrupt
```
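For contrast, a sketch of the placement to avoid (question and api_call are the same illustrative names as above):

```python
from langgraph.types import interrupt

def human_node(state: State):
    """Human node with a side effect placed incorrectly."""
    api_call()  # NOT OK: re-executed every time the node resumes
    answer = interrupt(question)
```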
When invoking a subgraph as a function, the parent graph resumes execution from the beginning of the node that invoked the subgraph (and in which the interrupt was triggered). Similarly, the subgraph resumes from the beginning of the node where the interrupt() function was called.
```python
def node_in_parent_graph(state: State):
    some_code()  # <-- This will re-execute when the subgraph is resumed.
    # Invoke a subgraph as a function.
    # The subgraph contains an `interrupt` call.
    subgraph_result = subgraph.invoke(some_input)
    ...
```
Extended example: parent and subgraph execution flow
Say we have a parent graph with 3 nodes:

Parent Graph: node_1 → node_2 (subgraph call) → node_3

And the subgraph has 3 nodes, where the second node contains an interrupt:

Subgraph: sub_node_1 → sub_node_2 (interrupt) → sub_node_3

When resuming the graph, the execution will proceed as follows:
Skip node_1 in the parent graph (already executed, graph state was saved in snapshot).
Re-execute node_2 in the parent graph from the start.
Skip sub_node_1 in the subgraph (already executed, graph state was saved in snapshot).
Re-execute sub_node_2 in the subgraph from the beginning.
Continue with sub_node_3 and subsequent nodes.
Here is abbreviated example code that you can use to understand how subgraphs work with interrupts.
It counts the number of times each node is entered and prints the count.
```python
import uuid
from typing import TypedDict

from langgraph.graph import StateGraph
from langgraph.constants import START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

class State(TypedDict):
    """The graph state."""
    state_counter: int

counter_node_in_subgraph = 0

def node_in_subgraph(state: State):
    """A node in the sub-graph."""
    global counter_node_in_subgraph
    counter_node_in_subgraph += 1  # This code will **NOT** run again!
    print(f"Entered `node_in_subgraph` a total of {counter_node_in_subgraph} times")

counter_human_node = 0

def human_node(state: State):
    global counter_human_node
    counter_human_node += 1  # This code will run again!
    print(f"Entered human_node in sub-graph a total of {counter_human_node} times")
    answer = interrupt("what is your name?")
    print(f"Got an answer of {answer}")

checkpointer = InMemorySaver()

subgraph_builder = StateGraph(State)
subgraph_builder.add_node("some_node", node_in_subgraph)
subgraph_builder.add_node("human_node", human_node)
subgraph_builder.add_edge(START, "some_node")
subgraph_builder.add_edge("some_node", "human_node")
subgraph = subgraph_builder.compile(checkpointer=checkpointer)

counter_parent_node = 0

def parent_node(state: State):
    """This parent node will invoke the subgraph."""
    global counter_parent_node
    counter_parent_node += 1  # This code will run again on resuming!
    print(f"Entered `parent_node` a total of {counter_parent_node} times")

    # Invoke the subgraph as a function. The subgraph contains
    # an `interrupt` call, so this node re-runs on resume.
    subgraph_state = subgraph.invoke(state)
    return subgraph_state

builder = StateGraph(State)
builder.add_node("parent_node", parent_node)
builder.add_edge(START, "parent_node")

# A checkpointer must be enabled for interrupts to work!
checkpointer = InMemorySaver()
graph = builder.compile(checkpointer=checkpointer)

config = {
    "configurable": {
        "thread_id": uuid.uuid4(),
    }
}

for chunk in graph.stream({"state_counter": 1}, config):
    print(chunk)

print('--- Resuming ---')

for chunk in graph.stream(Command(resume="35"), config):
    print(chunk)
```
This will print out
```
Entered `parent_node` a total of 1 times
Entered `node_in_subgraph` a total of 1 times
Entered human_node in sub-graph a total of 1 times
{'__interrupt__': (Interrupt(value='what is your name?', id='...'),)}
--- Resuming ---
Entered `parent_node` a total of 2 times
Entered human_node in sub-graph a total of 2 times
Got an answer of 35
{'parent_node': {'state_counter': 1}}
```
Using multiple interrupts within a single node can be helpful for patterns like validating human input. However, using multiple interrupts in the same node can lead to unexpected behavior if not handled carefully.

When a node contains multiple interrupt calls, LangGraph keeps a list of resume values specific to the task executing the node. Whenever execution resumes, it starts at the beginning of the node. For each interrupt encountered, LangGraph checks if a matching value exists in the task's resume list. Matching is strictly index-based, so the order of interrupt calls within the node is critical.

To avoid issues, refrain from dynamically changing the node's structure between executions. This includes adding, removing, or reordering interrupt calls, as such changes can result in mismatched indices. These problems often arise from unconventional patterns, such as mutating state via Command(resume=..., update=SOME_STATE_MUTATION) or relying on global variables to modify the node's structure dynamically.
Extended example: incorrect code that introduces non-determinism
```python
import uuid
from typing import TypedDict, Optional

from langgraph.graph import StateGraph
from langgraph.constants import START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

class State(TypedDict):
    """The graph state."""
    age: Optional[str]
    name: Optional[str]

def human_node(state: State):
    if not state.get('name'):
        name = interrupt("what is your name?")
    else:
        name = "N/A"

    if not state.get('age'):
        age = interrupt("what is your age?")
    else:
        age = "N/A"

    print(f"Name: {name}. Age: {age}")

    return {
        "age": age,
        "name": name,
    }

builder = StateGraph(State)
builder.add_node("human_node", human_node)
builder.add_edge(START, "human_node")

# A checkpointer must be enabled for interrupts to work!
checkpointer = InMemorySaver()
graph = builder.compile(checkpointer=checkpointer)

config = {
    "configurable": {
        "thread_id": uuid.uuid4(),
    }
}

for chunk in graph.stream({"age": None, "name": None}, config):
    print(chunk)

for chunk in graph.stream(Command(resume="John", update={"name": "foo"}), config):
    print(chunk)
```
```
{'__interrupt__': (Interrupt(value='what is your name?', id='...'),)}
Name: N/A. Age: John
{'human_node': {'age': 'John', 'name': 'N/A'}}
```
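One way to avoid the mismatch, following the earlier advice to place interrupts in dedicated nodes, is to give each interrupt its own node so the interrupt order is fixed by the graph structure rather than by runtime state. A minimal sketch (a hypothetical rewrite of the node above):

```python
# Each interrupt lives in its own node, so resume values always match by position.
def ask_name(state: State):
    name = interrupt("what is your name?")
    return {"name": name}

def ask_age(state: State):
    age = interrupt("what is your age?")
    return {"age": age}

builder = StateGraph(State)
builder.add_node("ask_name", ask_name)
builder.add_node("ask_age", ask_age)
builder.add_edge(START, "ask_name")
builder.add_edge("ask_name", "ask_age")
graph = builder.compile(checkpointer=checkpointer)
```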