Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the current LangGraph Python or LangGraph JavaScript docs.
To review, edit, and approve tool calls in an agent or workflow, use interrupts to pause a graph and wait for human input. Interrupts use LangGraph’s persistence layer, which saves the graph state, to indefinitely pause graph execution until you resume.
Dynamic interrupts (also known as dynamic breakpoints) are triggered based on the current state of the graph. You set a dynamic interrupt by calling the interrupt function at the appropriate place in a node. The graph pauses, allowing for human intervention, and then resumes with the human's input. This is useful for tasks like approvals, edits, or gathering additional context. To use interrupt in your graph, you need a checkpointer and a thread ID, as the example below shows.
Extended example: using `interrupt`
```typescript
import * as z from "zod";
import { v4 as uuidv4 } from "uuid";
import { MemorySaver, StateGraph, START, interrupt, Command } from "@langchain/langgraph";

const StateAnnotation = z.object({
  someText: z.string(),
});

// Build the graph
const graphBuilder = new StateGraph(StateAnnotation)
  .addNode("humanNode", (state) => {
    const value = interrupt( // (1)!
      {
        textToRevise: state.someText // (2)!
      }
    );
    return {
      someText: value // (3)!
    };
  })
  .addEdge(START, "humanNode");

const checkpointer = new MemorySaver(); // (4)!

const graph = graphBuilder.compile({ checkpointer });

// Pass a thread ID to the graph to run it.
const config = { configurable: { thread_id: uuidv4() } };

// Run the graph until the interrupt is hit.
const result = await graph.invoke({ someText: "original text" }, config); // (5)!

console.log(result.__interrupt__); // (6)!
// > [
// >   {
// >     value: { textToRevise: 'original text' },
// >     resumable: true,
// >     ns: ['humanNode:6ce9e64f-edef-fe5d-f7dc-511fa9526960'],
// >     when: 'during'
// >   }
// > ]

console.log(await graph.invoke(new Command({ resume: "Edited text" }), config)); // (7)!
// > { someText: 'Edited text' }
```
interrupt(...) pauses execution at humanNode, surfacing the given payload to a human.
Any JSON serializable value can be passed to the interrupt function. Here, an object containing the text to revise.
Once resumed, the return value of interrupt(...) is the human-provided input, which is used to update the state.
A checkpointer is required to persist graph state. In production, this should be durable (e.g., backed by a database).
The graph is invoked with some initial state.
When the graph hits the interrupt, it returns an object with __interrupt__ containing the payload and metadata.
The graph is resumed with a Command({ resume: ... }), injecting the human’s input and continuing execution.
Interrupts are both powerful and ergonomic, but it’s important to note that they do not automatically resume execution from the interrupt point. Instead, they rerun the entire node where the interrupt was used. For this reason, interrupts are typically best placed at the start of a node or in a dedicated node.
When the interrupt function is used within a graph, execution pauses at that point and awaits user input. To resume execution, use the Command primitive, which can be supplied via the invoke or stream methods. The graph resumes execution from the beginning of the node where interrupt(...) was initially called. This time, the interrupt function returns the value provided in Command({ resume: value }) rather than pausing again. All code from the beginning of the node to the interrupt will be re-executed.
```typescript
// Resume graph execution by providing the user's input.
await graph.invoke(new Command({ resume: { age: "25" } }), threadConfig);
```
When nodes with interrupt conditions are run in parallel, it’s possible to have multiple interrupts in the task queue.
For example, a graph might run two nodes in parallel that each require human input; each paused node contributes its own entry to the interrupt payload.
There are four typical design patterns that you can implement using interrupt and Command:
Approve or reject: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involves routing the graph based on the human’s input.
Edit graph state: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves updating the state with the human’s input.
Review tool calls: Pause the graph to review and edit tool calls requested by the LLM before tool execution.
Validate human input: Pause the graph to validate human input before proceeding with the next step.
Below we show different design patterns that can be implemented using interrupt and Command.
Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.
```typescript
import { interrupt, Command } from "@langchain/langgraph";

// Add the node to the graph in an appropriate location
// and connect it to the relevant nodes.
graphBuilder.addNode("humanApproval", (state) => {
  const isApproved = interrupt({
    question: "Is this correct?",
    // Surface the output that should be
    // reviewed and approved by the human.
    llmOutput: state.llmOutput,
  });

  if (isApproved) {
    return new Command({ goto: "someNode" });
  } else {
    return new Command({ goto: "anotherNode" });
  }
});

const graph = graphBuilder.compile({ checkpointer });

// After running the graph and hitting the interrupt, the graph will pause.
// Resume it with either an approval or rejection.
const threadConfig = { configurable: { thread_id: "some_id" } };
await graph.invoke(new Command({ resume: true }), threadConfig);
```
Extended example: approve or reject with interrupt
```typescript
import * as z from "zod";
import { v4 as uuidv4 } from "uuid";
import {
  StateGraph,
  START,
  END,
  interrupt,
  Command,
  MemorySaver
} from "@langchain/langgraph";

// Define the shared graph state
const StateAnnotation = z.object({
  llmOutput: z.string(),
  decision: z.string(),
});

// Simulate an LLM output node
function generateLlmOutput(state: z.infer<typeof StateAnnotation>) {
  return { llmOutput: "This is the generated output." };
}

// Human approval node
function humanApproval(state: z.infer<typeof StateAnnotation>): Command {
  const decision = interrupt({
    question: "Do you approve the following output?",
    llmOutput: state.llmOutput
  });
  if (decision === "approve") {
    return new Command({ goto: "approvedPath", update: { decision: "approved" } });
  } else {
    return new Command({ goto: "rejectedPath", update: { decision: "rejected" } });
  }
}

// Next steps after approval
function approvedNode(state: z.infer<typeof StateAnnotation>) {
  console.log("✅ Approved path taken.");
  return state;
}

// Alternative path after rejection
function rejectedNode(state: z.infer<typeof StateAnnotation>) {
  console.log("❌ Rejected path taken.");
  return state;
}

// Build the graph
const builder = new StateGraph(StateAnnotation)
  .addNode("generateLlmOutput", generateLlmOutput)
  .addNode("humanApproval", humanApproval, { ends: ["approvedPath", "rejectedPath"] })
  .addNode("approvedPath", approvedNode)
  .addNode("rejectedPath", rejectedNode)
  .addEdge(START, "generateLlmOutput")
  .addEdge("generateLlmOutput", "humanApproval")
  .addEdge("approvedPath", END)
  .addEdge("rejectedPath", END);

const checkpointer = new MemorySaver();
const graph = builder.compile({ checkpointer });

// Run until interrupt
const config = { configurable: { thread_id: uuidv4() } };
const result = await graph.invoke({}, config);
console.log(result.__interrupt__);
// Output:
// [{
//   value: {
//     question: 'Do you approve the following output?',
//     llmOutput: 'This is the generated output.'
//   },
//   ...
// }]

// Simulate resuming with human input
// To test rejection, replace resume: "approve" with resume: "reject"
const finalResult = await graph.invoke(
  new Command({ resume: "approve" }),
  config
);
console.log(finalResult);
```
```typescript
import { interrupt } from "@langchain/langgraph";

function humanEditing(state: z.infer<typeof StateAnnotation>) {
  const result = interrupt({
    // Interrupt information to surface to the client.
    // Can be any JSON serializable value.
    task: "Review the output from the LLM and make any necessary edits.",
    llmGeneratedSummary: state.llmGeneratedSummary,
  });

  // Update the state with the edited text
  return {
    llmGeneratedSummary: result.editedText,
  };
}

// Add the node to the graph in an appropriate location
// and connect it to the relevant nodes.
graphBuilder.addNode("humanEditing", humanEditing);
const graph = graphBuilder.compile({ checkpointer });

// After running the graph and hitting the interrupt, the graph will pause.
// Resume it with the edited text.
const threadConfig = { configurable: { thread_id: "some_id" } };
await graph.invoke(
  new Command({ resume: { editedText: "The edited text" } }),
  threadConfig
);
```
Extended example: edit state with interrupt
```typescript
import * as z from "zod";
import { v4 as uuidv4 } from "uuid";
import {
  StateGraph,
  START,
  END,
  interrupt,
  Command,
  MemorySaver
} from "@langchain/langgraph";

// Define the graph state
const StateAnnotation = z.object({
  summary: z.string(),
});

// Simulate an LLM summary generation
function generateSummary(state: z.infer<typeof StateAnnotation>) {
  return { summary: "The cat sat on the mat and looked at the stars." };
}

// Human editing node
function humanReviewEdit(state: z.infer<typeof StateAnnotation>) {
  const result = interrupt({
    task: "Please review and edit the generated summary if necessary.",
    generatedSummary: state.summary
  });
  return { summary: result.editedSummary };
}

// Simulate downstream use of the edited summary
function downstreamUse(state: z.infer<typeof StateAnnotation>) {
  console.log(`✅ Using edited summary: ${state.summary}`);
  return state;
}

// Build the graph
const builder = new StateGraph(StateAnnotation)
  .addNode("generateSummary", generateSummary)
  .addNode("humanReviewEdit", humanReviewEdit)
  .addNode("downstreamUse", downstreamUse)
  .addEdge(START, "generateSummary")
  .addEdge("generateSummary", "humanReviewEdit")
  .addEdge("humanReviewEdit", "downstreamUse")
  .addEdge("downstreamUse", END);

// Set up in-memory checkpointing for interrupt support
const checkpointer = new MemorySaver();
const graph = builder.compile({ checkpointer });

// Invoke the graph until it hits the interrupt
const config = { configurable: { thread_id: uuidv4() } };
const result = await graph.invoke({}, config);

// Output interrupt payload
console.log(result.__interrupt__);
// Example output:
// [{
//   value: {
//     task: 'Please review and edit the generated summary if necessary.',
//     generatedSummary: 'The cat sat on the mat and looked at the stars.'
//   },
//   resumable: true,
//   ...
// }]

// Resume the graph with human-edited input
const editedSummary = "The cat lay on the rug, gazing peacefully at the night sky.";
const resumedResult = await graph.invoke(
  new Command({ resume: { editedSummary } }),
  config
);
console.log(resumedResult);
```
Resume with a Command to continue based on human input.
```typescript
import { MemorySaver } from "@langchain/langgraph";
import { interrupt } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import * as z from "zod";

// An example of a sensitive tool that requires human review / approval
const bookHotel = tool(
  async ({ hotelName }) => {
    const response = interrupt( // (1)!
      `Trying to call \`bookHotel\` with args {"hotelName": "${hotelName}"}. ` +
      "Please approve or suggest edits."
    );
    if (response.type === "accept") {
      // Continue with original args
    } else if (response.type === "edit") {
      hotelName = response.args.hotelName;
    } else {
      throw new Error(`Unknown response type: ${response.type}`);
    }
    return `Successfully booked a stay at ${hotelName}.`;
  },
  {
    name: "bookHotel",
    description: "Book a hotel",
    schema: z.object({
      hotelName: z.string(),
    }),
  }
);

const checkpointer = new MemorySaver(); // (2)!

const agent = createReactAgent({
  llm: model,
  tools: [bookHotel],
  checkpointSaver: checkpointer, // (3)!
});
```
The interrupt function pauses the agent graph at a specific node. In this case, we call interrupt() at the beginning of the tool function, which pauses the graph at the node that executes the tool. The information inside interrupt() (e.g., tool calls) can be presented to a human, and the graph can be resumed with the user input (tool call approval, edit or feedback).
The MemorySaver is used to store the agent state at every step in the tool calling loop. This enables short-term memory and human-in-the-loop capabilities. In this example, we use MemorySaver to store the agent state in memory. In a production application, the agent state will be stored in a database.
Initialize the agent with the checkpointSaver.
Run the agent with the stream() method, passing the config object to specify the thread ID. This allows the agent to resume the same conversation on future invocations.
You can create a wrapper to add interrupts to any tool. The example below provides a reference implementation compatible with Agent Inbox UI and Agent Chat UI.
This wrapper creates a new tool that calls interrupt() before executing the wrapped tool.
interrupt() uses a special input and output format expected by the Agent Inbox UI:
- a list of HumanInterrupt objects is sent to Agent Inbox to render interrupt information to the end user
- the resume value is provided by Agent Inbox as a list (i.e., Command({ resume: [...] }))
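The control flow of such a wrapper can be sketched in plain TypeScript. This is a simplified stand-in, not the actual reference implementation: the stand-in `interrupt` replays queued responses instead of pausing a graph, the request shape is a loose approximation of HumanInterrupt, and the real wrapper wraps a LangChain tool object rather than a bare function.

```typescript
// Queue of simulated human responses; Agent Inbox resumes with a list.
const responses: Array<{ type: string; args?: any }> = [];

// Stand-in for LangGraph's `interrupt`: replays a queued response
// so the control flow can run outside a graph.
function interrupt(request: unknown): Array<{ type: string; args?: any }> {
  return [responses.shift()!];
}

type ToolFn = (input: Record<string, any>) => Promise<string>;

// Wrap any tool function with a human review step before execution.
function addHumanInTheLoop(name: string, run: ToolFn): ToolFn {
  return async (input) => {
    // Surface the pending tool call in a HumanInterrupt-like request.
    const [response] = interrupt({
      actionRequest: { action: name, args: input },
      config: { allowAccept: true, allowEdit: true, allowRespond: true },
    });
    if (response.type === "accept") {
      return run(input); // Approved: run with the original arguments.
    } else if (response.type === "edit") {
      return run(response.args); // Run with human-edited arguments.
    } else if (response.type === "response") {
      return String(response.args); // Free-form feedback for the model.
    }
    throw new Error(`Unsupported response type: ${response.type}`);
  };
}
```

Queuing a response of `{ type: "edit", args: { hotelName: "..." } }` before invoking the wrapped tool, for instance, runs the tool with the edited arguments instead of the originals.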
You can use the wrapper to add interrupt() to any tool without having to add it inside the tool:
```typescript
import { MemorySaver } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import * as z from "zod";

const checkpointer = new MemorySaver();

const bookHotel = tool(
  async ({ hotelName }) => {
    return `Successfully booked a stay at ${hotelName}.`;
  },
  {
    name: "bookHotel",
    description: "Book a hotel",
    schema: z.object({
      hotelName: z.string(),
    }),
  }
);

const agent = createReactAgent({
  llm: model,
  tools: [
    addHumanInTheLoop(bookHotel), // (1)!
  ],
  checkpointSaver: checkpointer,
});

const config = { configurable: { thread_id: "1" } };

// Run the agent
const stream = await agent.stream(
  { messages: [{ role: "user", content: "book a stay at McKittrick hotel" }] },
  config
);

for await (const chunk of stream) {
  console.log(chunk);
  console.log("\n");
}
```
The addHumanInTheLoop wrapper is used to add interrupt() to the tool. This allows the agent to pause execution and wait for human input before proceeding with the tool call.
You should see that the agent runs until it reaches the interrupt() call, at which point it pauses and waits for human input.
Resume the agent with a Command to continue based on human input.
If you need to validate the input provided by the human within the graph itself (rather than on the client side), you can achieve this by using multiple interrupt calls within a single node.
```typescript
import { interrupt } from "@langchain/langgraph";

graphBuilder.addNode("humanNode", (state) => {
  // Human node with validation.
  let question = "What is your age?";
  let answer;

  while (true) {
    answer = interrupt(question);

    // Validate answer, if the answer isn't valid ask for input again.
    if (typeof answer !== "number" || answer < 0) {
      question = `'${answer}' is not a valid age. What is your age?`;
      continue;
    } else {
      // If the answer is valid, we can proceed.
      break;
    }
  }

  console.log(`The human in the loop is ${answer} years old.`);
  return {
    age: answer,
  };
});
```
Extended example: validating user input
```typescript
import * as z from "zod";
import { v4 as uuidv4 } from "uuid";
import {
  StateGraph,
  START,
  END,
  interrupt,
  Command,
  MemorySaver
} from "@langchain/langgraph";

// Define graph state
const StateAnnotation = z.object({
  age: z.number(),
});

// Node that asks for human input and validates it
function getValidAge(state: z.infer<typeof StateAnnotation>) {
  let prompt = "Please enter your age (must be a non-negative integer).";

  while (true) {
    const userInput = interrupt(prompt);

    // Validate the input
    try {
      const age = parseInt(userInput as string);
      if (isNaN(age) || age < 0) {
        throw new Error("Age must be non-negative.");
      }
      return { age };
    } catch (error) {
      prompt = `'${userInput}' is not valid. Please enter a non-negative integer for age.`;
    }
  }
}

// Node that uses the valid input
function reportAge(state: z.infer<typeof StateAnnotation>) {
  console.log(`✅ Human is ${state.age} years old.`);
  return state;
}

// Build the graph
const builder = new StateGraph(StateAnnotation)
  .addNode("getValidAge", getValidAge)
  .addNode("reportAge", reportAge)
  .addEdge(START, "getValidAge")
  .addEdge("getValidAge", "reportAge")
  .addEdge("reportAge", END);

// Create the graph with a memory checkpointer
const checkpointer = new MemorySaver();
const graph = builder.compile({ checkpointer });

// Run the graph until the first interrupt
const config = { configurable: { thread_id: uuidv4() } };
let result = await graph.invoke({}, config);
console.log(result.__interrupt__); // First prompt: "Please enter your age..."

// Simulate an invalid input (e.g., string instead of integer)
result = await graph.invoke(new Command({ resume: "not a number" }), config);
console.log(result.__interrupt__); // Follow-up prompt with validation message

// Simulate a second invalid input (e.g., negative number)
result = await graph.invoke(new Command({ resume: "-10" }), config);
console.log(result.__interrupt__); // Another retry

// Provide valid input
const finalResult = await graph.invoke(new Command({ resume: "25" }), config);
console.log(finalResult); // Should include the valid age
```
Place code with side effects, such as API calls, after the interrupt or in a separate node to avoid duplication, as these are re-triggered every time the node is resumed.
```typescript
import { interrupt } from "@langchain/langgraph";

function humanNode(state: z.infer<typeof StateAnnotation>) {
  const answer = interrupt(question);
  apiCall(answer); // OK as it's after the interrupt, so it is not re-triggered
}
```
When invoking a subgraph as a function and an interrupt is triggered inside it, the parent graph will resume execution from the beginning of the node where the subgraph was invoked. Similarly, the subgraph will resume from the beginning of the node where the interrupt() function was called.
```typescript
async function nodeInParentGraph(state: z.infer<typeof StateAnnotation>) {
  someCode(); // <-- This will re-execute when the subgraph is resumed.

  // Invoke a subgraph as a function.
  // The subgraph contains an `interrupt` call.
  const subgraphResult = await subgraph.invoke(someInput);
  // ...
}
```
Extended example: parent and subgraph execution flow
Say we have a parent graph with 3 nodes:

Parent Graph: node_1 → node_2 (subgraph call) → node_3

And the subgraph has 3 nodes, where the second node contains an interrupt:

Subgraph: sub_node_1 → sub_node_2 (interrupt) → sub_node_3

When resuming the graph, the execution will proceed as follows:
Skip node_1 in the parent graph (already executed, graph state was saved in snapshot).
Re-execute node_2 in the parent graph from the start.
Skip sub_node_1 in the subgraph (already executed, graph state was saved in snapshot).
Re-execute sub_node_2 in the subgraph from the beginning.
Continue with sub_node_3 and subsequent nodes.
Here is abbreviated example code that you can use to understand how subgraphs work with interrupts.
It counts the number of times each node is entered and prints the count.
```typescript
import { v4 as uuidv4 } from "uuid";
import {
  StateGraph,
  START,
  interrupt,
  Command,
  MemorySaver
} from "@langchain/langgraph";
import * as z from "zod";

const StateAnnotation = z.object({
  stateCounter: z.number(),
});

// Global variable to track the number of attempts
let counterNodeInSubgraph = 0;

function nodeInSubgraph(state: z.infer<typeof StateAnnotation>) {
  // A node in the sub-graph.
  counterNodeInSubgraph += 1; // This code will **NOT** run again!
  console.log(`Entered 'nodeInSubgraph' a total of ${counterNodeInSubgraph} times`);
  return {};
}

let counterHumanNode = 0;

function humanNode(state: z.infer<typeof StateAnnotation>) {
  counterHumanNode += 1; // This code will run again!
  console.log(`Entered humanNode in sub-graph a total of ${counterHumanNode} times`);
  const answer = interrupt("what is your name?");
  console.log(`Got an answer of ${answer}`);
  return {};
}

const checkpointer = new MemorySaver();

const subgraphBuilder = new StateGraph(StateAnnotation)
  .addNode("someNode", nodeInSubgraph)
  .addNode("humanNode", humanNode)
  .addEdge(START, "someNode")
  .addEdge("someNode", "humanNode");
const subgraph = subgraphBuilder.compile({ checkpointer });

let counterParentNode = 0;

async function parentNode(state: z.infer<typeof StateAnnotation>) {
  // This parent node will invoke the subgraph.
  counterParentNode += 1; // This code will run again on resuming!
  console.log(`Entered 'parentNode' a total of ${counterParentNode} times`);

  // Invoke the subgraph; its update of the shared state key
  // does not conflict with the parent graph.
  const subgraphState = await subgraph.invoke(state);
  return subgraphState;
}

const builder = new StateGraph(StateAnnotation)
  .addNode("parentNode", parentNode)
  .addEdge(START, "parentNode");

// A checkpointer must be enabled for interrupts to work!
const graph = builder.compile({ checkpointer });

const config = {
  configurable: {
    thread_id: uuidv4(),
  }
};

const stream = await graph.stream({ stateCounter: 1 }, config);
for await (const chunk of stream) {
  console.log(chunk);
}

console.log('--- Resuming ---');

const resumeStream = await graph.stream(new Command({ resume: "35" }), config);
for await (const chunk of resumeStream) {
  console.log(chunk);
}
```
This will print out
```
Entered 'parentNode' a total of 1 times
Entered 'nodeInSubgraph' a total of 1 times
Entered humanNode in sub-graph a total of 1 times
{
  __interrupt__: [{
    value: 'what is your name?',
    resumable: true,
    ns: [
      'parentNode:4c3a0248-21f0-1287-eacf-3002bc304db4',
      'humanNode:2fe86d52-6f70-2a3f-6b2f-b1eededd6348'
    ],
    when: 'during'
  }]
}
--- Resuming ---
Entered 'parentNode' a total of 2 times
Entered humanNode in sub-graph a total of 2 times
Got an answer of 35
{ parentNode: null }
```
Using multiple interrupts within a single node can be helpful for patterns like validating human input. However, using multiple interrupts in the same node can lead to unexpected behavior if not handled carefully.

When a node contains multiple interrupt calls, LangGraph keeps a list of resume values specific to the task executing the node. Whenever execution resumes, it starts at the beginning of the node. For each interrupt encountered, LangGraph checks if a matching value exists in the task’s resume list. Matching is strictly index-based, so the order of interrupt calls within the node is critical.

To avoid issues, refrain from dynamically changing the node’s structure between executions. This includes adding, removing, or reordering interrupt calls, as such changes can result in mismatched indices. These problems often arise from unconventional patterns, such as mutating state via Command({ resume: ..., update: SOME_STATE_MUTATION }) or relying on global variables to modify the node’s structure dynamically.
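The index-based matching can be illustrated with a small stand-in runtime (plain TypeScript, not LangGraph internals): every resume re-runs the node from the top, and the i-th interrupt call consumes the i-th stored resume value, which is why reordering interrupt calls between runs would pair values with the wrong prompts.

```typescript
// A stand-in runtime mimicking LangGraph's per-task resume list.
class TaskRuntime {
  private resumeValues: unknown[] = [];
  private cursor = 0;

  // The i-th `interrupt` call in a run matches the i-th resume value.
  interrupt(payload: unknown): unknown {
    if (this.cursor < this.resumeValues.length) {
      return this.resumeValues[this.cursor++];
    }
    throw { pause: payload }; // Pause: surface the payload to the human.
  }

  // Resume: store the new value, then re-run the node from the beginning.
  run(node: () => void, resume?: unknown): unknown {
    if (resume !== undefined) this.resumeValues.push(resume);
    this.cursor = 0;
    try {
      node();
      return "done";
    } catch (e: any) {
      if (e.pause !== undefined) return e.pause; // paused; report the prompt
      throw e;
    }
  }
}

const rt = new TaskRuntime();
const answers: unknown[] = [];

const node = () => {
  // Two interrupts in a fixed order; matching is purely positional.
  const name = rt.interrupt("What is your name?");
  const age = rt.interrupt("What is your age?");
  answers.splice(0, answers.length, name, age);
};

const r1 = rt.run(node);        // pauses at the first interrupt
const r2 = rt.run(node, "Ada"); // first value matched by index; pauses at the second
const r3 = rt.run(node, 36);    // both values matched; the node completes
```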