The Human-in-the-Loop (HITL) middleware lets you add human oversight to agent tool calls. When a model proposes an action that might require review — for example, writing to a file or executing SQL — the middleware can pause execution and wait for a decision. It does this by checking each tool call against a configurable policy. If intervention is needed, the middleware issues an interrupt that halts execution. The graph state is saved using LangGraph’s persistence layer, so execution can pause safely and resume later. A human decision then determines what happens next: the action can be approved as-is (approve), modified before running (edit), or rejected with feedback (reject).

Interrupt decision types

The middleware defines three built-in ways a human can respond to an interrupt:
| Decision Type | Description | Example Use Case |
| --- | --- | --- |
| ✅ approve | The action is approved as-is and executed without changes. | Send an email draft exactly as written |
| ✏️ edit | The tool call is executed with modifications. | Change the recipient before sending an email |
| ❌ reject | The tool call is rejected, with an explanation added to the conversation. | Reject an email draft and explain how to rewrite it |
The available decision types for each tool depend on the policy you configure in interruptOn. When multiple tool calls are paused at the same time, each action requires a separate decision, and decisions must be provided in the same order as the actions appear in the interrupt request.
When editing tool arguments, make changes conservatively. Significant modifications to the original arguments may cause the model to re-evaluate its approach and potentially execute the tool multiple times or take unexpected actions.

Configuring interrupts

To use HITL, add the middleware to the agent’s middleware list when creating the agent. You configure it with a mapping of tool actions to the decision types that are allowed for each action. The middleware will interrupt execution when a tool call matches an action in the mapping.
import { createAgent, humanInTheLoopMiddleware } from "langchain"; 
import { MemorySaver } from "@langchain/langgraph"; 

const agent = createAgent({
    model: "openai:gpt-4o",
    tools: [writeFileTool, executeSQLTool, readDataTool],
    middleware: [
        humanInTheLoopMiddleware({
            interruptOn: {
                write_file: true, // All decisions (approve, edit, reject) allowed
                execute_sql: {
                    allowedDecisions: ["approve", "reject"],
                    // No editing allowed
                    description: "🚨 SQL execution requires DBA approval",
                },
                // Safe operation, no approval needed
                read_data: false,
            },
            // Prefix for interrupt messages - combined with tool name and args to form the full message
            // e.g., "Tool execution pending approval: execute_sql with query='DELETE FROM...'"
            // Individual tools can override this by specifying a "description" in their interrupt config
            descriptionPrefix: "Tool execution pending approval",
        }),
    ],
    // Human-in-the-loop requires checkpointing to handle interrupts.
    // In production, use a persistent checkpointer like PostgresSaver.
    checkpointer: new MemorySaver(), 
});
You must configure a checkpointer to persist the graph state across interrupts. In production, use a persistent checkpointer like @[PostgresSaver]. For testing or prototyping, use @[MemorySaver]. When invoking the agent, pass a config that includes the thread ID to associate execution with a conversation thread. See the LangGraph interrupts documentation for details.
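For production, wiring up a Postgres-backed checkpointer might look like the following sketch. It assumes the @langchain/langgraph-checkpoint-postgres package is installed, and the connection string is a placeholder:

import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

// Placeholder connection string; point this at your own database.
const checkpointer = PostgresSaver.fromConnString(
    "postgresql://user:password@localhost:5432/langgraph"
);
// Creates the checkpoint tables on first run.
await checkpointer.setup();

// Pass it to createAgent via the `checkpointer` option shown above.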

Responding to interrupts

When you invoke the agent, it runs until it either completes or an interrupt is raised. An interrupt is triggered when a tool call matches the policy you configured in interruptOn. In that case, the invocation result will include an __interrupt__ field with the actions that require review. You can then present those actions to a reviewer and resume execution once decisions are provided.
import { HumanMessage } from "@langchain/core/messages";
import { Command } from "@langchain/langgraph";

// You must provide a thread ID to associate the execution with a conversation thread,
// so the conversation can be paused and resumed (as is needed for human review).
const config = { configurable: { thread_id: "some_id" } }; 

// Run the graph until the interrupt is hit.
const result = await agent.invoke(
    {
        messages: [new HumanMessage("Delete old records from the database")],
    },
    config
);


// The interrupt contains the full HITL request with action_requests and review_configs
console.log(result.__interrupt__);
// > [
// >    Interrupt(
// >       value: {
// >          action_requests: [
// >             {
// >                name: 'execute_sql',
// >                arguments: { query: 'DELETE FROM records WHERE created_at < NOW() - INTERVAL \'30 days\';' },
// >                description: 'Tool execution pending approval\n\nTool: execute_sql\nArgs: {...}'
// >             }
// >          ],
// >          review_configs: [
// >             {
// >                action_name: 'execute_sql',
// >                allowed_decisions: ['approve', 'reject']
// >             }
// >          ]
// >       }
// >    )
// > ]
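
// To surface pending actions to a reviewer, iterate over the payload above
// (a sketch; the display logic is illustrative):
const request = result.__interrupt__[0].value;
for (const action of request.action_requests) {
    console.log(`Pending: ${action.name}`);
    console.log(`Arguments: ${JSON.stringify(action.arguments)}`);
}
for (const review of request.review_configs) {
    console.log(`${review.action_name} allows: ${review.allowed_decisions.join(", ")}`);
}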

// Resume with approval decision
await agent.invoke(
    new Command({ 
        resume: { decisions: [{ type: "approve" }] }, // or "edit", "reject"
    }), 
    config // Same thread ID to resume the paused conversation
);

Decision types

A decision is one of three types: ✅ approve, ✏️ edit, or ❌ reject. Examples of each follow.
Use approve to accept the tool call as-is and execute it without changes.
await agent.invoke(
    new Command({
        // Decisions are provided as a list, one per action under review.
        // The order of decisions must match the order of actions
        // listed in the `__interrupt__` request.
        resume: {
            decisions: [
                {
                    type: "approve",
                }
            ]
        }
    }),
    config  // Same thread ID to resume the paused conversation
);
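Use edit to modify the tool call before it runs. The sketch below assumes the decision carries the replacement call in an editedAction field with name and args; these field names are an assumption, so check the middleware reference for the exact shape. The write_file arguments are hypothetical:
await agent.invoke(
    new Command({
        resume: {
            decisions: [
                {
                    type: "edit",
                    // Assumed shape: the replacement tool call to execute.
                    editedAction: {
                        name: "write_file",
                        args: { path: "notes.txt", content: "Reviewed draft" },
                    },
                },
            ],
        },
    }),
    config  // Same thread ID to resume the paused conversation
);
Use reject to block the call and return feedback to the model. The message field below is likewise an assumed name for the explanation added to the conversation:
await agent.invoke(
    new Command({
        resume: {
            decisions: [
                {
                    type: "reject",
                    // Assumed field: explanation sent back to the model.
                    message: "Do not delete records; archive them instead.",
                },
            ],
        },
    }),
    config  // Same thread ID to resume the paused conversation
);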

Execution lifecycle

The middleware defines an afterModel hook that runs after the model generates a response but before any tool calls are executed:
  1. The agent invokes the model to generate a response.
  2. The middleware inspects the response for tool calls.
  3. If any calls require human input, the middleware builds a HITLRequest with action_requests and review_configs and calls interrupt.
  4. The agent waits for human decisions.
  5. Based on the HITLResponse decisions, the middleware executes approved or edited calls, synthesizes @[ToolMessage]s for rejected calls, and resumes execution.

Custom HITL logic

For more specialized workflows, you can build custom HITL logic directly using the interrupt primitive and middleware abstraction. Review the execution lifecycle above to understand how to integrate interrupts into the agent’s operation.
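As a starting point, here is a minimal sketch of a custom middleware that pauses on a single tool. It assumes the createMiddleware helper from langchain and the interrupt primitive from @langchain/langgraph; the interrupt payload and the resume-with-a-boolean convention are illustrative choices, not the built-in HITL format:

import { createMiddleware } from "langchain";
import { interrupt } from "@langchain/langgraph";

// Illustrative middleware: pause whenever the model proposes an execute_sql call.
const sqlApprovalMiddleware = createMiddleware({
    name: "SQLApproval",
    afterModel: (state) => {
        const lastMessage = state.messages.at(-1);
        const toolCalls = lastMessage?.tool_calls ?? [];
        if (!toolCalls.some((call) => call.name === "execute_sql")) {
            return; // No SQL call; nothing to review.
        }
        // Pauses the graph; the value passed to `new Command({ resume: ... })`
        // becomes the return value here.
        const approved = interrupt({
            question: "Approve this SQL?",
            toolCalls,
        });
        if (!approved) {
            // A real implementation would rewrite or strip the rejected tool
            // call (see the lifecycle above); this sketch simply aborts.
            throw new Error("SQL execution rejected by reviewer.");
        }
    },
});

With this sketch, resuming with new Command({ resume: true }) approves the call, and resume: false rejects it.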