A single agent might struggle if it needs to specialize in multiple domains or manage many tools. To tackle this, you can break your agent into smaller, independent agents and compose them into a multi-agent system. In multi-agent systems, agents need to communicate with each other. They do so via handoffs — a primitive that describes which agent to hand control to and the payload to send to that agent. This guide covers how to create handoffs and how to use them to build a multi-agent system. To get started with building multi-agent systems, check out the LangGraph prebuilt implementations of two of the most popular multi-agent architectures — supervisor and swarm.

Handoffs

To set up communication between the agents in a multi-agent system, you can use handoffs — a pattern where one agent hands off control to another. Handoffs allow you to specify:
  • destination: target agent to navigate to (e.g., name of the LangGraph node to go to)
  • payload: information to pass to that agent (e.g., state update)

Create handoffs

To implement handoffs, you can return Command objects from your agent nodes or tools:
from typing import Annotated
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import create_react_agent, InjectedState
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.types import Command

def create_handoff_tool(*, agent_name: str, description: str | None = None):
    name = f"transfer_to_{agent_name}"
    description = description or f"Transfer to {agent_name}"

    @tool(name, description=description)
    def handoff_tool(
        # highlight-next-line
        state: Annotated[MessagesState, InjectedState], # (1)!
        # highlight-next-line
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": name,
            "tool_call_id": tool_call_id,
        }
        return Command(  # (2)!
            # highlight-next-line
            goto=agent_name,  # (3)!
            # highlight-next-line
            update={"messages": state["messages"] + [tool_message]},  # (4)!
            # highlight-next-line
            graph=Command.PARENT,  # (5)!
        )
    return handoff_tool
  1. Access the state of the agent that is calling the handoff tool using the InjectedState annotation.
  2. The Command primitive allows specifying a state update and a node transition as a single operation, making it useful for implementing handoffs.
  3. Name of the agent or node to hand off to.
  4. Take the agent’s messages and add them to the parent’s state as part of the handoff. The next agent will see the parent state.
  5. Indicate to LangGraph that we need to navigate to the agent node in the parent multi-agent graph.
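For example, you can use this factory to create a concrete handoff tool for a hotel-booking agent (the agent name here is illustrative):
transfer_to_hotel_assistant = create_handoff_tool(
    agent_name="hotel_assistant",
    description="Transfer the user to the hotel-booking assistant.",
)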
If you want to use tools that return Command, you can either use prebuilt create_react_agent / ToolNode components, or implement your own tool-executing node that collects Command objects returned by the tools and returns a list of them, e.g.:
def call_tools(state):
    # `tools_by_name` maps each tool's name to the tool object defined elsewhere
    tool_calls = state["messages"][-1].tool_calls
    commands = [tools_by_name[tool_call["name"]].invoke(tool_call) for tool_call in tool_calls]
    return commands
This handoff implementation assumes that:
  • each agent receives the overall message history (across all agents) of the multi-agent system as its input. If you want more control over agent inputs, see this section
  • each agent outputs its internal message history to the overall message history of the multi-agent system. If you want more control over how agent outputs are added, wrap the agent in a separate node function:
    def call_hotel_assistant(state):
        # return agent's final response,
        # excluding inner monologue
        response = hotel_assistant.invoke(state)
        # highlight-next-line
        return {"messages": response["messages"][-1]}
    

Control agent inputs

You can use the Send() primitive to directly send data to the worker agents during the handoff. For example, you can request that the calling agent populate a task description for the next agent:

from typing import Annotated
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import InjectedState
from langgraph.graph import StateGraph, START, MessagesState
# highlight-next-line
from langgraph.types import Command, Send

def create_task_description_handoff_tool(
    *, agent_name: str, description: str | None = None
):
    name = f"transfer_to_{agent_name}"
    description = description or f"Ask {agent_name} for help."

    @tool(name, description=description)
    def handoff_tool(
        # this is populated by the calling agent
        task_description: Annotated[
            str,
            "Description of what the next agent should do, including all of the relevant context.",
        ],
        # these parameters are ignored by the LLM
        state: Annotated[MessagesState, InjectedState],
    ) -> Command:
        task_description_message = {"role": "user", "content": task_description}
        agent_input = {**state, "messages": [task_description_message]}
        return Command(
            # highlight-next-line
            goto=[Send(agent_name, agent_input)],
            graph=Command.PARENT,
        )

    return handoff_tool
See the multi-agent supervisor example for a full example of using Send() in handoffs.
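As a rough sketch, a task-description handoff tool like this can be wired into a supervisor-style graph along the following lines. The supervisor and research_agent names and the web_search tool are illustrative assumptions, not part of the snippet above:
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
from langgraph.graph import StateGraph, START, MessagesState

@tool
def web_search(query: str) -> str:
    """Hypothetical search tool, stubbed out for the example."""
    return f"Results for: {query}"

assign_to_research_agent = create_task_description_handoff_tool(
    agent_name="research_agent",
    description="Assign a research task to the research agent.",
)

supervisor = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[assign_to_research_agent],
    name="supervisor",
)
research_agent = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[web_search],
    name="research_agent",
)

supervisor_graph = (
    StateGraph(MessagesState)
    .add_node(supervisor)
    .add_node(research_agent)
    # route the worker back to the supervisor once it finishes
    .add_edge("research_agent", "supervisor")
    .add_edge(START, "supervisor")
    .compile()
)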

Build a multi-agent system

You can use handoffs in any agents built with LangGraph. We recommend using the prebuilt agent or ToolNode, as they natively support handoff tools that return Command. Below is an example of how you can implement a multi-agent system for booking travel using handoffs:
from langgraph.prebuilt import create_react_agent
from langgraph.graph import StateGraph, START, MessagesState

def create_handoff_tool(*, agent_name: str, description: str | None = None):
    # same implementation as above: the inner handoff tool returns a Command
    ...
    return handoff_tool

# Handoffs
transfer_to_hotel_assistant = create_handoff_tool(agent_name="hotel_assistant")
transfer_to_flight_assistant = create_handoff_tool(agent_name="flight_assistant")

# Define agents
flight_assistant = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    # highlight-next-line
    tools=[..., transfer_to_hotel_assistant],
    # highlight-next-line
    name="flight_assistant"
)
hotel_assistant = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    # highlight-next-line
    tools=[..., transfer_to_flight_assistant],
    # highlight-next-line
    name="hotel_assistant"
)

# Define multi-agent graph
multi_agent_graph = (
    StateGraph(MessagesState)
    # highlight-next-line
    .add_node(flight_assistant)
    # highlight-next-line
    .add_node(hotel_assistant)
    .add_edge(START, "flight_assistant")
    .compile()
)
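Once compiled, the graph can be invoked like any other LangGraph graph. Because it uses MessagesState, the input is a dict with a messages key; the user request below is just an illustration:
result = multi_agent_graph.invoke(
    {"messages": [{"role": "user", "content": "Book me a flight to Boston and a hotel near the airport."}]}
)
for message in result["messages"]:
    print(f"{message.type}: {message.content}")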

Multi-turn conversation

Users might want to engage in a multi-turn conversation with one or more agents. To build a system that can handle this, you can create a node that uses an interrupt to collect user input and routes back to the active agent. The agents can then be implemented as nodes in a graph that executes agent steps and determines the next action:
  1. Wait for user input to continue the conversation, or
  2. Route to another agent (or back to itself, such as in a loop) via a handoff
from typing import Literal

from langgraph.types import Command, interrupt

def human(state) -> Command[Literal["agent", "another_agent"]]:
    """A node for collecting user input."""
    user_input = interrupt(value="Ready for user input.")

    # Determine the active agent.
    active_agent = ...

    ...
    return Command(
        update={
            "messages": [{
                "role": "human",
                "content": user_input,
            }]
        },
        goto=active_agent
    )

def agent(state) -> Command[Literal["agent", "another_agent", "human"]]:
    # The condition for routing/halting can be anything, e.g. LLM tool call / structured output, etc.
    goto = get_next_agent(...)  # 'agent' / 'another_agent'
    if goto:
        return Command(goto=goto, update={"my_state_key": "my_state_value"})
    else:
        return Command(goto="human") # Go to human node
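Below is a rough sketch of how these nodes could be assembled into a runnable graph. interrupt() requires a checkpointer, and the paused graph is resumed with Command(resume=...); the another_agent node is assumed to be implemented the same way as agent:
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.types import Command

builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("another_agent", another_agent)  # assumed to mirror `agent`
builder.add_node("human", human)
builder.add_edge(START, "agent")
# interrupt() pauses execution, so a checkpointer is required to resume the thread
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "1"}}
# first turn: runs until the `human` node calls interrupt()
graph.invoke({"messages": [{"role": "user", "content": "Hi!"}]}, config)
# later turns: resume from the interrupt with the next user message
graph.invoke(Command(resume="I'd like to book a hotel."), config)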

Prebuilt implementations

LangGraph comes with prebuilt implementations of two of the most popular multi-agent architectures:
  • supervisor — individual agents are coordinated by a central supervisor agent. The supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements. You can use the langgraph-supervisor library to create a supervisor multi-agent system.
  • swarm — agents dynamically hand off control to one another based on their specializations. The system remembers which agent was last active, ensuring that on subsequent interactions the conversation resumes with that agent. You can use the langgraph-swarm library to create a swarm multi-agent system.
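For reference, here is a hedged sketch of how these libraries are typically used, reusing the flight_assistant and hotel_assistant agents from the booking example above; the exact parameter names may vary between library versions:
from langchain.chat_models import init_chat_model
from langgraph_supervisor import create_supervisor
from langgraph_swarm import create_swarm

# supervisor: a central agent delegates booking tasks to the two workers
supervisor_graph = create_supervisor(
    agents=[flight_assistant, hotel_assistant],
    model=init_chat_model("anthropic:claude-3-5-sonnet-latest"),
    prompt="You manage a flight assistant and a hotel assistant. Delegate booking tasks to them.",
).compile()

# swarm: the agents hand off to each other and the last active agent is remembered across turns
swarm_graph = create_swarm(
    agents=[flight_assistant, hotel_assistant],
    default_active_agent="flight_assistant",
).compile()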