Agent Server exposes A2A endpoints at /a2a/{assistant_id}.
Supported methods
Agent Server supports the following A2A RPC methods:
- message/send: Send a message to an assistant and receive a complete response
- message/stream: Send a message and stream responses in real-time using Server-Sent Events (SSE)
- tasks/get: Retrieve the status and results of a previously created task
Agent card discovery
Each assistant automatically exposes an A2A Agent Card that describes its capabilities and provides the information needed for other agents to connect. You can retrieve the agent card for any assistant from its agent card endpoint.
Requirements
To use A2A, ensure you have the following dependency installed:
langgraph-api >= 0.4.21
Usage overview
To enable A2A:
- Upgrade to use langgraph-api>=0.4.21.
- Deploy your agent with message-based state structure.
- Connect with other A2A-compatible agents using the endpoint.
Creating an A2A-compatible agent
This example creates an A2A-compatible agent that processes incoming messages using OpenAI’s API and maintains conversational state. The agent defines a message-based state structure and handles the A2A protocol’s message format. To be compatible with the A2A “text” parts, the agent must have a messages key in state.
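Independent of framework details, the required state shape can be sketched as follows. AgentState and chat_node are hypothetical names, and the echo reply stands in for a real model call; in a LangGraph agent, MessagesState provides this messages key:

```python
from typing import TypedDict


class AgentState(TypedDict):
    # The "messages" key is what makes the agent compatible with A2A "text"
    # parts: each incoming text part lands here as a conversation message.
    messages: list[dict]


def chat_node(state: AgentState) -> AgentState:
    """Append an assistant reply; a real agent would call a model here."""
    last = state["messages"][-1]["content"]
    reply = {"role": "assistant", "content": f"Echo: {last}"}
    return {"messages": state["messages"] + [reply]}
```

In a real deployment this node would be wired into a graph and exported so the server can serve it over the A2A endpoint.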
The A2A protocol uses two identifiers to maintain conversational continuity:
- contextId: Groups messages into a conversation thread (like a session ID)
- taskId: Identifies each individual request within that conversation
On the first message, omit contextId and taskId; the agent will generate and return them. For all subsequent messages in the conversation, include the contextId and taskId from the prior response to maintain thread continuity.
LangSmith Tracing: The LangSmith Deployment A2A endpoint automatically converts the A2A contextId to thread_id for LangSmith tracing, grouping all messages in the conversation under a single thread.
For example:
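A minimal sketch of the two-turn id handoff, using illustrative values rather than real server output:

```python
# Illustrative shape of the response to a first message/send call; the server
# returns a task containing the generated ids (values here are hypothetical).
first_response = {
    "jsonrpc": "2.0",
    "id": "1",
    "result": {
        "contextId": "ctx-123",
        "id": "task-456",
        "status": {"state": "completed"},
    },
}

# Pull out both ids from the first response.
context_id = first_response["result"]["contextId"]
task_id = first_response["result"]["id"]

# Follow-up message: include both ids so the server keeps the thread.
follow_up = {
    "jsonrpc": "2.0",
    "id": "2",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "And a follow-up question"}],
            "messageId": "msg-2",
            "contextId": context_id,
            "taskId": task_id,
        }
    },
}
```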
Agent-to-agent communication
Once your agents are running locally via langgraph dev or deployed to production, you can facilitate communication between them using the A2A protocol.
This example demonstrates how two agents can communicate by sending JSON-RPC messages to each other’s A2A endpoints. The script simulates a multi-turn conversation where each agent processes the other’s response and continues the dialogue.
- Two LangGraph agents communicating - Example of two LangGraph agents using the A2A protocol
- Google ADK agent with LangChain agent - Example of a Google ADK agent interacting with a LangChain agent using the A2A protocol
Distributed tracing
When multiple agents communicate over A2A, LangSmith can group all their traces into a single thread, which gives you a unified view of the entire multi-agent conversation.
How contextId maps to thread_id
The Agent Server A2A endpoint automatically converts the A2A contextId to thread_id for LangSmith tracing. This means every message in a conversation, across all participating agents, is grouped under the same thread in LangSmith without any extra configuration on your part.
The flow works as follows:
- On the first message, the client omits contextId. The server generates one and returns it in the response.
- The client passes the contextId in all subsequent messages to maintain conversation continuity.
- Agent Server maps the contextId to thread_id in LangSmith metadata, so all turns appear in the same thread.
Tracing across multiple agents
When agents from different frameworks communicate over A2A, you can unify their traces in LangSmith by sharing the same thread_id across all agents. Use the contextId returned by the first agent as the thread_id for all subsequent requests.
The following code snippet demonstrates the key concepts. For a complete runnable implementation with two agents, refer to the Google ADK + LangChain example.
1. Include contextId and taskId in the message: Pass contextId and taskId inside the message object on follow-up turns so the server can associate them with the ongoing conversation. Omit them on the first message, because the server generates a contextId and returns it in the response.
2. Set thread_id in metadata: Pass thread_id in the top-level metadata field of the JSON-RPC payload, not inside params.
3. Share thread_id across agents: Generate a random thread_id before the first message. Once the server returns a contextId, use it as the thread_id for all subsequent requests, which keeps the A2A conversation context and the LangSmith thread in sync. Pass the same thread_id to every agent so all traces are grouped into one thread.
Receive thread_id in non-LangGraph agents
The previous section covers the client side: propagating thread_id when sending messages. If one of your agents is not built on LangGraph, it also needs to extract and attach the thread_id on the receiving end so its traces land in the same LangSmith thread. Use langsmith.integrations.otel.configure() to set up automatic tracing, and extract the thread_id from incoming A2A request metadata to group traces in the same thread.
Register your app after this middleware.
Set LANGSMITH_API_KEY and optionally LANGSMITH_PROJECT in your environment to enable tracing. All agents in the conversation should use the same project so their traces are visible together.
View traces in LangSmith
After running a multi-agent conversation, open the LangSmith UI and navigate to Threads. All turns from all participating agents will appear under a single thread, identified by the shared thread_id.
Disable A2A
To disable the A2A endpoint, set disable_a2a to true in your langgraph.json configuration file:
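A minimal langgraph.json sketch with the flag added. Placing disable_a2a under the http block is an assumption that mirrors related flags such as disable_mcp; verify the exact location against the configuration reference:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "http": {
    "disable_a2a": true
  }
}
```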

