When you call a deployed Agent Server from another service, you can propagate trace context so that the entire request appears as a single unified trace in LangSmith. This uses LangSmith’s distributed tracing capabilities, which propagate context via HTTP headers.

How it works

Distributed tracing links runs across services using context propagation headers:
  1. The client extracts the trace context from the current run and sends it as HTTP headers.
  2. The server reads those headers and exposes them in the run's config as the langsmith-trace and langsmith-project configurable values, which your graph can use to set the tracing context for that run.
The headers used are:
  • langsmith-trace: Contains the trace’s dotted order.
  • baggage: Specifies the LangSmith project and other optional tags and metadata.
Distributed tracing is opt-in on both sides: the client must send the trace context, and the server must read it.
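Conceptually, the server maps the incoming headers into the run's configurable. Here is a rough, pure-Python sketch of that mapping; the Agent Server performs this for you, and the function name and the baggage parsing below are simplified assumptions, not a real API:

```python
def headers_to_configurable(headers: dict) -> dict:
    """Illustrative only: map propagation headers into a run config."""
    configurable = {}
    trace = headers.get("langsmith-trace")
    if trace:
        # The dotted order identifying the parent run in LangSmith
        configurable["langsmith-trace"] = trace
    # baggage carries comma-separated key=value pairs, e.g. the project name
    for item in headers.get("baggage", "").split(","):
        key, _, value = item.partition("=")
        if key.strip() == "langsmith-project":
            configurable["langsmith-project"] = value
    return {"configurable": configurable}
```

The resulting `configurable` dict is what your graph factory reads in the next section.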

Configure the server

To accept distributed trace context, your graph must read the trace headers from the config and set the tracing context. The headers are passed through the configurable field as langsmith-trace and langsmith-project.
import contextlib
import langsmith as ls
from langgraph.graph import StateGraph, MessagesState

# Define your graph
builder = StateGraph(MessagesState)
# ... add nodes and edges ...
my_graph = builder.compile()

@contextlib.asynccontextmanager
async def graph(config):
    configurable = config.get("configurable", {})
    parent_trace = configurable.get("langsmith-trace")
    parent_project = configurable.get("langsmith-project")
    # If you want to also include metadata and tags from the client
    metadata = configurable.get("langsmith-metadata")
    tags = configurable.get("langsmith-tags")
    with ls.tracing_context(parent=parent_trace, project_name=parent_project, metadata=metadata, tags=tags):
        yield my_graph
Export this graph function in your langgraph.json:
{
  "graphs": {
    "agent": "./src/agent.py:graph"
  }
}

Connect from the client

Set distributed_tracing=True when initializing RemoteGraph. This automatically propagates trace headers on all requests.
from langgraph.graph import StateGraph
from langgraph.pregel.remote import RemoteGraph

remote_graph = RemoteGraph(
    "agent",
    url="<DEPLOYMENT_URL>",
    distributed_tracing=True,  # Enable trace propagation
)

def subgraph_node(query: str):
    # Trace context is automatically propagated
    return remote_graph.invoke({
        "messages": [{"role": "user", "content": query}]
    })['messages'][-1]['content']

# The RemoteGraph is called in the context of some ongoing work.
# This could be a parent LangGraph agent, code traced with `@ls.traceable`,
# or any other instrumented code.
graph = (
    StateGraph(str)
    .add_node(subgraph_node)
    .add_edge("__start__", "subgraph_node")
    .compile()
)
# The remote graph's execution will appear as a child of this trace
result = graph.invoke("What's the weather in SF?")
