This page covers how to control where LangSmith sends your traces—from static configuration to dynamic routing and multi-destination fan-out:
Set the destination project statically
As mentioned in the Tracing Concepts section, LangSmith uses the concept of a Project to group traces. If left unspecified, the project is set to default. You can set the LANGSMITH_PROJECT environment variable to configure a custom project name for an entire application run. This should be done before executing your application.
export LANGSMITH_PROJECT=my-custom-project
The LANGSMITH_PROJECT environment variable is only supported in JS SDK versions >= 0.2.16; if you are using an older version, use LANGCHAIN_PROJECT instead.
If the project specified does not exist, it will be created automatically when the first trace is ingested.
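If you prefer to configure this from within your program rather than the shell, you can set the same environment variable in code before any traced code runs (a minimal sketch; the variable names are the ones documented above):

```python
import os

# Must run before any @traceable function executes or any LangSmith
# client is created, so the SDK picks the values up at import time.
os.environ["LANGSMITH_PROJECT"] = "my-custom-project"
os.environ["LANGSMITH_TRACING"] = "true"
```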
Set the destination project dynamically
You can also set the project name at program runtime in various ways, depending on how you are annotating your code for tracing. This is useful when you want to log traces to different projects within the same application.
Setting the project name dynamically using one of the below methods overrides the project name set by the LANGSMITH_PROJECT environment variable.
import openai
from langsmith import traceable
from langsmith.run_trees import RunTree
client = openai.Client()
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith
# Ensure that the LANGSMITH_TRACING environment variable is set for @traceable to work
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
    project_name="My Project",
)
def call_openai(
    messages: list[dict], model: str = "gpt-4.1-mini"
) -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content
# Call the decorated function
call_openai(messages)
# You can also specify the Project via the project_name parameter
# This will override the project_name specified in the @traceable decorator
call_openai(
    messages,
    langsmith_extra={"project_name": "My Overridden Project"},
)
# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable decorated functions, and logs traces to LangSmith automatically.
# Ensure that the LANGSMITH_TRACING environment variable is set for the wrapper to work.
from langsmith import wrappers
wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=messages,
    langsmith_extra={"project_name": "My Project"},
)
# Alternatively, create a RunTree object
# You can set the project name using the project_name parameter
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
    project_name="My Project",
)
chat_completion = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()
Set the destination workspace dynamically
If you need to dynamically route traces to different LangSmith workspaces based on runtime configuration (e.g., routing different users or tenants to separate workspaces), Python users can use workspace-specific LangSmith clients with tracing_context, while TypeScript users can pass a custom client to traceable or use LangChainTracer with callbacks.
This approach is useful for multi-tenant applications where you want to isolate traces by customer, environment, or team at the workspace level.
Generic cross-workspace tracing
Use this approach for general applications where you want to dynamically route traces to different workspaces based on runtime logic (e.g., customer ID, tenant, or environment).
Key components:
- Initialize separate Client instances for each workspace with their respective workspace_id.
- Use tracing_context (Python) or pass the workspace-specific client to traceable (TypeScript) to route traces.
- Pass workspace configuration through your application's runtime config.
import os
from langsmith import Client, traceable, tracing_context
# API key with access to multiple workspaces
api_key = os.getenv("LS_CROSS_WORKSPACE_KEY")
# Initialize clients for different workspaces
workspace_a_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_A_ID>",  # e.g., "abc123..."
)
workspace_b_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_B_ID>",  # e.g., "def456..."
)

# Example: Route based on customer ID
def get_workspace_client(customer_id: str):
    """Route to the appropriate workspace based on customer."""
    if customer_id.startswith("premium_"):
        return workspace_a_client, "premium-customer-traces"
    else:
        return workspace_b_client, "standard-customer-traces"

@traceable
def process_request(data: dict, customer_id: str):
    """Process a customer request with workspace-specific tracing."""
    # Your business logic here
    return {"status": "success", "data": data}

# Use tracing_context to route to the appropriate workspace
def handle_customer_request(customer_id: str, request_data: dict):
    client, project_name = get_workspace_client(customer_id)
    # Everything within this context will be traced to the selected workspace
    with tracing_context(enabled=True, client=client, project_name=project_name):
        result = process_request(request_data, customer_id)
        return result

# Example usage
handle_customer_request("premium_user_123", {"query": "Hello"})
handle_customer_request("standard_user_456", {"query": "Hi"})
Override default workspace for LangSmith deployments
When deploying agents to LangSmith, you can override the default workspace that traces are sent to by using a graph lifespan context manager. This is useful when you want to route traces from a deployed agent to different workspaces based on runtime configuration passed through the config parameter.
import os
import contextlib
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.state import RunnableConfig
from langsmith import Client, tracing_context
# API key with access to multiple workspaces
api_key = os.getenv("LS_CROSS_WORKSPACE_KEY")
# Initialize clients for different workspaces
workspace_a_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_A_ID>",
)
workspace_b_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_B_ID>",
)

# Define configuration schema for workspace routing
class Configuration(TypedDict):
    workspace_id: str

# Define the graph state
class State(TypedDict):
    response: str

def greeting(state: State, config: RunnableConfig) -> State:
    """Generate a workspace-specific greeting."""
    workspace_id = config.get("configurable", {}).get("workspace_id", "workspace_a")
    if workspace_id == "workspace_a":
        response = "Hello from Workspace A!"
    elif workspace_id == "workspace_b":
        response = "Hello from Workspace B!"
    else:
        response = "Hello from the default workspace!"
    return {"response": response}

# Build the base graph
base_graph = (
    StateGraph(state_schema=State, config_schema=Configuration)
    .add_node("greeting", greeting)
    .set_entry_point("greeting")
    .set_finish_point("greeting")
    .compile()
)

@contextlib.asynccontextmanager
async def graph(config):
    """Dynamically route traces to different workspaces based on configuration."""
    # Extract workspace_id from the configuration
    workspace_id = config.get("configurable", {}).get("workspace_id", "workspace_a")
    # Route to the appropriate workspace
    if workspace_id == "workspace_a":
        client = workspace_a_client
        project_name = "production-traces"
    elif workspace_id == "workspace_b":
        client = workspace_b_client
        project_name = "development-traces"
    else:
        client = workspace_a_client
        project_name = "default-traces"
    # Apply the tracing context for the selected workspace
    with tracing_context(enabled=True, client=client, project_name=project_name):
        yield base_graph
# Usage: invoke with different workspace configurations, e.g.:
# async with graph({"configurable": {"workspace_id": "workspace_a"}}) as g:
#     await g.ainvoke({"response": ""})
Key points
- Generic cross-workspace tracing: Use tracing_context (Python) or pass a workspace-specific client to traceable (TypeScript) to dynamically route traces to different workspaces.
- LangGraph cross-workspace tracing: For LangGraph applications, use LangChainTracer with the workspace-specific client and attach it via the callbacks parameter.
- LangSmith deployment override: Use a graph lifespan context manager (Python) to override the default deployment workspace based on runtime configuration.
- Each Client instance maintains its own connection to a specific workspace via the workspace_id parameter.
- You can customize both the workspace and project name for each route.
- This pattern works with any LangSmith-compatible tracing (LangChain, OpenAI, custom functions, etc.).
When deploying with cross-workspace tracing, ensure your service key or PAT has the necessary permissions for all target workspaces. We recommend using a multi-workspace service key for production deployments. For LangSmith deployments, you must add a service key with cross-workspace access to your environment variables (e.g., LS_CROSS_WORKSPACE_KEY) to override the default service key generated by your deployment.
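For example, a deployment might expose the cross-workspace service key through an environment variable. The variable name LS_CROSS_WORKSPACE_KEY matches the code samples above; the value shown is a placeholder:

```shell
# Set in your deployment's environment configuration.
# The key must be a service key with access to every target workspace.
export LS_CROSS_WORKSPACE_KEY="<your-multi-workspace-service-key>"
```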
Write traces to multiple destinations with replicas
Replicas let you send every trace to multiple projects or workspaces at the same time. Unlike the dynamic routing patterns where each trace goes to one destination, replicas duplicate the trace to all configured destinations in parallel.
Replicas are useful when you want to:
- Mirror production traces into a staging or personal project for debugging.
- Write to multiple workspaces for multi-tenant isolation without changing any application code.
- Send traces to the same server under different projects, with per-replica metadata overrides.
Set the LANGSMITH_RUNS_ENDPOINTS environment variable to a JSON value. Two formats are supported:
- Object format: maps each endpoint URL to its API key. JSON object keys must be unique, so each endpoint can appear only once in this format:
export LANGSMITH_RUNS_ENDPOINTS='{
  "https://api.smith.langchain.com": "ls__key_workspace_a",
  "https://my-self-hosted-langsmith.example.com": "ls__key_workspace_b"
}'
- Array format: a list of replica objects, useful when you need multiple replicas pointing at the same URL or when you want to set a project_name per replica:
export LANGSMITH_RUNS_ENDPOINTS='[
{"api_url": "https://api.smith.langchain.com", "api_key": "ls__key1", "project_name": "project-prod"},
{"api_url": "https://api.smith.langchain.com", "api_key": "ls__key2", "project_name": "project-staging"}
]'
You cannot use LANGSMITH_RUNS_ENDPOINTS alongside LANGSMITH_ENDPOINT. If you set both, LangSmith raises an error. Use only one to configure your endpoint.
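Because this configuration is easy to get wrong (invalid JSON, or both endpoint variables set at once), a small startup check can surface mistakes before any traces are dropped. This is a hypothetical helper, not part of the SDK; it only mirrors the constraints described above:

```python
import json
import os

def check_runs_endpoints_config() -> None:
    """Validate replica endpoint configuration before starting the app."""
    raw = os.environ.get("LANGSMITH_RUNS_ENDPOINTS")
    if raw is None:
        return  # single-destination setup; nothing to check
    if os.environ.get("LANGSMITH_ENDPOINT"):
        raise RuntimeError(
            "Set either LANGSMITH_RUNS_ENDPOINTS or LANGSMITH_ENDPOINT, not both."
        )
    config = json.loads(raw)  # raises ValueError if the value is not valid JSON
    if isinstance(config, dict):
        # Object format: endpoint URL -> API key
        replicas = [{"api_url": url, "api_key": key} for url, key in config.items()]
    elif isinstance(config, list):
        # Array format: list of replica objects
        replicas = config
    else:
        raise RuntimeError("LANGSMITH_RUNS_ENDPOINTS must be a JSON object or array.")
    if not replicas:
        raise RuntimeError("LANGSMITH_RUNS_ENDPOINTS is empty.")
```

Note that `json.loads` silently keeps only the last value for a duplicated object key, which is another reason to use the array format when two replicas share a URL.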
You can also pass replicas directly in code, which is useful when destinations vary per request or tenant.
from langsmith import traceable, tracing_context
from langsmith.run_trees import WriteReplica, ApiKeyAuth
@traceable
def my_pipeline(query: str) -> str:
    # Your application logic here
    return f"Answer to: {query}"

replicas = [
    WriteReplica(
        api_url="https://api.smith.langchain.com",
        auth=ApiKeyAuth(api_key="ls__key_workspace_a"),
        project_name="project-prod",
    ),
    WriteReplica(
        api_url="https://api.smith.langchain.com",
        auth=ApiKeyAuth(api_key="ls__key_workspace_b"),
        project_name="project-staging",
        # Optionally override fields on the replicated run
        updates={"metadata": {"environment": "staging"}},
    ),
]

with tracing_context(replicas=replicas):
    my_pipeline("What is LangSmith?")
You can also use the updates field to merge additional fields (such as metadata or tags) into a run for a specific replica only—the primary trace is unchanged. Replica errors are non-fatal: if a replica endpoint is unavailable, LangSmith logs the error without affecting the primary trace.
Auth does not propagate in distributed traces. When a trace spans multiple services, LangSmith forwards replica project_name and updates to downstream services automatically, but not API keys or credentials. Each service must configure its own credentials for replica destinations.
Replicate within the same server (project-only replicas)
If all your replicas use the same LangSmith server, you can omit api_url and auth and specify only a project_name. The SDK reuses the default client credentials:
from langsmith import traceable, tracing_context
from langsmith.run_trees import WriteReplica
@traceable
def my_pipeline(query: str) -> str:
    return f"Answer to: {query}"

with tracing_context(
    replicas=[
        WriteReplica(project_name="project-prod"),
        WriteReplica(project_name="project-staging", updates={"metadata": {"env": "staging"}}),
    ]
):
    my_pipeline("What is LangSmith?")