You can change the destination project of your traces both statically through environment variables and dynamically at runtime.
Set the destination project statically
As mentioned in the Tracing Concepts section, LangSmith uses the concept of a Project to group traces. If left unspecified, traces are logged to the default project. You can set the LANGSMITH_PROJECT environment variable to configure a custom project name for an entire application run. Set it before executing your application.
export LANGSMITH_PROJECT=my-custom-project
The LANGSMITH_PROJECT flag is only supported in JS SDK versions >= 0.2.16; use LANGCHAIN_PROJECT instead if you are using an older version.
If the project specified does not exist, it will be created automatically when the first trace is ingested.
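If exporting a shell variable is inconvenient (for example, in a notebook), a minimal sketch of the same configuration from Python; set it before importing or running any traced code:
import os
# Equivalent to the shell export above; must run before any traces are created.
os.environ["LANGSMITH_PROJECT"] = "my-custom-project"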
Set the destination project dynamically
You can also set the project name at program runtime in various ways, depending on how you are annotating your code for tracing. This is useful when you want to log traces to different projects within the same application.
Setting the project name dynamically using one of the below methods overrides the project name set by the LANGSMITH_PROJECT environment variable.
import openai
from langsmith import traceable
from langsmith.run_trees import RunTree
client = openai.Client()
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith
# Ensure that the LANGSMITH_TRACING environment variable is set for @traceable to work
@traceable(
run_type="llm",
name="OpenAI Call Decorator",
project_name="My Project"
)
def call_openai(
messages: list[dict], model: str = "gpt-4o-mini"
) -> str:
return client.chat.completions.create(
model=model,
messages=messages,
).choices[0].message.content
# Call the decorated function
call_openai(messages)
# You can also specify the project at call time via the langsmith_extra argument
# This overrides the project_name set in the @traceable decorator
call_openai(
messages,
langsmith_extra={"project_name": "My Overridden Project"},
)
# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable decorated functions, and logs traces to LangSmith automatically.
# Ensure that the LANGSMITH_TRACING environment variable is set for the wrapper to work.
from langsmith import wrappers
wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
langsmith_extra={"project_name": "My Project"},
)
# Alternatively, create a RunTree object
# You can set the project name using the project_name parameter
rt = RunTree(
run_type="llm",
name="OpenAI Call RunTree",
inputs={"messages": messages},
project_name="My Project"
)
chat_completion = client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()
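Another option is the tracing_context context manager from the langsmith package (also used in the workspace examples below). A minimal sketch, reusing the call_openai function and messages defined above:
from langsmith import tracing_context
# Every run started inside this block is logged to the given project,
# overriding the LANGSMITH_PROJECT environment variable.
with tracing_context(project_name="My Project"):
    call_openai(messages)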
Set the destination workspace dynamically
If you need to dynamically route traces to different LangSmith workspaces based on runtime configuration (e.g., routing different users or tenants to separate workspaces), Python users can use workspace-specific LangSmith clients with tracing_context, while TypeScript users can pass a custom client to traceable or use LangChainTracer with callbacks.
This approach is useful for multi-tenant applications where you want to isolate traces by customer, environment, or team at the workspace level.
Prerequisites
- An API key with access to every workspace you want to route traces to (the examples below read it from the LS_CROSS_WORKSPACE_KEY environment variable).
- The IDs of the target workspaces, which you pass to each Client as workspace_id.
Generic cross-workspace tracing
Use this approach for general applications where you want to dynamically route traces to different workspaces based on runtime logic (e.g., customer ID, tenant, or environment).
Key components:
- Initialize separate Client instances for each workspace with their respective workspace_id.
- Use tracing_context (Python) or pass the workspace-specific client to traceable (TypeScript) to route traces.
- Pass workspace configuration through your application’s runtime config.
import os
from langsmith import Client, traceable, tracing_context
# API key with access to multiple workspaces
api_key = os.getenv("LS_CROSS_WORKSPACE_KEY")
# Initialize clients for different workspaces
workspace_a_client = Client(
api_key=api_key,
api_url="https://api.smith.langchain.com",
workspace_id="<YOUR_WORKSPACE_A_ID>" # e.g., "abc123..."
)
workspace_b_client = Client(
api_key=api_key,
api_url="https://api.smith.langchain.com",
workspace_id="<YOUR_WORKSPACE_B_ID>" # e.g., "def456..."
)
# Example: Route based on customer ID
def get_workspace_client(customer_id: str):
"""Route to appropriate workspace based on customer."""
if customer_id.startswith("premium_"):
return workspace_a_client, "premium-customer-traces"
else:
return workspace_b_client, "standard-customer-traces"
@traceable
def process_request(data: dict, customer_id: str):
"""Process a customer request with workspace-specific tracing."""
# Your business logic here
return {"status": "success", "data": data}
# Use tracing_context to route to the appropriate workspace
def handle_customer_request(customer_id: str, request_data: dict):
client, project_name = get_workspace_client(customer_id)
# Everything within this context will be traced to the selected workspace
with tracing_context(enabled=True, client=client, project_name=project_name):
result = process_request(request_data, customer_id)
return result
# Example usage
handle_customer_request("premium_user_123", {"query": "Hello"})
handle_customer_request("standard_user_456", {"query": "Hi"})
Override default workspace for LangSmith deployments
When deploying agents to LangSmith, you can override the default workspace that traces are sent to by using a graph lifespan context manager. This is useful when you want to route traces from a deployed agent to different workspaces based on runtime configuration passed through the config parameter.
import os
import contextlib
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.state import RunnableConfig
from langsmith import Client, tracing_context
# API key with access to multiple workspaces
api_key = os.getenv("LS_CROSS_WORKSPACE_KEY")
# Initialize clients for different workspaces
workspace_a_client = Client(
api_key=api_key,
api_url="https://api.smith.langchain.com",
workspace_id="<YOUR_WORKSPACE_A_ID>"
)
workspace_b_client = Client(
api_key=api_key,
api_url="https://api.smith.langchain.com",
workspace_id="<YOUR_WORKSPACE_B_ID>"
)
# Define configuration schema for workspace routing
class Configuration(TypedDict):
workspace_id: str
# Define the graph state
class State(TypedDict):
response: str
def greeting(state: State, config: RunnableConfig) -> State:
"""Generate a workspace-specific greeting."""
workspace_id = config.get("configurable", {}).get("workspace_id", "workspace_a")
if workspace_id == "workspace_a":
response = "Hello from Workspace A!"
elif workspace_id == "workspace_b":
response = "Hello from Workspace B!"
else:
response = "Hello from the default workspace!"
return {"response": response}
# Build the base graph
base_graph = (
StateGraph(state_schema=State, config_schema=Configuration)
.add_node("greeting", greeting)
.set_entry_point("greeting")
.set_finish_point("greeting")
.compile()
)
@contextlib.asynccontextmanager
async def graph(config):
"""Dynamically route traces to different workspaces based on configuration."""
# Extract workspace_id from the configuration
workspace_id = config.get("configurable", {}).get("workspace_id", "workspace_a")
# Route to the appropriate workspace
if workspace_id == "workspace_a":
client = workspace_a_client
project_name = "production-traces"
elif workspace_id == "workspace_b":
client = workspace_b_client
project_name = "development-traces"
else:
client = workspace_a_client
project_name = "default-traces"
# Apply the tracing context for the selected workspace
with tracing_context(enabled=True, client=client, project_name=project_name):
yield base_graph
# Usage: when deployed, the platform calls this graph factory with each run's configuration.
# To exercise it locally with different workspace configurations:
#   config = {"configurable": {"workspace_id": "workspace_a"}}  # or "workspace_b"
#   async with graph(config) as g:
#       await g.ainvoke({"response": ""}, config=config)
Key points
- Generic cross-workspace tracing: Use tracing_context (Python) or pass a workspace-specific client to traceable (TypeScript) to dynamically route traces to different workspaces.
- LangGraph cross-workspace tracing: For LangGraph applications, use LangChainTracer with the workspace-specific client and attach it via the callbacks parameter, as sketched after this list.
- LangSmith deployment override: Use a graph lifespan context manager (Python) to override the default deployment workspace based on runtime configuration.
- Each Client instance maintains its own connection to a specific workspace via the workspace_id parameter (workspaceId in TypeScript).
- You can customize both the workspace and project name for each route.
- This pattern works with any LangSmith-compatible tracing (LangChain, OpenAI, custom functions, etc.).
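For instance, here is a minimal sketch of the LangChainTracer approach, reusing the workspace_a_client and the compiled base_graph defined in the examples above (the import path assumes langchain_core is installed):
from langchain_core.tracers import LangChainTracer
# Bind a tracer to a specific workspace client and project, then attach it via
# the callbacks parameter so this invocation is traced to that workspace.
tracer = LangChainTracer(client=workspace_a_client, project_name="premium-customer-traces")
result = base_graph.invoke(
    {"response": ""},
    config={"callbacks": [tracer], "configurable": {"workspace_id": "workspace_a"}},
)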
When deploying with cross-workspace tracing, ensure your API key has the necessary permissions for all target workspaces. For LangSmith deployments, you must add an API key with cross-workspace access to your environment variables (e.g., LS_CROSS_WORKSPACE_KEY) to override the default service key generated by your deployment.
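As a small illustrative sketch (not required by LangSmith), you can fail fast at startup if the key is missing:
import os
# Hypothetical startup check: without this key, the clients above would fall back
# to the deployment's default credentials and traces would not be routed.
if not os.getenv("LS_CROSS_WORKSPACE_KEY"):
    raise RuntimeError("LS_CROSS_WORKSPACE_KEY is not set; cross-workspace tracing is disabled.")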