If you are using LangChain (either Python or JS/TS), go directly to the LangChain-specific instructions.
Prerequisites
Before tracing, set the following environment variables:
- LANGSMITH_TRACING=true: enables tracing. Set this to toggle tracing on and off without changing your code.
- LANGSMITH_API_KEY: your LangSmith API key.
To disable tracing, remove the LANGSMITH_TRACING environment variable. This does not affect RunTree objects or direct API usage, which are low-level and not controlled by the tracing toggle.
Use @traceable / traceable
The recommended approach is the @traceable decorator (Python) or traceable wrapper (TypeScript). Apply it to any function to make it a traced run, and LangSmith handles context propagation across nested calls automatically.
The following example traces a simple pipeline: run_pipeline calls format_prompt to build the messages, invoke_llm to call the model, and parse_output to extract the result.
Each function is individually traced, and because they’re called from within run_pipeline (also traced), LangSmith automatically nests them as child runs. invoke_llm uses run_type="llm" to mark it as an LLM call so LangSmith can render token counts and latency correctly:
The resulting trace shows run_pipeline with format_prompt, invoke_llm, and parse_output as nested child runs.
In TypeScript, when you wrap a sync function with traceable (e.g., formatPrompt in the previous example), use the await keyword when calling it to ensure the trace is logged correctly.
Use the trace context manager (Python only)
In Python, you can use the trace context manager to log traces to LangSmith. This is useful in situations where:
- You want to log traces for a specific block of code.
- You want control over the inputs, outputs, and other attributes of the trace.
- It is not feasible to use a decorator or wrapper.
- Any or all of the above.
The trace context manager is compatible with the traceable decorator and wrap_openai wrapper, so you can use them together in the same application.
The following example shows all three used together. wrap_openai wraps the OpenAI client so its calls are traced automatically. my_tool uses @traceable with run_type="tool" and a custom name to appear correctly in the trace. chat_pipeline itself is not decorated—instead, ls.trace wraps the call, letting you pass the project name and inputs explicitly and set outputs manually via rt.end():
Use the RunTree API
Another, more explicit way to log traces to LangSmith is via the RunTree API. This API allows you more control over your tracing - you can manually create runs and child runs to assemble your trace. You still need to set your LANGSMITH_API_KEY, but LANGSMITH_TRACING is not necessary for this method.
This method is not recommended, as it’s easier to make mistakes in propagating trace context.
Example usage
You can extend the utilities explained in the previous section to trace any code. The following code shows some example extensions, such as tracing any public method in a class.
Ensure all traces are submitted before exiting
LangSmith performs tracing in a background thread to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. Here are some options for ensuring all traces are submitted before exiting your application.
Use the LangSmith SDK
If you are using the LangSmith SDK standalone, you can use the flush method before exiting:
Use LangChain
If you are using LangChain, please refer to our LangChain tracing guide. If you prefer a video tutorial, check out the Tracing Basics video from the Introduction to LangSmith Course.

