LangSmith integrations handle tracing automatically. Custom instrumentation lets you define exactly which functions are traced, control what inputs and outputs are logged, and structure your trace hierarchy without rewriting your application logic.
If you are using LangChain (either Python or JS/TS), go directly to the LangChain-specific instructions.

Prerequisites

Before tracing, set the following environment variables:
  • LANGSMITH_TRACING=true: enables tracing. Set this to toggle tracing on and off without changing your code.
  • LANGSMITH_API_KEY: your LangSmith API key.
To disable tracing, remove the LANGSMITH_TRACING environment variable. This does not affect RunTree objects or direct API usage, which are low-level and not controlled by the tracing toggle.
By default, traces are logged to a project named default. To log to a different project, see Log traces to a specific project.
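For example, in your shell (the values below are placeholders; LANGSMITH_PROJECT is the environment-variable route for choosing a project):
```shell
# Enable tracing and authenticate; replace the key with your own.
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-api-key>"

# Optional: route traces to a project other than "default".
export LANGSMITH_PROJECT="my-project"
```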

Use @traceable / traceable

The recommended approach is the @traceable decorator (Python) or traceable wrapper (TypeScript). Apply it to any function to make it a traced run, and LangSmith handles context propagation across nested calls automatically. The following example traces a simple pipeline: run_pipeline calls format_prompt to build the messages, invoke_llm to call the model, and parse_output to extract the result. Each function is individually traced, and because they’re called from within run_pipeline (also traced), LangSmith automatically nests them as child runs. invoke_llm uses run_type="llm" to mark it as an LLM call so LangSmith can render token counts and latency correctly:
from langsmith import traceable
from openai import Client

openai = Client()

@traceable
def format_prompt(subject):
  return [
      {
          "role": "system",
          "content": "You are a helpful assistant.",
      },
      {
          "role": "user",
          "content": f"What's a good name for a store that sells {subject}?"
      }
  ]

@traceable(run_type="llm")
def invoke_llm(messages):
  return openai.chat.completions.create(
      messages=messages, model="gpt-4.1-mini", temperature=0
  )

@traceable
def parse_output(response):
  return response.choices[0].message.content

@traceable
def run_pipeline():
  messages = format_prompt("colorful socks")
  response = invoke_llm(messages)
  return parse_output(response)

run_pipeline()
In LangSmith, you’ll see a run_pipeline trace with format_prompt, invoke_llm, and parse_output as nested child runs.
Note: in TypeScript, traceable returns an async function even when wrapping a synchronous one (e.g., a formatPrompt equivalent of the example above), so you must await the call to ensure the trace is logged correctly.

Use the trace context manager (Python only)

In Python, you can use the trace context manager to log traces to LangSmith. This is useful when any of the following apply:
  1. You want to log traces for a specific block of code.
  2. You want control over the inputs, outputs, and other attributes of the trace.
  3. Using a decorator or wrapper is not feasible.
The context manager integrates seamlessly with the traceable decorator and wrap_openai wrapper, so you can use them together in the same application. The following example shows all three used together. wrap_openai wraps the OpenAI client so its calls are traced automatically. my_tool uses @traceable with run_type="tool" and a custom name to appear correctly in the trace. chat_pipeline itself is not decorated—instead, ls.trace wraps the call, letting you pass the project name and inputs explicitly and set outputs manually via rt.end():
import openai
import langsmith as ls
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@ls.traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-4.1-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

app_inputs = {"input": "Can you summarize this morning's meetings?"}

with ls.trace("Chat Pipeline", "chain", project_name="my_test", inputs=app_inputs) as rt:
    output = chat_pipeline("Can you summarize this morning's meetings?")
    rt.end(outputs={"output": output})

Use the RunTree API

Another, more explicit way to log traces to LangSmith is the RunTree API. This API gives you more control over your tracing: you manually create runs and child runs to assemble your trace. You still need to set your LANGSMITH_API_KEY, but LANGSMITH_TRACING is not required for this method. This method is not recommended, because it is easier to make mistakes when propagating trace context manually.
import openai
from langsmith.run_trees import RunTree

# This can be a user input to your app
question = "Can you summarize this morning's meetings?"

# Create a top-level run
pipeline = RunTree(
  name="Chat Pipeline",
  run_type="chain",
  inputs={"question": question}
)
pipeline.post()

# This can be retrieved in a retrieval step
context = "During this morning's meeting, we solved all world conflict."
messages = [
  { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
  { "role": "user", "content": f"Question: {question}\nContext: {context}"}
]

# Create a child run
child_llm_run = pipeline.create_child(
  name="OpenAI Call",
  run_type="llm",
  inputs={"messages": messages},
)
child_llm_run.post()

# Generate a completion
client = openai.Client()
chat_completion = client.chat.completions.create(
  model="gpt-4.1-mini", messages=messages
)

# End the runs and log them
child_llm_run.end(outputs=chat_completion)
child_llm_run.patch()
pipeline.end(outputs={"answer": chat_completion.choices[0].message.content})
pipeline.patch()

Example usage

You can extend the utilities from the previous sections to trace any code. For example, to trace every public method of a class:
from typing import Any, Type, TypeVar

from langsmith import traceable

T = TypeVar("T")

def traceable_cls(cls: Type[T]) -> Type[T]:
    """Instrument all public methods in a class."""
    def wrap_method(name: str, method: Any) -> Any:
        if callable(method) and not name.startswith("__"):
            return traceable(name=f"{cls.__name__}.{name}")(method)
        return method

    # Handle __dict__ case
    for name in dir(cls):
        if not name.startswith("_"):
            try:
                method = getattr(cls, name)
                setattr(cls, name, wrap_method(name, method))
            except AttributeError:
                # Skip attributes that can't be set (e.g., some descriptors)
                pass

    # Handle __slots__ case
    if hasattr(cls, "__slots__"):
        for slot in cls.__slots__:  # type: ignore[attr-defined]
            if not slot.startswith("__"):
                try:
                    method = getattr(cls, slot)
                    setattr(cls, slot, wrap_method(slot, method))
                except AttributeError:
                    # Skip slots that don't have a value yet
                    pass

    return cls

@traceable_cls
class MyClass:
    def __init__(self, some_val: int):
        self.some_val = some_val

    def combine(self, other_val: int):
        return self.some_val + other_val

# See trace: https://smith.langchain.com/public/882f9ecf-5057-426a-ae98-0edf84fdcaf9/r
MyClass(13).combine(29)

Ensure all traces are submitted before exiting

LangSmith performs tracing in a background thread to avoid blocking your production application. As a result, your process may exit before all traces have been posted to LangSmith. The following options ensure all traces are submitted before your application exits.

Use the LangSmith SDK

If you are using the LangSmith SDK standalone, you can use the flush method before exit:
import asyncio

from langsmith import Client, traceable

client = Client()

@traceable(client=client)
async def my_traced_func():
  # Your code here...
  pass

async def main():
  try:
    await my_traced_func()
  finally:
    # Client.flush is synchronous in the Python SDK; do not await it.
    client.flush()

asyncio.run(main())

Use LangChain

If you are using LangChain, please refer to our LangChain tracing guide. If you prefer a video tutorial, check out the Tracing Basics video from the Introduction to LangSmith Course.