# Conditional tracing

When the environment variable `LANGSMITH_TRACING=true` is set globally, traces are automatically sent to LangSmith. This guide shows you how to selectively disable or customize tracing for specific requests.

The [`tracing_context`](https://reference.langchain.com/python/langsmith/run_helpers/tracing_context) context manager (Python) and [`tracingEnabled`](https://reference.langchain.com/javascript/classes/langsmith.run_trees.RunTree.html#tracingenabled) option (TypeScript) allow you to override global tracing settings at runtime, without restructuring your code or changing environment variables.

Use conditional tracing when you need to:

* **Comply with data retention policies**: Some clients may require zero data retention for compliance or privacy reasons.
* **Handle sensitive operations**: Disable tracing for operations involving PII, credentials, or confidential data.
* **Implement per-tenant configurations**: Route traces to different projects or apply different settings based on the customer.
* **Control costs**: Disable tracing for low-value requests while maintaining visibility into critical operations.
* **Support feature flags**: Enable tracing only when specific features or experimental code paths are active.

<Note>
  The following sections provide language-specific examples that you can adapt to your application logic and business requirements.
</Note>

<Tabs>
  <Tab title="Python" icon="brand-python">
    ## How tracing context works

    When you use the [`tracing_context`](https://reference.langchain.com/python/langsmith/run_helpers/tracing_context) context manager, it overrides the global tracing configuration for code executed within its scope. This means you can keep automatic tracing enabled globally while selectively controlling tracing behavior for specific function calls.

    There are three priority levels of control:

    1. **`tracing_context(enabled=...)`**: highest priority (context manager for scoped tracing control).
    2. **`ls.configure(enabled=...)`**: middle priority (sets the global default programmatically).
    3. **Environment variables**: lowest priority (`LANGSMITH_TRACING`).
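    To make the precedence concrete, here is a minimal sketch of all three levels. It assumes `ls.configure` is available in your SDK version; the behavior comments restate the priority rules above:

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import os
    import langsmith as ls

    # Lowest priority: the environment variable enables tracing globally
    os.environ["LANGSMITH_TRACING"] = "true"

    # Middle priority: global configuration overrides the env var
    ls.configure(enabled=False)

    @ls.traceable
    def step(text: str) -> str:
        return text.strip()

    step("  a  ")  # not traced: ls.configure(enabled=False) wins over the env var

    # Highest priority: tracing_context overrides both, for its scope only
    with ls.tracing_context(enabled=True):
        step("  b  ")  # traced
    ```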

    ## Disable tracing for specific invocations

    To disable tracing for a specific operation, wrap it in a `tracing_context` with `enabled=False`:

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import langsmith as ls
    from langsmith import traceable

    # LANGSMITH_TRACING=true is set globally

    @traceable
    def my_function(input_text: str):
        return process(input_text)

    # Default invocation: traced
    result = my_function("regular data")

    # Disable tracing for sensitive data
    with ls.tracing_context(enabled=False):
        result = my_function("sensitive data")  # not traced
    ```

    This pattern is useful for one-off cases where you know specific data should not be logged.

    ## Enable conditional tracing based on business logic

    You can dynamically enable or disable tracing based on runtime conditions, such as client settings or request properties.

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import langsmith as ls
    from langsmith import traceable

    @traceable
    def my_function(input_text: str):
        return process(input_text)

    def client_requires_zero_retention(client_id: str) -> bool:
        """
        Check if a client has a zero-retention policy.

        In production, this would query a database, configuration service,
        or feature flag system. Consider caching results for performance.
        """
        # Example: Query from database or config
        zero_retention_clients = get_zero_retention_clients()  # Your implementation
        return client_id in zero_retention_clients

    def handle_request(client_id: str, user_input: str):
        """
        Process a request with conditional tracing based on client requirements.
        """
        should_disable = client_requires_zero_retention(client_id)

        with ls.tracing_context(enabled=not should_disable):
            return my_function(user_input)

    # Example usage
    handle_request("client-a", "some input")  # Traced or not based on client settings
    ```

    ## Customize tracing configuration per request

    You can also customize tracing settings dynamically, such as routing traces to different projects or adding request-specific metadata.

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import langsmith as ls
    from langsmith import traceable

    @traceable
    def my_function(input_text: str):
        return process(input_text)

    def handle_request(client_id: str, user_input: str, region: str):
        """
        Route traces to client-specific projects with custom metadata.
        """
        client_tier = get_client_tier(client_id)  # e.g., "enterprise", "standard"

        with ls.tracing_context(
            enabled=True,
            project_name=f"client-{client_id}",
            tags=["production", f"tier-{client_tier}", f"region-{region}"],
            metadata={
                "client_id": client_id,
                "region": region,
                "tier": client_tier
            }
        ):
            return my_function(user_input)

    # Traces go to "client-abc" project with custom tags and metadata
    handle_request("abc", "some input", "us-west")
    ```

    This pattern is useful for:

    * **Multi-tenant applications**: Isolate traces by customer in separate projects
    * **Regional deployments**: Track performance and behavior by geographic region
    * **Feature branches**: Route experimental feature traces to dedicated projects
    * **User segmentation**: Analyze behavior by user tier, cohort, or A/B test group

    ## Work with automatic tracing

    The [`tracing_context`](https://reference.langchain.com/python/langsmith/run_helpers/tracing_context) context manager works with automatic tracing. You can keep `LANGSMITH_TRACING=true` set globally and use `tracing_context` to override settings for specific requests:

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import os
    import langsmith as ls

    # Global environment variable set
    os.environ["LANGSMITH_TRACING"] = "true"

    @ls.traceable
    def process_data(data: str):
        return data.upper()

    # Automatically traced (respects LANGSMITH_TRACING)
    process_data("hello")

    # Override global setting - disable for this call
    with ls.tracing_context(enabled=False):
        process_data("sensitive")  # not traced

    # Override global setting - enable with custom config
    with ls.tracing_context(
        enabled=True,
        project_name="special-project"
    ):
        process_data("important")  # Traced to "special-project"
    ```

    ## Nest tracing contexts

    When you nest `tracing_context` blocks, the innermost context takes precedence.

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import langsmith as ls

    @ls.traceable
    def inner_function(data: str):
        return data

    @ls.traceable
    def outer_function(data: str):
        # This call respects the inner context
        return inner_function(data)

    # Outer context disables tracing
    with ls.tracing_context(enabled=False):
        # But inner context re-enables it
        with ls.tracing_context(enabled=True):
            outer_function("data")  # is traced
    ```

    This can be useful when you want to temporarily enable tracing for debugging within a normally non-traced section.

    ## Customize tracing in deployed agents

    Tracing is enabled by default within LangSmith Deployment's [Agent Server](/langsmith/agent-server). When using a [factory function](/langsmith/graph-rebuild), you can wrap the yielded graph with `tracing_context` to control tracing per-execution. This is useful for adding custom metadata, disabling tracing entirely, or customizing tracing based on the authenticated user.

    ### Disable tracing for a graph

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import contextlib
    import langsmith as ls
    from langgraph_sdk.runtime import ServerRuntime


    @contextlib.asynccontextmanager
    async def make_graph(runtime: ServerRuntime):
        graph = build_my_graph()

        # You can use tracing_context to dynamically enable/disable tracing,
        # set metadata or tags, override the tracing project, etc.
        with ls.tracing_context(enabled=False, metadata={"foo": "bar"}):
            yield graph
    ```

    ### Per-user tracing

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import contextlib
    import langsmith as ls
    from langgraph_sdk.runtime import ServerRuntime

    def get_project_for_user(user_id: str) -> str | None:
        ...
        return "my-project"

    graph = build_my_graph()

    @contextlib.asynccontextmanager
    async def make_graph(runtime: ServerRuntime):
        user = runtime.user
        # Route traces to a different project depending on user or disable tracing entirely
        project_name = get_project_for_user(user.identity)

        if project_name is None:
            with ls.tracing_context(enabled=False):
                yield graph
        else:
            with ls.tracing_context(
                enabled=True,
                project_name=project_name,
                metadata={"user_id": user.identity, "foo": "bar"},
            ):
                yield graph
    ```

    ## Reusable tracing wrapper

    Create a decorator to automatically apply conditional tracing logic.

    ```python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import functools
    import langsmith as ls
    from langsmith import traceable

    def conditional_trace(check_function):
        """
        Decorator that conditionally traces based on a check function.

        Args:
            check_function: Function that returns True if tracing should be enabled
        """
        def decorator(func):
            traced_func = traceable(func)

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                should_trace = check_function(*args, **kwargs)
                with ls.tracing_context(enabled=should_trace):
                    return traced_func(*args, **kwargs)
            return wrapper
        return decorator

    # Usage
    def should_trace_client(client_id: str, *args, **kwargs) -> bool:
        return not client_requires_zero_retention(client_id)

    @conditional_trace(should_trace_client)
    def process_request(client_id: str, data: str):
        return data.upper()

    # Automatically applies conditional tracing based on client_id
    process_request("client-a", "some data")
    ```
  </Tab>

  <Tab title="TypeScript" icon="brand-javascript">
    ## How `tracingEnabled` works

    In TypeScript, you control tracing per function by passing the [`tracingEnabled`](https://reference.langchain.com/javascript/classes/langsmith.run_trees.RunTree.html#tracingenabled) option in the config object when calling `traceable()`. This allows you to selectively enable or disable tracing at the function level.

    Tracing is controlled per function through a two-level priority system:

    1. **`tracingEnabled` option**: highest priority (set it in the `traceable()` config object).
    2. **Environment variables**: lowest priority (`LANGSMITH_TRACING`).

    ## Disable tracing for specific invocations

    To disable tracing for a specific operation, create a version of your traceable function with `tracingEnabled: false`:

    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import { traceable } from "langsmith/traceable";

    const myFunction = traceable(
        (inputText: string) => {
            return process(inputText);
        },
        { name: "my_function" }
    );

    // Default invocation - is traced
    await myFunction("regular data");

    // Disable tracing for sensitive data
    const myFunctionNoTrace = traceable(
        (inputText: string) => {
            return process(inputText);
        },
        { name: "my_function", tracingEnabled: false }
    );

    await myFunctionNoTrace("sensitive data");  // not traced
    ```

    This pattern is useful for one-off cases where you know specific data should not be logged.

    ## Enable conditional tracing based on business logic

    In many applications, you need to dynamically control tracing based on runtime conditions—such as client privacy requirements, regulatory compliance, or feature flags.

    In TypeScript, the most efficient approach is to create both traced and non-traced variants of your function upfront, then select between them at runtime based on your business logic. This avoids the performance overhead of creating new traced wrappers on every request while still providing fine-grained control over when tracing occurs. For example:

    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import { traceable } from "langsmith/traceable";

    // Define the core logic once
    function processText(inputText: string): string {
        // Your actual processing logic
        return inputText.toUpperCase();
    }

    // Create traced and non-traced variants upfront
    const myFunction = traceable(processText, { name: "my_function" });
    const myFunctionNoTrace = traceable(processText, {
        name: "my_function",
        tracingEnabled: false
    });

    function clientRequiresZeroRetention(clientId: string): boolean {
        /**
         * Check if a client has a zero-retention policy.
         *
         * In production, this would query a database, configuration service,
         * or feature flag system. Consider caching results for performance.
         */
        const zeroRetentionClients = getZeroRetentionClients();  // Your implementation
        return zeroRetentionClients.includes(clientId);
    }

    async function handleRequest(clientId: string, userInput: string) {
        /**
         * Process a request with conditional tracing based on client requirements.
         * Efficiently selects pre-created traced or non-traced variant.
         */
        const shouldDisable = clientRequiresZeroRetention(clientId);

        // Select the appropriate pre-created variant
        const fn = shouldDisable ? myFunctionNoTrace : myFunction;
        return await fn(userInput);
    }

    // Example usage
    await handleRequest("client-a", "some input");  // Traced or not based on client settings
    ```

    ## Work with automatic tracing

    The [`tracingEnabled`](https://reference.langchain.com/javascript/classes/langsmith.run_trees.RunTree.html#tracingenabled) option works seamlessly with automatic tracing. You can keep `LANGSMITH_TRACING=true` set globally and use `tracingEnabled` to override settings for specific functions.

    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import { traceable } from "langsmith/traceable";

    // Global tracing enabled via environment
    process.env.LANGSMITH_TRACING = "true";

    const processData = traceable(
        (data: string) => {
            return data.toUpperCase();
        },
        { name: "process_data" }
    );

    // Automatically traced (respects LANGSMITH_TRACING)
    await processData("hello");

    // Override global setting - disable for this call
    const processDataNoTrace = traceable(
        (data: string) => {
            return data.toUpperCase();
        },
        { name: "process_data", tracingEnabled: false }
    );

    await processDataNoTrace("sensitive");  // not traced

    // Override global setting - enable with custom config
    const processDataCustom = traceable(
        (data: string) => {
            return data.toUpperCase();
        },
        {
            name: "process_data",
            project_name: "special-project",
            tracingEnabled: true
        }
    );

    await processDataCustom("important");  // Traced to "special-project"
    ```
  </Tab>
</Tabs>

## Comparison with sampling

Conditional tracing and [sampling](/langsmith/sample-traces) serve different purposes:

| Feature            | Conditional tracing                               | Sampling                                     |
| ------------------ | ------------------------------------------------- | -------------------------------------------- |
| **Control**        | Deterministic (explicit enable/disable)           | Probabilistic (random sampling)              |
| **Use case**       | Business logic, compliance, per-request decisions | Cost optimization, high-volume observability |
| **Predictability** | Guaranteed behavior for specific requests         | Statistical representation of traffic        |
| **Configuration**  | Runtime code logic                                | Environment variable or client config        |

You can combine both approaches for fine-grained control.

## Related

* [Trace without environment variables](/langsmith/trace-without-env-vars): Configure tracing programmatically instead of using environment variables.
* [Set a sampling rate for traces](/langsmith/sample-traces): Probabilistically sample traces to reduce volume.
* [Mask inputs and outputs](/langsmith/mask-inputs-outputs): Hide sensitive data in traces instead of disabling tracing entirely.
* [Add metadata and tags to traces](/langsmith/add-metadata-tags): Categorize and filter traces with custom attributes.

