
# Trace without setting environment variables

As mentioned in other guides, the following environment variables let you configure whether tracing is enabled, the API endpoint, the API key, and the tracing project:

* `LANGSMITH_TRACING`
* `LANGSMITH_API_KEY`
* `LANGSMITH_ENDPOINT`
* `LANGSMITH_PROJECT`
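
For reference, the typical environment-variable setup looks like the following (a minimal sketch; the key, endpoint, and project name are placeholder values you would replace with your own):

```python
import os

# Placeholder values -- substitute your own key, endpoint, and project name.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "YOUR_LANGSMITH_API_KEY"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_PROJECT"] = "my-project"
```

The rest of this guide shows how to achieve the same configuration programmatically, without relying on these variables.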

If you need to trace runs with a custom configuration, are working in an environment that doesn’t support typical environment variables (such as Cloudflare Workers), or would simply prefer not to rely on environment variables, LangSmith allows you to configure tracing programmatically.

<Warning>
  Due to a number of requests for finer-grained control of tracing via the `trace` context manager, **we changed the behavior** of `with trace` to honor the `LANGSMITH_TRACING` environment variable in version **0.1.95** of the Python SDK. You can find more details in the [release notes](https://github.com/langchain-ai/langsmith-sdk/releases/tag/v0.1.95). The recommended way to enable or disable tracing without setting environment variables is to use the `with tracing_context` context manager, as shown in the example below.
</Warning>

* Python: The recommended way to do this in Python is to use the `tracing_context` context manager. This works for both code annotated with `traceable` and code within the `trace` context manager.
* TypeScript: You can pass in both the client and the `tracingEnabled` flag to the `traceable` decorator.

<CodeGroup>
  ```python Python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import openai
  from langsmith import Client, tracing_context, traceable
  from langsmith.wrappers import wrap_openai

  langsmith_client = Client(
    api_key="YOUR_LANGSMITH_API_KEY",  # This can be retrieved from a secrets manager
    api_url="https://api.smith.langchain.com",  # Update appropriately for self-hosted installations or the EU region
    workspace_id="YOUR_WORKSPACE_ID", # Must be specified for API keys scoped to multiple workspaces
  )

  client = wrap_openai(openai.Client())

  @traceable(run_type="tool", name="Retrieve Context")
  def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

  @traceable
  def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-5.4-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

  # Can set to False to disable tracing here without changing code structure
  with tracing_context(enabled=True):
    # Use langsmith_extra to pass in a custom client
    chat_pipeline("Can you summarize this morning's meetings?", langsmith_extra={"client": langsmith_client})
  ```

  ```typescript TypeScript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { Client } from "langsmith";
  import { traceable } from "langsmith/traceable";
  import { wrapOpenAI } from "langsmith/wrappers";
  import { OpenAI } from "openai";

  const client = new Client({
      apiKey: "YOUR_API_KEY",  // This can be retrieved from a secrets manager
      apiUrl: "https://api.smith.langchain.com",  // Update appropriately for self-hosted installations or the EU region
  });

  const openai = wrapOpenAI(new OpenAI());

  const tool = traceable((question: string) => {
      return "During this morning's meeting, we solved all world conflict.";
  }, { name: "Retrieve Context", runType: "tool" });

  const pipeline = traceable(
      async (question: string) => {
          const context = await tool(question);

          const completion = await openai.chat.completions.create({
              model: "gpt-5.4-mini",
              messages: [
                  { role: "system" as const, content: "You are a helpful assistant. Please respond to the user's request only based on the given context." },
                  { role: "user" as const, content: `Question: ${question}\nContext: ${context}`}
              ]
          });

          return completion.choices[0].message.content;
      },
      { name: "Chat", client, tracingEnabled: true }
  );

  await pipeline("Can you summarize this morning's meetings?");
  ```
</CodeGroup>

If you prefer a video tutorial, check out the [Alternative Ways to Trace video](https://academy.langchain.com/pages/intro-to-langsmith-preview) from the Introduction to LangSmith Course.

## Related

If you need to dynamically enable or disable tracing based on runtime conditions (such as client requirements, data sensitivity, or compliance policies), refer to [Conditional tracing](/langsmith/conditional-tracing) for examples.

