# Trace Mistral applications

[Mistral](https://mistral.ai/) provides hosted access to open-weight language models via a simple API.

This guide shows you how to trace Mistral API calls with LangSmith, allowing you to record prompts, responses, and metadata for debugging and observability. Traces are sent directly to LangSmith using the [LangSmith SDK](https://reference.langchain.com/python/langsmith/) and standard span instrumentation.

## Installation

Install Mistral’s official library and LangSmith:

<CodeGroup>
  ```bash Python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  pip install mistralai langsmith
  ```

  ```bash JavaScript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  npm install @mistralai/mistralai langsmith dotenv
  ```
</CodeGroup>

[`mistralai`](https://docs.mistral.ai/getting-started/clients) provides a Mistral client for interacting with Mistral’s API.

## Setup

Set your [API keys](/langsmith/create-account-api-key) and project name:

```bash theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
export MISTRAL_API_KEY="<your_mistral_api_key>"
export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="<your_langsmith_api_key>"
export LANGSMITH_PROJECT="<your_project_name>"  # optional
```

* Ensure you have a Mistral API key from your [Mistral AI account](https://v2.auth.mistral.ai/login) (set this as `MISTRAL_API_KEY`).
* Setting `LANGSMITH_TRACING=true` and providing your LangSmith API key (`LANGSMITH_API_KEY`) activates automatic logging of traces; if `LANGSMITH_TRACING` is not `true`, no traces are recorded.
* Specify a [`LANGSMITH_PROJECT`](/langsmith/log-traces-to-project) name to organize traces by project; if not set, traces go to the default project (named "default").
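Before making any calls, you can sanity-check that the variables these SDKs read are in place. A minimal sketch (the `tracing_ready` helper is hypothetical, not part of either SDK):

```python
import os

# Hypothetical helper: confirm the environment variables exported above
# are present before making any traced calls.
REQUIRED = ["MISTRAL_API_KEY", "LANGSMITH_API_KEY"]

def tracing_ready(env: dict) -> bool:
    """Return True when tracing is switched on and the required keys exist."""
    if env.get("LANGSMITH_TRACING", "").lower() != "true":
        return False
    return all(env.get(k) for k in REQUIRED)

# Example with placeholder values (never hard-code real keys):
example_env = {
    "MISTRAL_API_KEY": "sk-...",
    "LANGSMITH_API_KEY": "ls-...",
    "LANGSMITH_TRACING": "true",
}
print(tracing_ready(example_env))  # True
```

In a real script you would pass `os.environ` instead of `example_env`.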

## Configure tracing

1. Instrument the Mistral API call with LangSmith. In your script, create a Mistral client and wrap a call in a traced function:

   <CodeGroup>
     ```python Python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
     import os
     from mistralai import Mistral
     from langsmith import traceable

     # Initialize Mistral API client with your API key
     client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

     @traceable(
         run_type="llm",
         metadata={"ls_provider": "mistral", "ls_model_name": "mistral-medium-latest"},
     )
     def query_mistral(prompt: str):
         response = client.chat.complete(
             model="mistral-medium-latest",
             messages=[{"role": "user", "content": prompt}],
         )
         return response.choices[0].message

     # Example usage
     result = query_mistral("Hello, how are you?")
     print("Mistral response:", result.content)
     ```

     ```typescript TypeScript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
     import { traceable } from "langsmith/traceable";
     import { Mistral } from "@mistralai/mistralai";
     import "dotenv/config";

     const mistral = new Mistral({
       apiKey: process.env.MISTRAL_API_KEY,
     });

     const tracedChatCompletion = traceable(
       async (params: {
         model: string;
         messages: Array<{ role: string; content: string }>;
       }) => {
         const response = await mistral.chat.complete(params);
         // Return the message content so LangSmith captures it correctly
         return response.choices?.[0]?.message?.content;
       },
       {
         name: "Mistral Chat Completion",
         run_type: "llm",
         metadata: {
           ls_provider: "mistral",
           ls_model_name: "mistral-small-latest",
         },
       }
     );

     async function main() {
       const response = await tracedChatCompletion({
         model: "mistral-small-latest",
         messages: [
           { role: "user", content: "Say hello in one short sentence." },
         ],
       });

       console.log(response);
     }

     main();
     ```
   </CodeGroup>

   In this example, you use the [Mistral SDK](https://docs.mistral.ai/getting-started/clients) to send a chat completion request (with a user prompt) and retrieve the model’s answer.

   The [`@traceable`](https://reference.langchain.com/python/langsmith/run_helpers/traceable) decorator (from the [LangSmith Python SDK](https://reference.langchain.com/python/langsmith/observability/sdk/)) wraps the `query_mistral` function so that each invocation is logged as a trace run of type `"llm"`. The `metadata={"ls_provider": "mistral", "ls_model_name": "mistral-medium-latest"}` tags the trace with the provider (Mistral) and model name.

   You can also refer to the [LangSmith JavaScript SDK](https://reference.langchain.com/javascript/modules/langsmith.html).

2. Execute your script to generate a trace. For example:

   <CodeGroup>
     ```bash Python theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
     python mistral_trace.py
     ```

     ```bash JavaScript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
     node index.js
     ```
   </CodeGroup>

   The `query_mistral("Hello, how are you?")` call reaches the Mistral API, and because of the `@traceable`/`traceable` wrapper, LangSmith logs the call’s inputs and outputs as a new trace. You'll see the model’s response printed to the console, and a corresponding run will appear in [LangSmith](https://smith.langchain.com?utm_source=docs\&utm_medium=cta\&utm_campaign=langsmith-signup\&utm_content=langsmith-trace-with-mistral).
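As an aside, the `messages` payload both examples send follows the standard chat-completions shape: a list of `{"role", "content"}` dicts, optionally led by a system message. A minimal, illustrative sketch (the `build_messages` helper is hypothetical, not part of the Mistral SDK):

```python
# Illustrative only: the chat-completions message format used in the
# examples above. Each message is a dict with "role" and "content".
def build_messages(user_prompt, system=None, history=None):
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.extend(history or [])  # prior user/assistant turns, if any
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages(
    "Hello, how are you?",
    system="You are a concise assistant.",
)
print([m["role"] for m in msgs])  # ['system', 'user']
```

Because `@traceable` records the function's inputs, whatever message list you pass in is captured verbatim on the trace.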

## View traces in LangSmith

After running the example, you can inspect the recorded traces in the [LangSmith UI](https://smith.langchain.com?utm_source=docs\&utm_medium=cta\&utm_campaign=langsmith-signup\&utm_content=langsmith-trace-with-mistral):

1. Open the LangSmith UI and log in to your account.
2. Select the project you used for this integration (for example, the name set in `LANGSMITH_PROJECT`, or `default` if you didn’t set one).
3. Find the trace corresponding to your Mistral API call. It will be identified by the function name (`query_mistral`) or a custom name if provided.
4. Click on the trace to open it. You’ll be able to inspect the model input and output, including the prompt messages you sent and the response from Mistral, as well as timing information (latency) and any error details if the call failed.

With LangSmith’s tracing, you have full visibility into your Mistral calls—allowing you to debug the behavior of Mistral’s models, monitor performance (e.g., response time and token usage), and compare runs with different parameters using the metadata tags.

## Cost tracking

Although Mistral models are open-weight, using the hosted Mistral API may incur usage-based costs depending on your plan.

LangSmith can automatically associate costs with traced LLM calls by estimating token usage and applying model-specific pricing. When tracing Mistral API calls, LangSmith uses the recorded prompt and response messages to calculate token counts and attach cost information to each run.
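The arithmetic behind this is simple: token counts multiplied by a per-token price for input and output. A hedged sketch (the prices below are made-up placeholders, not real Mistral rates; actual pricing is configured in LangSmith's model pricing settings):

```python
# Sketch of per-run cost arithmetic: tokens x price per 1K tokens.
# The prices are placeholders for illustration, not real Mistral rates.
def run_cost(prompt_tokens, completion_tokens,
             input_price_per_1k, output_price_per_1k):
    return ((prompt_tokens / 1000) * input_price_per_1k
            + (completion_tokens / 1000) * output_price_per_1k)

cost = run_cost(1200, 300, input_price_per_1k=0.4, output_price_per_1k=2.0)
print(f"${cost:.4f}")  # $1.0800
```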

To enable automatic cost tracking for LLM calls, refer to [Automatically track costs based on token counts](/langsmith/cost-tracking#llm-calls:-automatically-track-costs-based-on-token-counts).

Once enabled, costs appear directly in the LangSmith UI alongside each traced Mistral run, so that you can monitor usage and compare experiments over time.

***

<div className="source-links">
  <Callout icon="terminal-2">
    [Connect these docs](/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers.
  </Callout>

  <Callout icon="edit">
    [Edit this page on GitHub](https://github.com/langchain-ai/docs/edit/main/src/langsmith/trace-with-mistral.mdx) or [file an issue](https://github.com/langchain-ai/docs/issues/new/choose).
  </Callout>
</div>
