Mistral AI provides hosted access to its open-weight and commercial language models via a simple API. This guide shows you how to trace Mistral API calls with LangSmith, allowing you to record prompts, responses, and metadata for debugging and observability. Traces are sent directly to LangSmith using the LangSmith SDK’s @traceable decorator.

Installation

Install Mistral’s official library and LangSmith:
pip install mistralai langsmith
The mistralai package provides the official Python client for the Mistral API; langsmith provides the @traceable decorator used in this guide.

Setup

Set your API keys and project name:
export MISTRAL_API_KEY="<your_mistral_api_key>"
export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="<your_langsmith_api_key>"
export LANGSMITH_PROJECT="<your_project_name>"  # optional
  • Ensure you have a Mistral API key from your Mistral AI account (set this as MISTRAL_API_KEY).
  • Setting LANGSMITH_TRACING=true and providing your LangSmith API key (LANGSMITH_API_KEY) activates automatic logging of traces.
  • Specify a LANGSMITH_PROJECT name to organize traces by project; if not set, traces go to the default project (named “default”).
  • The LANGSMITH_TRACING flag must be set to true for any traces to be recorded. (A Python alternative to the shell exports is sketched after this list.)
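As an alternative to exporting shell variables, you can set the same configuration from Python before creating any clients. The minimal sketch below assumes it runs before the Mistral client is constructed and before any traced code executes; the placeholder values are yours to replace.
    import os

    # Equivalent configuration set from Python; replace the placeholders
    # with your own keys and project name before running any traced code.
    os.environ["MISTRAL_API_KEY"] = "<your_mistral_api_key>"
    os.environ["LANGSMITH_TRACING"] = "true"
    os.environ["LANGSMITH_API_KEY"] = "<your_langsmith_api_key>"
    os.environ["LANGSMITH_PROJECT"] = "<your_project_name>"  # optional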

Configure tracing

  1. Instrument the Mistral API call with LangSmith. In your script, create a Mistral client and wrap a call in a traced function:
    import os
    from mistralai import Mistral
    from langsmith import traceable
    
    # Initialize Mistral API client with your API key
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    
    @traceable(
        run_type="llm",
        metadata={"ls_provider": "mistral", "ls_model_name": "mistral-medium-latest"},
    )
    def query_mistral(prompt: str):
        response = client.chat.complete(
            model="mistral-medium-latest",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message
    
    # Example usage
    result = query_mistral("Hello, how are you?")
    print("Mistral response:", result.content)
    
    In this example, you use the Mistral SDK to send a chat completion request with a user prompt and retrieve the model’s answer. The @traceable decorator from the LangSmith Python SDK wraps the query_mistral function so that each invocation is logged as a run of type "llm". The metadata={"ls_provider": "mistral", "ls_model_name": "mistral-medium-latest"} argument tags each trace with the provider and model name, which LangSmith also uses to match model pricing for cost tracking. If you work in JavaScript or TypeScript, the LangSmith JavaScript SDK provides an equivalent traceable wrapper. You can also attach metadata or tags to an individual call at invocation time; see the sketch after these steps.
  2. Execute your script to generate a trace. For example:
    python mistral_trace.py
    
    The query_mistral("Hello, how are you?") call reaches out to the Mistral API, and because of the @traceable wrapper, LangSmith logs the call’s inputs and outputs as a new trace. You’ll see the model’s response printed to the console, and a corresponding run will appear in LangSmith.
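To annotate an individual call with extra metadata or tags, you can pass them at invocation time. The sketch below assumes the langsmith_extra keyword accepted by @traceable-decorated functions in recent LangSmith Python SDK versions; the experiment label and tags are illustrative placeholders.
    # Per-call annotations passed through the traced wrapper; the metadata
    # and tag values here are hypothetical examples.
    result = query_mistral(
        "Summarize the benefits of tracing in one sentence.",
        langsmith_extra={
            "metadata": {"experiment": "prompt-v2"},
            "tags": ["mistral", "demo"],
        },
    )
    print(result.content)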

View traces in LangSmith

After running the example, you can inspect the recorded traces in the LangSmith UI:
  1. Open the LangSmith UI and log in to your account.
  2. Select the project you used for this integration (for example, the name set in LANGSMITH_PROJECT, or default if you didn’t set one).
  3. Find the trace corresponding to your Mistral API call. It will be identified by the function name (query_mistral) or a custom name if provided.
  4. Click on the trace to open it. You’ll be able to inspect the model input and output, including the prompt messages you sent and the response from Mistral, as well as timing information (latency) and any error details if the call failed.
With LangSmith’s tracing, you have full visibility into your Mistral calls: you can debug the behavior of Mistral’s models, monitor performance (for example, response time and token usage), and compare runs with different parameters using the metadata tags. You can also query recorded runs programmatically, as sketched below.
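In addition to the UI, you can retrieve recorded runs with the LangSmith SDK. The snippet below is a minimal sketch that assumes LANGSMITH_API_KEY is set and that your runs were logged to the project named in LANGSMITH_PROJECT (or "default" if unset).
    import os
    from langsmith import Client

    client = Client()
    project = os.environ.get("LANGSMITH_PROJECT", "default")

    # List the most recent LLM runs recorded in the project.
    for run in client.list_runs(project_name=project, run_type="llm", limit=5):
        print(run.name, run.id, run.start_time)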

Cost tracking

Although many Mistral models are released as open weights, calls to the hosted Mistral API typically incur usage-based costs that depend on the model and your plan. LangSmith can automatically associate costs with traced LLM calls by estimating token usage and applying model-specific pricing. When tracing Mistral API calls, LangSmith uses the recorded prompt and response messages to calculate token counts and attach cost information to each run. To enable automatic cost tracking for LLM calls, refer to Automatically track costs based on token counts. Once enabled, costs appear directly in the LangSmith UI alongside each traced Mistral run, so you can monitor usage and compare experiments over time.
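If you want cost calculations to use the exact token counts reported by Mistral rather than estimates, one option is to return them from the traced function. The sketch below assumes the usage_metadata output convention for run_type="llm" runs described in the LangSmith cost-tracking docs, and the usage fields exposed by the mistralai SDK; verify both against the current documentation.
    import os
    from mistralai import Mistral
    from langsmith import traceable

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    @traceable(
        run_type="llm",
        metadata={"ls_provider": "mistral", "ls_model_name": "mistral-medium-latest"},
    )
    def query_mistral_with_usage(prompt: str) -> dict:
        response = client.chat.complete(
            model="mistral-medium-latest",
            messages=[{"role": "user", "content": prompt}],
        )
        # Return the message content together with the token counts reported
        # by the Mistral API so LangSmith can attach costs from exact usage.
        return {
            "content": response.choices[0].message.content,
            "usage_metadata": {
                "input_tokens": response.usage.prompt_tokens,
                "output_tokens": response.usage.completion_tokens,
                "total_tokens": response.usage.total_tokens,
            },
        }

    result = query_mistral_with_usage("Write a haiku about tracing.")
    print(result["content"])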