Building agents at scale introduces non-trivial, usage-based costs that can be difficult to track. LangSmith automatically records LLM token usage and costs for major providers, and also allows you to submit custom cost data for any additional components. This gives you a single, unified view of costs across your entire application, which makes it easy to monitor, understand, and debug your spend. This guide covers how to view costs in the LangSmith UI and how to track costs automatically or manually.

Viewing costs in the LangSmith UI

In the LangSmith UI, you can explore usage and spend in three main ways: first by understanding how tokens and costs are broken down, then by viewing those details within individual traces, and finally by inspecting aggregated metrics in project stats and dashboards.

Token and cost breakdowns

Token usage and costs are broken down into three categories:
  • Input: Tokens in the prompt sent to the model. Subtypes include cache reads, text tokens, image tokens, etc.
  • Output: Tokens generated in the response from the model. Subtypes include reasoning tokens, text tokens, image tokens, etc.
  • Other: Costs from tool calls, retrieval steps, or any custom runs.
You can view detailed breakdowns by hovering over cost sections in the UI. When available, each section is further categorized by subtype. You can inspect these breakdowns throughout the LangSmith UI, as described in the following section.

Where to view token and cost breakdowns

The trace tree shows the most detailed view of token usage and cost for a single trace. It displays the total usage for the entire trace, aggregated values for each parent run, and token and cost breakdowns for each child run. Open any run inside a tracing project to view its trace tree.
The project stats panel shows the total token usage and cost for all traces in a project.
Dashboards help you explore cost and token usage trends over time. The prebuilt dashboard for a tracing project shows total costs and a cost breakdown by input and output tokens. You can also configure custom cost tracking charts in custom dashboards.

Cost tracking

You can track costs in two ways:
  1. Costs for LLM calls can be automatically derived from token counts and model prices
  2. Costs for LLM calls or any other run type can be manually specified as part of the run data
The approach you use will depend on what you’re tracking and how your model pricing is structured:
| Method | Run type: LLM | Run type: Other |
| --- | --- | --- |
| Automatically | If LLM call costs are linear in token counts (derived from token counts and model prices) | Not applicable |
| Manually | If LLM call costs are non-linear (e.g. follow a custom cost function) | Send costs for any run type, e.g. tool calls, retrieval steps |

LLM calls: Automatically track costs based on token counts

To compute cost automatically from token usage, you need to provide token counts, the model name and provider, and the model price.
Follow the instructions below if you’re using model providers whose responses don’t follow the same patterns as OpenAI or Anthropic. These steps are only required if you are not:
  • Calling LLMs with LangChain
  • Using @traceable to trace LLM calls to OpenAI, Anthropic, or models that follow an OpenAI-compliant format
  • Using LangSmith wrappers for OpenAI or Anthropic.
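For example, if you call OpenAI through the LangSmith wrapper (the last case above), token usage is captured for you and none of the steps below are required. A minimal sketch, assuming the openai package is installed and your API keys are configured:

from openai import OpenAI
from langsmith.wrappers import wrap_openai

# The wrapper traces the call and records token usage from the response,
# so costs can be computed automatically from the model pricing map.
client = wrap_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I'd like to book a table for two."}],
)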
1. Send token counts
Many models include token counts as part of the response. You must extract this information and include it in your run using one of the following methods:
Set a usage_metadata field on the run’s metadata. The advantage of this approach is that you do not need to change your traced function’s runtime outputs.
from langsmith import traceable, get_current_run_tree

inputs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "I'd like to book a table for two."},
]

@traceable(
    run_type="llm",
    metadata={"ls_provider": "my_provider", "ls_model_name": "my_model"}
)
def chat_model(messages: list):
    # Imagine this is the real model output format your application expects
    assistant_message = {
        "role": "assistant",
        "content": "Sure, what time would you like to book the table for?"
    }

    # Token usage you compute or receive from the provider
    token_usage = {
        "input_tokens": 27,
        "output_tokens": 13,
        "total_tokens": 40,
        "input_token_details": {"cache_read": 10}
    }

    # Attach token usage to the LangSmith run
    run = get_current_run_tree()
    run.set(usage_metadata=token_usage)

    return assistant_message

chat_model(inputs)
Include the usage_metadata key directly within the object returned by your traced function. LangSmith will extract it from the output.
from langsmith import traceable

inputs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "I'd like to book a table for two."},
]
output = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "Sure, what time would you like to book the table for?"
            }
        }
    ],
    "usage_metadata": {
        "input_tokens": 27,
        "output_tokens": 13,
        "total_tokens": 40,
        "input_token_details": {"cache_read": 10}
    },
}

@traceable(
    run_type="llm",
    metadata={"ls_provider": "my_provider", "ls_model_name": "my_model"}
)
def chat_model(messages: list):
    return output

chat_model(inputs)
In either case, the usage_metadata dict should contain a subset of the following fields recognized by LangSmith. You can also view the full Python types or TypeScript interfaces directly.
  • input_tokens (number): Number of tokens used in the model input. Sum of all input token types.
  • output_tokens (number): Number of tokens used in the model response. Sum of all output token types.
  • total_tokens (number): Number of tokens used in the input and output. Optional; can be inferred as input_tokens + output_tokens.
  • input_token_details (object): Breakdown of input token types. Keys are token-type strings, values are counts, e.g. {"cache_read": 5}. Known fields include audio, text, image, cache_read, and cache_creation. Additional fields are possible depending on the model or provider.
  • output_token_details (object): Breakdown of output token types. Keys are token-type strings, values are counts, e.g. {"reasoning": 5}. Known fields include audio, text, image, and reasoning. Additional fields are possible depending on the model or provider.
  • input_cost (number): Cost of the input tokens.
  • output_cost (number): Cost of the output tokens.
  • total_cost (number): Total cost of the tokens. Optional; can be inferred as input_cost + output_cost.
  • input_cost_details (object): Breakdown of the input cost. Keys are token-type strings, values are cost amounts.
  • output_cost_details (object): Breakdown of the output cost. Keys are token-type strings, values are cost amounts.
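For reference, here is what a usage_metadata payload might look like when it combines token counts with the optional cost fields (the field names come from the list above; the numbers are made up):

# Illustrative usage_metadata dict; all values are hypothetical.
usage_metadata = {
    "input_tokens": 350,
    "input_token_details": {"text": 300, "cache_read": 50},
    "output_tokens": 80,
    "output_token_details": {"reasoning": 30, "text": 50},
    "total_tokens": 430,  # optional; can be inferred as input_tokens + output_tokens
    # Cost fields are optional; if omitted, costs are derived from the model pricing map.
    "input_cost": 7.0e-4,
    "input_cost_details": {"cache_read": 5.0e-5},
    "output_cost": 2.4e-4,
    "total_cost": 9.4e-4,  # optional; can be inferred as input_cost + output_cost
}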
Cost Calculations
The cost for a run is computed greedily from most-to-least specific token type. Suppose you set a price of $2 per 1M input tokens, with a detailed price of $1 per 1M cache_read input tokens, and $3 per 1M output tokens. If you uploaded the following usage metadata:
{
  "input_tokens": 20,
  "input_token_details": {"cache_read": 5},
  "output_tokens": 10,
  "total_tokens": 30,
}
Then, the token costs would be computed as follows:
# Notice that LangSmith computes the cache_read cost and then for any
# remaining input_tokens, the default input price is applied.
input_cost = 5 * 1e-6 + (20 - 5) * 2e-6  # 3.5e-5
output_cost = 10 * 3e-6  # 3e-5
total_cost = input_cost + output_cost  # 6.5e-5
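To make the greedy calculation concrete, here is a minimal sketch of the same logic in plain Python (illustrative only, not LangSmith's implementation; prices are expressed in dollars per token):

from typing import Dict

def compute_input_cost(
    usage: dict,
    default_price: float,
    detail_prices: Dict[str, float],
) -> float:
    # Price the detailed token types first, then bill any remaining
    # input tokens at the default input price.
    details = usage.get("input_token_details", {})
    cost = 0.0
    detailed_tokens = 0
    for token_type, count in details.items():
        cost += count * detail_prices.get(token_type, default_price)
        detailed_tokens += count
    cost += (usage["input_tokens"] - detailed_tokens) * default_price
    return cost

usage = {"input_tokens": 20, "input_token_details": {"cache_read": 5}}
print(compute_input_cost(usage, default_price=2e-6, detail_prices={"cache_read": 1e-6}))
# 3.5e-05, matching the worked example above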
2. Specify model name
When using a custom model, the following fields need to be specified in a run’s metadata in order to associate token counts with costs. It’s also helpful to provide these metadata fields to identify the model when viewing traces and when filtering.
  • ls_provider: The provider of the model, e.g. "openai", "anthropic"
  • ls_model_name: The name of the model, e.g. "gpt-4o-mini", "claude-3-opus-20240229"
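If you prefer not to hard-code these values in the decorator, they can also be supplied at call time. The sketch below assumes the langsmith_extra keyword argument accepted by @traceable-decorated functions:

from langsmith import traceable

@traceable(run_type="llm")
def chat_model(messages: list):
    # Placeholder response; a real implementation would call your provider here.
    return {"role": "assistant", "content": "Sure, what time works for you?"}

# Supply ls_provider / ls_model_name as run metadata at invocation time.
chat_model(
    [{"role": "user", "content": "I'd like to book a table for two."}],
    langsmith_extra={
        "metadata": {"ls_provider": "my_provider", "ls_model_name": "my_model"}
    },
)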
3. Set model prices
A model pricing map maps model names to their per-token prices, which are used to compute costs from token counts. LangSmith’s model pricing table is used for this.
The table comes with pricing information for most OpenAI, Anthropic, and Gemini models. You can add prices for other models, or overwrite pricing for default models if you have custom pricing.
For models that have different pricing for different token types (e.g., multimodal or cached tokens), you can specify a breakdown of prices for each token type. Hovering over the ... next to the input/output prices shows you the price breakdown by token type.
Updates to the model pricing map are not reflected in the costs for traces already logged. We do not currently support backfilling model pricing changes.
To modify the default model prices, create a new entry with the same model, provider, and match pattern as the default entry. To create a new entry in the model pricing map, click the + Model button in the top right corner. Here, you can specify the following fields:
  • Model Name: The human-readable name of the model.
  • Input Price: The cost per 1M input tokens for the model. This number is multiplied by the number of tokens in the prompt to calculate the prompt cost.
  • Input Price Breakdown (Optional): The breakdown of price for each different type of input token, e.g. cache_read, video, audio.
  • Output Price: The cost per 1M output tokens for the model. This number is multiplied by the number of tokens in the completion to calculate the completion cost.
  • Output Price Breakdown (Optional): The breakdown of price for each different type of output token, e.g. reasoning, image, etc.
  • Model Activation Date (Optional): The date from which the pricing is applicable. Only runs after this date will apply this model price.
  • Match Pattern: A regex pattern to match the model name. This is used to match the value for ls_model_name in the run metadata.
  • Provider (Optional): The provider of the model. If specified, this is matched against ls_provider in the run metadata.
Once you have set up the model pricing map, LangSmith will automatically calculate and aggregate the token-based costs for traces based on the token counts provided in the LLM invocations.
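For intuition about the Match Pattern field, the pattern is a regular expression tested against ls_model_name. The snippet below is illustrative only (the pattern is hypothetical, and the actual matching happens inside LangSmith):

import re

# Hypothetical match pattern covering the gpt-4o family of model names.
match_pattern = r"gpt-4o.*"

print(bool(re.match(match_pattern, "gpt-4o-mini")))             # True
print(bool(re.match(match_pattern, "claude-3-opus-20240229")))  # False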

LLM calls: Sending costs directly

If your model follows a non-linear pricing scheme, we recommend calculating costs client-side and sending them to LangSmith as usage_metadata.
Gemini 3 Pro Preview and Gemini 2.5 Pro follow a pricing scheme with a stepwise cost function. We support this pricing scheme for Gemini by default. For any other models with non-linear pricing, you will need to follow these instructions to calculate costs.
from langsmith import traceable, get_current_run_tree

inputs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "I'd like to book a table for two."},
]

@traceable(
    run_type="llm",
    metadata={"ls_provider": "my_provider", "ls_model_name": "my_model"}
)
def chat_model(messages: list):
    llm_output = {
        "choices": [
            {
                "message": {
                    "role": "assistant",
                    "content": "Sure, what time would you like to book the table for?"
                }
            }
        ],
        "usage_metadata": {
            # Specify cost (in dollars) for the inputs and outputs
            "input_cost": 1.1e-6,
            "input_cost_details": {"cache_read": 2.3e-7},
            "output_cost": 5.0e-6,
        },
    }
    run = get_current_run_tree()
    run.set(usage_metadata=llm_output["usage_metadata"])
    return llm_output["choices"][0]["message"]

chat_model(inputs)
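As a purely illustrative example of such a scheme, the helper below bills the whole prompt at a higher rate once it crosses a token threshold; the tier boundary and prices are made up, so substitute your provider's published pricing:

# Hypothetical stepwise pricing: the entire prompt is billed at a higher rate
# once it exceeds the threshold. All numbers are made up for illustration.
def stepwise_input_cost(input_tokens: int) -> float:
    threshold = 200_000
    low_rate = 1.25e-6   # dollars per input token at or below the threshold
    high_rate = 2.5e-6   # dollars per input token above the threshold
    rate = low_rate if input_tokens <= threshold else high_rate
    return input_tokens * rate

# Send the result as usage_metadata, as shown in the example above:
usage_metadata = {"input_cost": stepwise_input_cost(250_000)}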

Other runs: Sending costs

You can also send cost information for any non-LLM run, such as a tool call. The cost must be specified in the total_cost field under the run’s usage_metadata.
Set a total_cost field on the run’s usage_metadata. The advantage of this approach is that you do not need to change your traced function’s runtime outputs.
from langsmith import traceable, get_current_run_tree

# Example tool: get_weather
@traceable(run_type="tool", name="get_weather")
def get_weather(city: str):
    # Your tool logic goes here
    result = {
        "temperature_f": 68,
        "condition": "sunny",
        "city": city,
    }

    # Cost for this tool call (computed however you like)
    tool_cost = 0.0015

    # Attach usage metadata to the LangSmith run
    run = get_current_run_tree()
    run.set(usage_metadata={"total_cost": tool_cost})

    # Return only the actual tool result (no usage info)
    return result

tool_response = get_weather("San Francisco")
Include the usage_metadata key directly within the object returned by your traced function. LangSmith will extract it from the output.
from langsmith import traceable

# Example tool: get_weather
@traceable(run_type="tool", name="get_weather")
def get_weather(city: str):
    # Your tool logic goes here
    result = {
        "temperature_f": 68,
        "condition": "sunny",
        "city": city,
    }

    # Attach tool call costs here
    return {
        **result,
        "usage_metadata": {
            "total_cost": 0.0015,   # <-- cost for this tool call
        },
    }

tool_response = get_weather("San Francisco")
