# LangSmith CLI

> Query and manage LangSmith projects, traces, runs, datasets, evaluators, experiments, and threads from the terminal

The LangSmith CLI is a command-line tool for querying and managing your LangSmith data. It's designed for both developers and AI coding agents: it outputs JSON by default for scripting, and offers a `--format pretty` option for human-readable tables. Use it when you need scriptable access to your LangSmith data, such as bulk exports, automation, or giving a coding agent direct access to your [traces, runs, and datasets](/langsmith/observability-concepts).

<Warning>
  The LangSmith CLI is in **alpha**. Commands, flags, and output schemas may change between releases. Report issues on [GitHub](https://github.com/langchain-ai/langsmith-cli/issues).
</Warning>

## Install

<CodeGroup>
  ```bash macOS / Linux (recommended)
  curl -fsSL https://cli.langsmith.com/install.sh | sh
  ```

  ```powershell Windows
  irm https://cli.langsmith.com/install.ps1 | iex
  ```

  ```bash GitHub Releases
  # Download the latest binary for your platform:
  # https://github.com/langchain-ai/langsmith-cli/releases
  ```

  ```bash Go install
  go install github.com/langchain-ai/langsmith-cli/cmd/langsmith@latest
  ```
</CodeGroup>

To upgrade at any time:

```bash
langsmith self-update
```

Use the `--dry-run` flag to preview the update without installing.

## Authenticate

`langsmith auth login` requires LangSmith CLI `v0.2.30` or later. `langsmith profile` commands require LangSmith CLI `v0.2.26` or later.

The recommended local setup is to authenticate with OAuth:

<Note>
  `langsmith auth login` currently supports LangSmith Cloud (SaaS) only. For self-hosted or other non-SaaS LangSmith endpoints, authenticate with an API key or create an API-key profile.
</Note>

```bash
langsmith auth login
```

This opens a browser-based authorization flow and stores OAuth tokens in `~/.langsmith/config.json` under the selected [profile](/langsmith/profile-configuration). Select a profile with `--profile` or `LANGSMITH_PROFILE`:

```bash
langsmith auth login --profile dev
langsmith --profile dev project list
```

In headless environments, pass `--no-browser` and open the printed URL manually:

```bash
langsmith auth login --no-browser --workspace-id <workspace-id>
```

To manage saved profiles:

```bash
langsmith profile list
langsmith profile create dev --workspace-id <workspace-id> --set-current
langsmith profile use dev
langsmith profile set-workspace <workspace-id>
```

For the full profile configuration reference, see [Profile configuration](/langsmith/profile-configuration).

You can also authenticate with an API key directly. Set your [API key](/langsmith/create-account-api-key) as an environment variable:

```bash
export LANGSMITH_API_KEY="lsv2_..."
```

Optionally, set a default project for queries:

```bash
export LANGSMITH_PROJECT="my-default-project"
```

If you're using LangSmith [self-hosted](/langsmith/self-hosted) or [hybrid](/langsmith/hybrid), also set the endpoint:

```bash
export LANGSMITH_ENDPOINT="https://your-langsmith-instance.com"
```

Or, pass them as flags per command:

```bash
langsmith --api-key lsv2_... trace list --project my-app
```

## Quickstart

The following commands cover the core resource types:

```bash
# List tracing projects
langsmith project list

# List recent traces in a project
langsmith trace list --project my-app --limit 5

# Get a specific trace with full detail
langsmith trace get <trace-id> --project my-app --full

# List LLM runs with token counts
langsmith run list --project my-app --run-type llm --include-metadata

# Datasets and experiments
langsmith dataset list
langsmith experiment list --dataset my-eval-set

# Conversation threads
langsmith thread list --project my-chatbot

# Sandboxes
langsmith sandbox list
langsmith sandbox tunnel my-vm --remote-port 5432
```

## Output formats

**Default**

JSON to stdout — easy to pipe, script, or feed to an agent:

```bash
langsmith trace list --project my-app
```

**Pretty tables**

`--format pretty` for human-readable output:

```bash
langsmith --format pretty trace list --project my-app
```

**Write to file**

`-o <path>`:

```bash
langsmith trace list --project my-app -o traces.json
```
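Because the default output is plain JSON, it composes with standard tooling. Below is a minimal post-processing sketch; the `id` and `status` field names are assumptions for illustration, not a documented schema, and the hand-written sample file stands in for real CLI output:

```bash
# In practice the JSON would come from the CLI itself, e.g.:
#   langsmith trace list --project my-app -o traces.json
# Stand-in sample (field names assumed, not a documented schema):
cat > traces.json <<'EOF'
[
  {"id": "trace-1", "status": "error"},
  {"id": "trace-2", "status": "success"}
]
EOF

# Pull out the IDs of failed traces using only the python3 stdlib.
python3 - <<'EOF'
import json

with open("traces.json") as f:
    traces = json.load(f)

for t in traces:
    if t["status"] == "error":
        print(t["id"])
EOF
```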

## Commands

Each command group targets a specific LangSmith resource. Most commands support `--limit`, `--offset`, and a shared set of [filter flags](#filter-flags).

### List projects

Returns up to 20 projects by default, sorted by most recent activity. Lists tracing projects only. (Use [`experiment list`](#view-experiments) to list evaluation experiments.)

```bash
langsmith project list
langsmith project list --limit 50 --name-contains chatbot
langsmith --format pretty project list
```

### Query traces

Defaults to the last 7 days, newest first. Use `--since` or `--last-n-minutes` to change the time window.

```bash
langsmith trace list --project my-app --limit 50 --last-n-minutes 60
langsmith trace list --project my-app --error                     # errors only
langsmith trace list --project my-app --min-latency 5             # slow traces (>5s)
langsmith trace list --project my-app --tags production           # filter by tag
langsmith trace list --project my-app --full                      # all fields
langsmith trace list --project my-app --show-hierarchy --limit 3  # include full run tree
langsmith trace get <trace-id> --project my-app --full
langsmith trace export ./traces --project my-app --limit 20 --full
```

### Query runs

Defaults to 50 results (most other commands default to 20). The same 7-day time window applies. Use `--since` or `--last-n-minutes` to override.

```bash
langsmith run list --project my-app --run-type llm
langsmith run list --project my-app --run-type tool --name search
langsmith run list --project my-app --min-tokens 1000 --include-metadata
langsmith run get <run-id> --full
langsmith run export llm_calls.jsonl --project my-app --run-type llm --full
```
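The `.jsonl` name in the export example suggests line-delimited JSON, one run per line, which streams well through line-oriented tools. A sketch with a stand-in file; the `total_tokens` field is illustrative, not a confirmed schema:

```bash
# A real export would come from:
#   langsmith run export llm_calls.jsonl --project my-app --run-type llm --full
# Stand-in data with assumed field names:
cat > llm_calls.jsonl <<'EOF'
{"id": "run-1", "total_tokens": 1200}
{"id": "run-2", "total_tokens": 300}
EOF

# Sum token usage across every exported run.
python3 - <<'EOF'
import json

total = sum(json.loads(line)["total_tokens"] for line in open("llm_calls.jsonl"))
print(total)
EOF
```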

### Query threads

`--project` is required for all thread commands.

```bash
langsmith thread list --project my-chatbot --last-n-minutes 120
langsmith thread get <thread-id> --project my-chatbot --full
```

### Manage datasets

`dataset export` exports the examples (rows) within a dataset, not the dataset metadata itself.

```bash
langsmith dataset list
langsmith dataset list --name-contains eval
langsmith dataset get my-dataset
langsmith dataset create --name my-eval-set --description "QA pairs for v2"
langsmith dataset delete my-old-dataset --yes
langsmith dataset export my-dataset ./data.json --limit 500
langsmith dataset upload data.json --name new-dataset
```
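This page doesn't specify the file format `dataset upload` expects, so treat the following as a hypothetical shape, mirrored from the `--inputs`/`--outputs` flags of `example create`; verify against the actual command before relying on it:

```bash
# Hypothetical upload file -- the schema is assumed, not documented here.
cat > data.json <<'EOF'
[
  {
    "inputs": {"question": "What is LangSmith?"},
    "outputs": {"answer": "A platform for LLM observability"}
  }
]
EOF

# Sanity-check that the file parses as JSON before uploading.
python3 -c 'import json; json.load(open("data.json"))'

# Then:
#   langsmith dataset upload data.json --name new-dataset
```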

### Manage examples

Use `--split` to assign examples to named splits (such as `test` or `train`) when creating or listing.

```bash
langsmith example list --dataset my-dataset --limit 50
langsmith example list --dataset my-dataset --split test
langsmith example create --dataset my-dataset \
  --inputs '{"question": "What is LangSmith?"}' \
  --outputs '{"answer": "A platform for LLM observability"}' \
  --split test
langsmith example delete <example-id> --yes
```

### Manage evaluators

Evaluators can be offline (run against a dataset during experiments) or online (run against a live project). Use `--sampling-rate` to evaluate only a fraction of production runs, and `--replace` to overwrite an existing evaluator by name.

```bash
langsmith evaluator list
langsmith evaluator upload evals.py --name accuracy \
  --function check_accuracy --dataset my-eval-set
langsmith evaluator upload evals.py --name latency-check \
  --function check_latency --project my-app --sampling-rate 0.5
langsmith evaluator upload evals.py --name accuracy \
  --function check_accuracy_v2 --dataset my-eval-set --replace --yes
langsmith evaluator delete accuracy --yes
```
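The upload commands above reference an `evals.py` file. As a sketch of what such a file might contain: the `(run, example) -> dict` signature below follows the LangSmith SDK's evaluator convention, but this page doesn't document the exact shape the CLI expects, so verify before relying on it.

```bash
# Hypothetical evals.py -- signature assumed from the SDK convention.
cat > evals.py <<'EOF'
def check_accuracy(run, example):
    # Compare the run's output to the reference answer in the example.
    predicted = (run.outputs or {}).get("answer")
    expected = (example.outputs or {}).get("answer")
    return {"key": "accuracy", "score": float(predicted == expected)}
EOF

# Then:
#   langsmith evaluator upload evals.py --name accuracy \
#     --function check_accuracy --dataset my-eval-set
```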

### View experiments

`experiment list` shows evaluation experiments, not tracing projects. (Use [`project list`](#list-projects) to list tracing projects.)

```bash
langsmith experiment list
langsmith experiment list --dataset my-eval-set
langsmith experiment get my-experiment-2024-01-15
```

### Manage sandboxes

Sandbox commands let you build snapshots, create sandboxes, execute commands, open interactive consoles, and tunnel TCP ports to services running inside sandboxes.

See [Sandbox CLI](/langsmith/sandbox-cli) for the full sandbox command reference.

### Call the LangSmith API directly

The `api` command is an authenticated, scriptable wrapper around the raw LangSmith REST API — useful for endpoints the typed commands above don't cover, or for piping JSON into and out of shell scripts. It's modeled after `gh api` and `curl`: pass the path as the only positional argument, and use `-X` to set the HTTP method (defaults to `GET`). Auth headers (`x-api-key`, `x-tenant-id`) are injected automatically.

```bash
# GET (default method) — query string supported in the path
langsmith api sessions?limit=5

# Discover endpoints from the OpenAPI spec
langsmith api ls --tag datasets
langsmith api info GET sessions

# Typed JSON fields with -F (numbers, booleans, null, objects, arrays parsed as JSON)
# Method auto-promotes to POST when -F/-f/--input/--body is supplied
langsmith api runs/query -F session_id=abc -F limit=10

# String-typed fields with -f (always sent as a JSON string, even if numeric)
langsmith api datasets -f name=my-dataset -f description="QA pairs"

# Other HTTP methods via -X
langsmith api sessions/abc-123 -X DELETE

# Send a request body from a file or stdin
langsmith api datasets --input create-dataset.json
echo '{"name":"test"}' | langsmith api sessions --input -

# Force GET with fields — fields go to the query string instead of a body
langsmith api runs -X GET -F limit=5 -F session=abc

# Inspect response status + headers
langsmith api sessions --include

# Add custom headers
langsmith api sessions -H "Accept: text/csv"
```

Key flags:

| Flag          | Short | Default | Description                                                                               |
| ------------- | ----- | ------- | ----------------------------------------------------------------------------------------- |
| `--method`    | `-X`  | `GET`   | HTTP method                                                                               |
| `--field`     | `-F`  | —       | Typed JSON field as `key=value`. Repeatable. Use `@<path>` or `@-` for file/stdin values. |
| `--raw-field` | `-f`  | —       | String JSON field as `key=value`. Repeatable.                                             |
| `--input`     | —     | —       | File to use as the request body (`-` for stdin)                                           |
| `--body`      | —     | —       | Raw request body (JSON string, `@file`, or `@-` for stdin)                                |
| `--header`    | `-H`  | —       | Additional headers as `Key:Value`. Repeatable.                                            |
| `--include`   | `-i`  | `false` | Print response status line and headers before body                                        |

`--input` and `--body` are mutually exclusive. Subcommands `langsmith api ls` and `langsmith api info` browse and describe endpoints from the cached OpenAPI spec — pass `--refresh` to re-fetch.
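The practical difference between `-F` and `-f` is the JSON type of the value that lands in the request body. A quick stdlib illustration of the two serializations (this models the documented behavior; it does not call the CLI):

```bash
python3 - <<'EOF'
import json

# -F limit=10 parses the value, so it is sent as a JSON number:
print(json.dumps({"limit": 10}))
# -f limit=10 always sends a JSON string, even for numeric input:
print(json.dumps({"limit": "10"}))
EOF
```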

## Filter flags

Most `trace` and `run` commands share these filters:

| Flag                              | Description                      | Example                          |
| --------------------------------- | -------------------------------- | -------------------------------- |
| `--project`                       | Project name                     | `--project my-app`               |
| `--limit, -n`                     | Max results                      | `-n 10`                          |
| `--offset`                        | Pagination offset                | `--offset 20`                    |
| `--last-n-minutes`                | Override the 7-day default       | `--last-n-minutes 60`            |
| `--since`                         | After ISO timestamp              | `--since 2024-01-15T00:00:00Z`   |
| `--error` / `--no-error`          | Filter by error status           | `--error`                        |
| `--name`                          | Name search (case-insensitive)   | `--name ChatOpenAI`              |
| `--run-type`                      | Run type (`llm` or `tool`)       | `--run-type llm`                 |
| `--min-latency` / `--max-latency` | Latency range in seconds         | `--min-latency 2.5`              |
| `--min-tokens`                    | Minimum total tokens             | `--min-tokens 1000`              |
| `--tags`                          | Tags, comma-separated (OR logic) | `--tags prod,v2`                 |
| `--filter`                        | Raw LangSmith filter DSL         | `--filter 'eq(status, "error")'` |
| `--trace-ids`                     | Specific trace IDs               | `--trace-ids abc123,def456`      |

**Detail flags** — control which fields are included in the response:

| Flag                 | Adds                            |
| -------------------- | ------------------------------- |
| `--include-metadata` | Status, duration, tokens, costs |
| `--include-io`       | Inputs, outputs, error          |
| `--include-feedback` | Feedback stats                  |
| `--full`             | All of the above                |
| `--show-hierarchy`   | Full run tree (traces only)     |
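The `--limit`/`--offset` pair supports simple pagination loops for bulk collection. A sketch, assuming (unverified) that a page is a JSON array and that an empty array `[]` marks the end; `fetch_page` is a stand-in for the real call shown in its comment:

```bash
# Stand-in for: langsmith run list --project my-app --limit 100 --offset "$1"
fetch_page() {
  case "$1" in
    0) echo '[{"id": "run-1"}, {"id": "run-2"}]' ;;
    *) echo '[]' ;;
  esac
}

offset=0
: > all_runs.json
while :; do
  page=$(fetch_page "$offset")
  [ "$page" = "[]" ] && break        # assumed end-of-results marker
  printf '%s\n' "$page" >> all_runs.json
  offset=$((offset + 100))
done
```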

