Plan restrictions apply: Data export is only supported on LangSmith Plus and Enterprise tiers.
This guide shows how to:
- Create an export destination
- Create and configure an export job, including scheduled exports and field filtering
- Monitor export progress
1. Create a destination
The destination tells LangSmith where to write your exported data. Before making this request, you will need:
- Your LangSmith API key and workspace ID.
- An S3 or S3-compatible bucket with write access granted to LangSmith (refer to Permissions required).
- The bucket name, prefix, and either the AWS region (for AWS S3) or the endpoint URL (for GCS, MinIO, or other S3-compatible providers).
- An access key and secret key for the bucket.
Save the id from the response; you will need it when creating an export job.
Refer to Manage bulk export destinations for permissions setup, provider-specific configuration (AWS S3, GCS, MinIO), and credential options.
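As a sketch of step 1, the request body can be assembled and sent as below. The endpoint path, the field names (bucket_name, display_name, and so on), and the base URL are assumptions for illustration; check the bulk export API reference for your deployment.

```python
import json
import urllib.request

def build_destination_payload(bucket, prefix, access_key, secret_key,
                              region=None, endpoint_url=None):
    """Assemble a destination request body from the prerequisites listed above.

    Set region for AWS S3, or endpoint_url for GCS, MinIO, and other
    S3-compatible providers (exactly one of the two).
    """
    if (region is None) == (endpoint_url is None):
        raise ValueError("set exactly one of region or endpoint_url")
    config = {"bucket_name": bucket, "prefix": prefix}
    if region is not None:
        config["region"] = region
    else:
        config["endpoint_url"] = endpoint_url
    return {
        "destination_type": "s3",
        "display_name": "traces-export",  # hypothetical name for illustration
        "config": config,
        "credentials": {
            "access_key_id": access_key,
            "secret_access_key": secret_key,
        },
    }

def create_destination(api_key, payload,
                       base_url="https://api.smith.langchain.com"):
    """POST the destination and return its id (endpoint path is an assumption)."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/bulk-exports/destinations",
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["id"]  # keep this id for step 2
```

The builder enforces the either/or choice between region and endpoint_url described above, so a misconfigured destination fails before any request is sent.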
2. Create an export job
An export job targets a specific project and date range. You will need:
- The destination id from the previous step.
- The project ID (session_id); copy this from the individual project view in the Tracing Projects list.
- A start_time and end_time in UTC ISO 8601 format.
start_time is inclusive and end_time is exclusive. The export will include all runs where run.start_time >= start_time and run.start_time < end_time.
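These boundary semantics can be expressed as a simple predicate (the datetimes are illustrative):

```python
from datetime import datetime, timezone

def run_included(run_start, start_time, end_time):
    # start_time is inclusive, end_time is exclusive
    return start_time <= run_start < end_time

start = datetime(2025, 7, 16, 0, 0, tzinfo=timezone.utc)
end = datetime(2025, 7, 17, 0, 0, tzinfo=timezone.utc)
assert run_included(start, start, end)    # a run starting exactly at start_time is exported
assert not run_included(end, start, end)  # a run starting exactly at end_time is not
```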
Save the id from the response to monitor the export’s progress.
You can optionally add a filter expression to narrow the set of runs exported. Refer to our filter query language and examples for syntax. Not setting the filter field will export all runs.
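Putting the pieces together, a one-time export job body might be assembled as below. The bulk_export_destination_id field name, the IDs, and the filter expression are assumptions for illustration; consult the API reference and the filter query language docs.

```python
def build_export_payload(destination_id, session_id, start_time, end_time,
                         filter_expr=None):
    """Request body for a one-time export job."""
    payload = {
        "bulk_export_destination_id": destination_id,  # id saved from step 1
        "session_id": session_id,   # project ID from the Tracing Projects list
        "start_time": start_time,   # inclusive, UTC ISO 8601
        "end_time": end_time,       # exclusive
    }
    if filter_expr is not None:
        payload["filter"] = filter_expr  # omit to export all runs
    return payload

job = build_export_payload(
    "dest-123", "proj-456",                          # hypothetical IDs
    "2025-07-16T00:00:00Z", "2025-07-17T00:00:00Z",
    filter_expr='eq(is_root, true)',                 # hypothetical filter expression
)
```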
Schedule recurring exports
Requires LangSmith Helm version >= 0.10.42 (application version >= 0.10.109). To create a scheduled export, set interval_hours and omit end_time:
- interval_hours must be between 1 and 168 (1 week) inclusive.
- end_time must be omitted for scheduled exports; it is still required for one-time exports.
- Each spawned export covers start_time to start_time + interval_hours, then advances by interval_hours for each subsequent run. Since end_time is exclusive, consecutive exports do not overlap.
- Spawned exports run at end_time + 10 minutes to account for runs submitted with end_time in the recent past.
- Spawned exports have the source_bulk_export_id attribute filled. If needed, cancel them separately; cancelling the source export does not cancel already-spawned exports.
- To stop a scheduled export, cancel it.
For example, with start_time=2025-07-16T00:00:00Z and interval_hours=6:
| Export | Start Time | End Time | Runs At |
|---|---|---|---|
| 1 | 2025-07-16T00:00:00Z | 2025-07-16T06:00:00Z | 2025-07-16T06:10:00Z |
| 2 | 2025-07-16T06:00:00Z | 2025-07-16T12:00:00Z | 2025-07-16T12:10:00Z |
| 3 | 2025-07-16T12:00:00Z | 2025-07-16T18:00:00Z | 2025-07-16T18:10:00Z |
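The schedule in the table above can be reproduced with a short sketch of the window arithmetic:

```python
from datetime import datetime, timedelta, timezone

def export_windows(start_time, interval_hours, count):
    """Yield (start, end, runs_at) for the first `count` spawned exports."""
    if not 1 <= interval_hours <= 168:
        raise ValueError("interval_hours must be between 1 and 168 inclusive")
    start = start_time
    for _ in range(count):
        end = start + timedelta(hours=interval_hours)  # end_time is exclusive
        yield start, end, end + timedelta(minutes=10)  # runs 10 min after the window closes
        start = end                                    # windows never overlap

t0 = datetime(2025, 7, 16, tzinfo=timezone.utc)
windows = list(export_windows(t0, 6, 3))  # matches the three rows in the table
```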
Limit exported fields
Requires LangSmith Helm version >= 0.12.11 (application version >= 0.12.42). Supported in both one-time and scheduled exports. To limit which fields are exported, set the export_fields parameter. When omitted, all fields are included.
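For example, to export only a handful of fields (the body shape mirrors a one-time export job with hypothetical IDs; the field names are taken from the tables below):

```python
payload = {
    "bulk_export_destination_id": "dest-123",  # hypothetical IDs
    "session_id": "proj-456",
    "start_time": "2025-07-16T00:00:00Z",
    "end_time": "2025-07-17T00:00:00Z",
    # only these fields will appear in the exported Parquet files
    "export_fields": ["id", "trace_id", "start_time", "status", "total_tokens"],
}
```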
Exportable fields
By default, bulk exports include the following fields for each run.
Identifiers & hierarchy:
| Field | Description |
|---|---|
| id | Run ID |
| tenant_id | Workspace/tenant ID |
| session_id | Project/session ID |
| trace_id | Trace ID |
| parent_run_id | Parent run ID |
| parent_run_ids | List of all parent run IDs |
| reference_example_id | Reference to example if part of a dataset |
Run metadata:
| Field | Description |
|---|---|
| name | Run name |
| run_type | Type of run (e.g., “chain”, “llm”, “tool”) |
| start_time | Start timestamp (UTC) |
| end_time | End timestamp (UTC) |
| status | Run status (e.g., “success”, “error”) |
| is_root | Whether this is a root-level run |
| dotted_order | Hierarchical ordering string |
| trace_tier | Trace tier/retention level |
Run data:
| Field | Description |
|---|---|
| inputs | Run inputs (JSON) |
| outputs | Run outputs (JSON) |
| error | Error message if failed |
| extra | Extra metadata (JSON) |
| events | Run events (JSON) |
Tags & feedback:
| Field | Description |
|---|---|
| tags | List of tags |
| feedback_stats | Feedback statistics (JSON). Refer to the following note for aggregation limitations. |
feedback_stats aggregation limitation: The feedback_stats field only includes value breakdowns for string-type feedback. Feedback with non-string values (numeric, boolean, complex types) is excluded from these breakdowns. To analyze non-string feedback values, export the raw feedback data separately.
Token usage & cost:
| Field | Description |
|---|---|
| total_tokens | Total token count |
| prompt_tokens | Prompt token count |
| completion_tokens | Completion token count |
| total_cost | Total cost |
| prompt_cost | Prompt cost |
| completion_cost | Completion cost |
| first_token_time | Time to first token |
Partitioning scheme
Data is exported into your bucket using a Hive-partitioned directory structure (path segments of the form key=value).
3. Monitor your export
Poll the export status using the id from the previous step. The status field in the response will be one of CREATED, RUNNING, COMPLETED, FAILED, CANCELLED, or TIMEDOUT. Exports may take some time depending on the volume of data. Once the status is COMPLETED, the Parquet files are available in your bucket.
Refer to Monitor and troubleshoot bulk exports for how to list runs, stop an export, and diagnose failures.
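A minimal polling loop, assuming a GET endpoint of the form /api/v1/bulk-exports/{id} (the path and base URL are assumptions; check the API reference for your deployment):

```python
import json
import time
import urllib.request

# terminal statuses listed above; CREATED and RUNNING mean "keep waiting"
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "CANCELLED", "TIMEDOUT"}

def poll_export(api_key, export_id,
                base_url="https://api.smith.langchain.com", interval=30):
    """Poll until the export reaches a terminal status and return it."""
    url = f"{base_url}/api/v1/bulk-exports/{export_id}"
    while True:
        req = urllib.request.Request(url, headers={"x-api-key": api_key})
        with urllib.request.urlopen(req) as resp:
            status = json.loads(resp.read())["status"]
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)  # CREATED or RUNNING: wait and re-check
```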

