- Monitoring export status and listing runs for a specific export.
- Listing all exports in your workspace.
- Stopping an export.
- Failure modes and retry policy, including automatic retry behavior, failure scenarios, status lifecycle, concurrency limits, and progress tracking.
- Troubleshooting failed exports.
For self-hosted and EU region deployments: update the LangSmith URL appropriately for self-hosted installations or organizations in the EU region in the requests below. For the EU region, use eu.api.smith.langchain.com.

Monitor export status
To monitor the status of an export job, send a GET request for the export, replacing {export_id} with the ID of the export you want to monitor. The response includes the current status of the specified export job.
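A status check can be sketched with cURL as follows (the `/api/v1/bulk-exports/{export_id}` path also appears in the troubleshooting section below; the base URL and `X-API-Key` auth header are assumptions to verify against your deployment):

```bash
curl --request GET \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/{export_id}' \
  --header 'X-API-Key: YOUR_API_KEY'
```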
List runs for an export
An export is typically broken up into multiple runs, each of which corresponds to a specific date partition to export. To list all runs associated with a specific export, query the export's runs endpoint via the List Runs API.

List all exports
To retrieve a list of all export jobs in your workspace, query the bulk exports list endpoint.

Stop an export
To stop an existing export, cancel it via the API, replacing {export_id} with the ID of the export you wish to cancel. Note that a job cannot be restarted once it has been cancelled; you will need to create a new export job instead.
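A cancellation request can be sketched as a PATCH that sets the export's status (the HTTP method, the payload, and the `X-API-Key` header here are assumptions; verify them against the LangSmith API reference):

```bash
curl --request PATCH \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/{export_id}' \
  --header 'X-API-Key: YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{"status": "Cancelled"}'
```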
Failure modes and retry policy
LangSmith bulk exports handle transient failures and infrastructure issues automatically to ensure resilience. Each bulk export is divided into multiple runs, where each run processes data for a specific date partition (typically organized by day). Runs are processed independently, which enables:

- Parallel processing of different time periods.
- Independent retry logic for each run.
- Resumption from specific checkpoints if interrupted.
Automatic retry behavior
Export jobs automatically retry transient failures with the following behavior:

- Maximum retry attempts: 20 retries per run (subject to change).
- Retry delay: 30 seconds between attempts (fixed, no exponential backoff).
- Run timeout: 4 hours maximum per run.
- Overall workflow timeout: 72 hours for the entire export.
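The fixed-delay policy above can be sketched in Python. This is a hypothetical `run_with_retries` helper for illustration, not LangSmith code; it shows the shape of 20 retries with a flat 30-second delay and no exponential backoff:

```python
import time

def run_with_retries(run_fn, max_retries=20, retry_delay=30, sleep=time.sleep):
    """Retry run_fn with a fixed delay between attempts (no backoff).

    Illustrative sketch of the documented policy: up to max_retries
    retries per run, with a flat retry_delay seconds between attempts.
    """
    attempt = 0
    while True:
        try:
            return run_fn()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                # Retries exhausted: this run (and thus the export) fails.
                raise
            sleep(retry_delay)  # fixed 30-second delay, no exponential backoff
```

Because the delay is fixed rather than exponential, a run that hits all 20 retries spends only about ten extra minutes waiting, well inside the 4-hour per-run timeout.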
Failure scenarios
| Failure type | Cause | Automatic retry? | Action required |
|---|---|---|---|
| Infrastructure interruption | Deployments, server restarts, worker crashes | Yes, automatically requeued with remaining retries. | None, jobs resume automatically. |
| Run timeout | Single run exceeds 4-hour limit | Yes, retried up to 20 times (subject to change). | If persistent, narrow date range, add filters, or limit the exported fields. |
| Workflow timeout | Entire export exceeds 72 hours | No | Reduce export scope (date range, filters) or break into smaller exports. |
| Storage/destination errors | Invalid credentials, missing bucket, permission issues | No | Fix destination configuration and create new export. |
| Destination deleted | Bucket removed during export | No | Recreate destination and restart export. |
| Terminal processing errors | Data serialization issues, resource exhaustion | Yes, retried up to 20 times (subject to change). | Check run error details; may require investigation. |
Any single run failure (after all retries are exhausted) causes the entire export to fail.
Export status lifecycle
Exports can have the following statuses:

| Status | Description |
|---|---|
| CREATED | Export has been created but has not yet started processing. |
| RUNNING | Export is actively processing runs. |
| COMPLETED | All runs successfully exported. |
| FAILED | One or more runs failed after exhausting retries. |
| CANCELLED | Export was manually cancelled by the user. |
| TIMEDOUT | Export exceeded the 72-hour workflow timeout. |
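Waiting for an export to finish means polling until the status reaches one of the terminal values above. A minimal sketch, using a hypothetical `wait_for_export` helper where `fetch_status` is any callable you supply that returns the current status string (for example, by calling the export status endpoint and reading the status field):

```python
import time

# Terminal statuses from the lifecycle table above.
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "CANCELLED", "TIMEDOUT"}

def wait_for_export(fetch_status, poll_interval=60, sleep=time.sleep):
    """Poll fetch_status() until the export reaches a terminal status.

    fetch_status is a caller-supplied callable that returns the
    export's current status string on each call.
    """
    while True:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        sleep(poll_interval)  # wait before the next status check
```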
Concurrency and rate limits
To ensure system stability, exports are subject to the following limits:

- Maximum concurrent runs per export: 5
- Maximum concurrent exports per workspace: 3
Progress tracking and resumability
The export system maintains detailed progress metadata for each run:

- Latest cursor position in the data stream.
- Number of rows exported.
- List of Parquet files written.

This metadata enables:

- Graceful resumption: if a run is interrupted (e.g., by a deployment), it resumes from the last checkpoint rather than starting over.
- Progress monitoring: you can track how much data has been exported through the API.
- Efficient retries: failed runs don't re-export data that was already successfully written.
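The checkpointing scheme above can be sketched as follows. This is a hypothetical `export_partition` helper, not LangSmith internals; `read_page`, `write_file`, and the checkpoint keys are illustrative assumptions showing how a cursor plus per-page persistence yields resumable, non-duplicating retries:

```python
def export_partition(read_page, write_file, checkpoint):
    """Sketch of checkpointed, resumable export of one date partition.

    read_page(cursor) returns (rows, next_cursor); empty rows means done.
    write_file(rows) writes one Parquet file and returns its name.
    checkpoint is a mutable dict persisted between attempts.
    """
    cursor = checkpoint.get("cursor")  # resume point (None = start of partition)
    rows_exported = checkpoint.get("rows_exported", 0)
    files = checkpoint.setdefault("files", [])
    while True:
        rows, next_cursor = read_page(cursor)
        if not rows:
            break
        files.append(write_file(rows))
        rows_exported += len(rows)
        cursor = next_cursor
        # Persist progress after every page so an interrupted or retried
        # run resumes here instead of re-exporting earlier pages.
        checkpoint.update(cursor=cursor, rows_exported=rows_exported)
    return checkpoint
```

A retried run that starts from a saved checkpoint re-reads from the last cursor, so files written before the interruption are never produced twice.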
Troubleshooting failed exports
If your export fails, follow these steps:

- Check the export status: use the `GET /api/v1/bulk-exports/{export_id}` endpoint to retrieve the export details and status.
- Review run errors: you can monitor your runs using the List Runs API. Each run includes an `errors` field with detailed error messages keyed by retry attempt (e.g., `retry_0`, `retry_1`).
- Verify destination access: ensure your destination bucket still exists and that its credentials are valid.
- Check run size: if you see timeout errors, your date partitions may contain too much data. It may help to limit the exported fields.
- Review system limits: ensure you're not hitting concurrency limits (5 runs per export, 3 exports per workspace).
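As a sketch of the first two steps, the per-run error details can be pulled out with cURL and jq. The runs endpoint path and the `.runs[]` response shape below are assumptions; verify both against the LangSmith API reference:

```bash
curl --request GET \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/{export_id}/runs' \
  --header 'X-API-Key: YOUR_API_KEY' \
  | jq '.runs[] | {id, status, errors}'
```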