For self-hosted and EU region deployments

Update the LangSmith URL appropriately for self-hosted installations or organizations in the EU region in the requests below. For the EU region, use `eu.api.smith.langchain.com`.

This page covers:
- The configuration fields needed to set up a destination.
- Required bucket permissions for AWS S3 and GCS.
- How to create a destination via the API, including provider-specific examples and credential options.
- How to rotate destination credentials without recreating the destination.
- How to debug destination errors.
Configuration fields
The following information is needed to configure a destination:

- Bucket Name: The name of the S3 bucket where the data will be exported to.
- Prefix: The root prefix within the bucket where the data will be exported to.
- S3 Region: The region of the bucket (required for AWS S3 buckets).
- Endpoint URL: The endpoint URL for the S3 bucket (required for S3 API compatible buckets).
- Access Key: The access key for the S3 bucket.
- Secret Key: The secret key for the S3 bucket.
- Include Bucket in Prefix (optional): Whether to include the bucket name as part of the path prefix. Defaults to `true`. Set to `false` when using virtual-hosted style endpoints where the bucket name is already in the endpoint URL.
Permissions required
Both the `backend` and `queue` services require write access to the destination bucket:

- The `backend` service attempts to write a test file to the destination bucket when the export destination is created. It will delete the test file if it has permission to do so (delete access is optional).
- The `queue` service is responsible for bulk export execution and uploading the files to the bucket.
AWS S3 permissions
The minimal AWS S3 permission policy relies on the following permissions:

- `s3:PutObject` (required): Allows writing Parquet files to the bucket.
- `s3:DeleteObject` (optional): Cleans up test files during destination creation. If this permission isn't present, the file is left under the `/tmp` directory after destination creation.
- `s3:GetObject` (optional but recommended): Verifies file size after writing.
- `s3:AbortMultipartUpload` (optional but recommended): Avoids dangling multipart uploads.
Google Cloud Storage (GCS) permissions
When using GCS with the S3-compatible XML API, the following IAM permissions are required:

- `storage.objects.create` (required): Allows writing files to the bucket.
- `storage.objects.delete` (optional): Cleans up test files during destination creation. If this permission isn't present, the file is left under the `/tmp` directory after destination creation.
- `storage.objects.get` (optional but recommended): Verifies file size after writing.
Create a destination
The following example demonstrates how to create a destination using cURL. Replace the placeholder values with your actual configuration details. Credentials are stored securely in an encrypted form in our system. Save the returned `id` to reference this destination in subsequent bulk export operations.
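A sketch of such a request, assuming the destination is created with `POST /api/v1/bulk-exports/destinations` (consistent with the `PATCH /api/v1/bulk-exports/destinations/{destination_id}` path used for credential rotation below) and API-key auth via the `X-API-Key` header; payload field names such as `destination_type`, `display_name`, and `bucket_name` are illustrative assumptions based on the configuration fields above:

```shell
# Create a bulk export destination (sketch; adjust the URL for
# self-hosted or EU-region deployments).
curl --request POST \
  --url "https://api.smith.langchain.com/api/v1/bulk-exports/destinations" \
  --header "Content-Type: application/json" \
  --header "X-API-Key: $LANGSMITH_API_KEY" \
  --data '{
    "destination_type": "s3",
    "display_name": "My S3 Destination",
    "config": {
      "bucket_name": "my-bucket",
      "prefix": "exports",
      "region": "us-east-1"
    },
    "credentials": {
      "access_key_id": "YOUR_ACCESS_KEY",
      "secret_access_key": "YOUR_SECRET_KEY"
    }
  }'
```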
If you receive an error while creating a destination, see Debug destination errors for details on how to debug this.
Credentials configuration
Requires LangSmith Helm version >= 0.10.34 (application version >= 0.10.91)

In addition to the standard `access_key_id` and `secret_access_key`:

- To use temporary credentials that include an AWS session token, additionally provide the `credentials.session_token` key when creating the bulk export destination.
- (Self-hosted only): To use environment-based credentials such as with AWS IAM Roles for Service Accounts (IRSA), omit the `credentials` key from the request when creating the bulk export destination. In this case, the standard Boto3 credentials locations will be checked in the order defined by the library.
AWS S3 bucket
For AWS S3, you can leave off the `endpoint_url` and supply the region that matches the region of your bucket.
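A hedged sketch for AWS S3, under the same assumed endpoint and payload shape as above (note there is no `endpoint_url` in the config):

```shell
# AWS S3 destination: supply the bucket's region, omit endpoint_url.
curl --request POST \
  --url "https://api.smith.langchain.com/api/v1/bulk-exports/destinations" \
  --header "Content-Type: application/json" \
  --header "X-API-Key: $LANGSMITH_API_KEY" \
  --data '{
    "destination_type": "s3",
    "display_name": "AWS S3 Destination",
    "config": {
      "bucket_name": "my-s3-bucket",
      "prefix": "langsmith-exports",
      "region": "us-west-2"
    },
    "credentials": {
      "access_key_id": "YOUR_ACCESS_KEY",
      "secret_access_key": "YOUR_SECRET_KEY"
    }
  }'
```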
Google GCS XML S3 compatible bucket
When using Google's GCS bucket, you need to use the XML S3 compatible API, and supply the `endpoint_url`, which is typically `https://storage.googleapis.com`.
Here is an example of the API request when using the GCS XML API which is compatible with S3:
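A sketch under the same endpoint and field-name assumptions as the earlier create example; for the GCS XML API, the access key and secret are typically GCS HMAC keys:

```shell
# GCS destination via the S3-compatible XML API.
curl --request POST \
  --url "https://api.smith.langchain.com/api/v1/bulk-exports/destinations" \
  --header "Content-Type: application/json" \
  --header "X-API-Key: $LANGSMITH_API_KEY" \
  --data '{
    "destination_type": "s3",
    "display_name": "GCS Destination",
    "config": {
      "bucket_name": "my-gcs-bucket",
      "prefix": "langsmith-exports",
      "endpoint_url": "https://storage.googleapis.com"
    },
    "credentials": {
      "access_key_id": "GCS_HMAC_ACCESS_KEY",
      "secret_access_key": "GCS_HMAC_SECRET"
    }
  }'
```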
S3-compatible bucket with virtual-hosted style endpoint
If your endpoint URL already includes the bucket name (virtual-hosted style), set `include_bucket_in_prefix` to `false` to avoid duplicating the bucket name in the path:
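A sketch under the same assumptions as the earlier create examples; placing `include_bucket_in_prefix` inside `config`, and the virtual-hosted endpoint URL itself, are illustrative:

```shell
# Virtual-hosted style endpoint: the bucket name is already in the
# endpoint URL, so disable include_bucket_in_prefix.
curl --request POST \
  --url "https://api.smith.langchain.com/api/v1/bulk-exports/destinations" \
  --header "Content-Type: application/json" \
  --header "X-API-Key: $LANGSMITH_API_KEY" \
  --data '{
    "destination_type": "s3",
    "display_name": "Virtual-Hosted Destination",
    "config": {
      "bucket_name": "my-bucket",
      "prefix": "langsmith-exports",
      "endpoint_url": "https://my-bucket.s3.us-east-1.amazonaws.com",
      "include_bucket_in_prefix": false
    },
    "credentials": {
      "access_key_id": "YOUR_ACCESS_KEY",
      "secret_access_key": "YOUR_SECRET_KEY"
    }
  }'
```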
Rotate destination credentials
Use `PATCH /api/v1/bulk-exports/destinations/{destination_id}` to update the credentials on an existing destination. This lets you rotate or replace credentials without recreating the destination or its associated bulk exports. The destination configuration (bucket, prefix, region, endpoint, etc.) is unchanged; only the credentials are replaced.
Credential rotation behavior
The changeover is not instantaneous:

- New bulk export runs use the updated credentials immediately after the PATCH completes.
- Already running bulk export runs continue using the previous credentials until they finish.
- Both sets of credentials are active simultaneously during the transition period. This window lasts up to the maximum runtime of a single bulk export run.
Request
The `session_token` field is optional; include it for temporary credentials.
Required permission: `WORKSPACES_MANAGE`
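The PATCH path comes from the description above; the exact credentials payload shape is an assumption. A sketch:

```shell
# Rotate the credentials on an existing destination.
# session_token is optional and only needed for temporary credentials.
curl --request PATCH \
  --url "https://api.smith.langchain.com/api/v1/bulk-exports/destinations/$DESTINATION_ID" \
  --header "Content-Type: application/json" \
  --header "X-API-Key: $LANGSMITH_API_KEY" \
  --data '{
    "credentials": {
      "access_key_id": "NEW_ACCESS_KEY",
      "secret_access_key": "NEW_SECRET_KEY",
      "session_token": "OPTIONAL_SESSION_TOKEN"
    }
  }'
```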
Before storing new credentials, LangSmith validates them by performing a test write to the bucket using the existing destination configuration. The request fails with 400 if the credentials do not have sufficient write permissions. If the request fails, refer to Debug destination errors.
Response
Returns the updated destination object. Credential values are never returned; only the credential field names are included in the response under `credentials_keys`.
Rotation checklist
- Provision new credentials in your cloud provider with write access to the destination bucket and prefix.
- Call the PATCH endpoint with the new credentials. LangSmith validates them before saving.
- Keep old credentials active until all in-flight bulk export runs finish (up to the maximum run duration).
- Revoke old credentials once no runs are using them.
Debug destination errors
The destinations API endpoint validates that the destination and credentials are valid and that write access to the bucket is present. If you receive an error and would like to debug it, you can use the AWS CLI to test connectivity to the bucket: you should be able to write a file with the CLI using the same data that you supplied to the destinations API above. For AWS S3 you can use the CLI directly; for S3-compatible buckets, include the `--endpoint-url` option.
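A test write for AWS S3 might look like this (bucket, prefix, and region are placeholders; substitute the values you supplied to the API):

```shell
# Use the same credentials you supplied to the destinations API.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

# Attempt a test write to the destination bucket and prefix.
echo "test" > test.txt
aws s3 cp test.txt s3://my-bucket/my-prefix/test.txt --region us-east-1
```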
For GCS, the `endpoint_url` is typically `https://storage.googleapis.com`:
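The same test write against GCS's S3-compatible endpoint (bucket and prefix are placeholders; the exported variables should hold your GCS HMAC key pair):

```shell
# GCS HMAC credentials exported via the standard AWS variables.
export AWS_ACCESS_KEY_ID="GCS_HMAC_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="GCS_HMAC_SECRET"

echo "test" > test.txt
aws s3 cp test.txt s3://my-gcs-bucket/my-prefix/test.txt \
  --endpoint-url https://storage.googleapis.com
```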
Common errors
Here are some common errors:

| Error | Description |
|---|---|
| Access denied | The blob store credentials or bucket are not valid. This error occurs when the provided access key and secret key combination doesn’t have the necessary permissions to access the specified bucket or perform the required operations. |
| Bucket is not valid | The specified blob store bucket is not valid. This error is thrown when the bucket doesn’t exist or there is not enough access to perform writes on the bucket. |
| Key ID you provided does not exist | The blob store credentials provided are not valid. This error occurs when the access key ID used for authentication is not a valid key. |
| Invalid endpoint | The `endpoint_url` provided is invalid. Only S3-compatible endpoints are supported, for example `https://storage.googleapis.com` for GCS or `https://play.min.io` for MinIO. If using AWS, you should omit the `endpoint_url`. |