For self-hosted and EU region deployments: update the LangSmith URL in the requests below as appropriate for your self-hosted installation or EU-region organization. For the EU region, use eu.api.smith.langchain.com.
A destination is a named configuration that tells LangSmith where to write exported trace data. You create a destination once, then reference it by ID when creating export jobs. LangSmith currently supports S3 and any S3-compatible bucket (such as GCS or MinIO) as a destination. Exported data is written in the Parquet columnar format and contains fields equivalent to the Run data format.

Configuration fields

The following information is needed to configure a destination:
  • Bucket Name: The name of the S3 bucket where the data will be exported to.
  • Prefix: The root prefix within the bucket where the data will be exported to.
  • S3 Region: The region of the bucket—required for AWS S3 buckets.
  • Endpoint URL: The endpoint URL for the S3 bucket—required for S3 API compatible buckets.
  • Access Key: The access key for the S3 bucket.
  • Secret Key: The secret key for the S3 bucket.
  • Include Bucket in Prefix (optional): Whether to include the bucket name as part of the path prefix. Defaults to true. Set to false when using virtual-hosted style endpoints where the bucket name is already in the endpoint URL.
Any S3-compatible bucket is supported. For non-AWS buckets such as GCS or MinIO, you must also provide the endpoint URL.
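Put together, the fields above form the config and credentials sections of the destination payload. A minimal Python sketch (field names taken from the API examples later on this page; all values are placeholders):

```python
import json

# Placeholder values -- replace with your own bucket details.
config = {
    "bucket_name": "your-s3-bucket-name",
    "prefix": "root_folder_prefix",
    "region": "us-east-1",              # required for AWS S3 buckets
    "endpoint_url": None,               # required only for S3-compatible buckets
    "include_bucket_in_prefix": True,   # optional, defaults to True
}

payload = {
    "destination_type": "s3",
    "display_name": "My S3 Destination",
    # Drop unset optional fields so the request body stays minimal.
    "config": {k: v for k, v in config.items() if v is not None},
    "credentials": {
        "access_key_id": "YOUR_S3_ACCESS_KEY_ID",
        "secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY",
    },
}

print(json.dumps(payload, indent=2))
```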

Permissions required

Both the backend and queue services require write access to the destination bucket:
  • The backend service attempts to write a test file to the destination bucket when the export destination is created. It will delete the test file if it has permission to do so (delete access is optional).
  • The queue service is responsible for bulk export execution and uploading the files to the bucket.

AWS S3 permissions

The minimal AWS S3 permission policy relies on the following permissions:
  • s3:PutObject (required): Allows writing Parquet files to the bucket.
  • s3:DeleteObject (optional): Cleans up test files during destination creation. If this permission isn’t present, the test file is left under the tmp/ prefix in the bucket after destination creation.
  • s3:GetObject (optional but recommended): Verifies file size after writing.
  • s3:AbortMultipartUpload (optional but recommended): Avoids dangling multipart uploads.
Minimal IAM policy example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
Recommended IAM policy example with additional permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}

Google Cloud Storage (GCS) permissions

When using GCS with the S3-compatible XML API, the following IAM permissions are required:
  • storage.objects.create (required): Allows writing files to the bucket.
  • storage.objects.delete (optional): Cleans up test files during destination creation. If this permission isn’t present, the test file is left under the tmp/ prefix in the bucket after destination creation.
  • storage.objects.get (optional but recommended): Verifies file size after writing.
These permissions can be granted through the “Storage Object Admin” predefined role or a custom role.

Create a destination

The following example demonstrates how to create a destination using cURL. Replace the placeholder values with your actual configuration details. Note that credentials will be stored securely in an encrypted form in our system.
curl --request POST \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "destination_type": "s3",
    "display_name": "My S3 Destination",
    "config": {
      "bucket_name": "your-s3-bucket-name",
      "prefix": "root_folder_prefix",
      "region": "your aws s3 region",
      "endpoint_url": "your endpoint url for s3 compatible buckets",
      "include_bucket_in_prefix": true
    },
    "credentials": {
      "access_key_id": "YOUR_S3_ACCESS_KEY_ID",
      "secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
    }
  }'
Use the returned id to reference this destination in subsequent bulk export operations. If you receive an error while creating a destination, see Debug destination errors below.
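The same call can be scripted. A standard-library sketch (URL and headers mirror the cURL example above; payload is the JSON body shown there, and the helper simply reads the id field the API returns):

```python
import json
import urllib.request


def create_destination(payload: dict, api_key: str, workspace_id: str) -> str:
    """POST the destination payload and return the new destination's id."""
    req = urllib.request.Request(
        "https://api.smith.langchain.com/api/v1/bulk-exports/destinations",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": api_key,
            "X-Tenant-Id": workspace_id,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return parse_destination_id(json.load(resp))


def parse_destination_id(response: dict) -> str:
    # The response body includes an `id` field identifying the destination.
    return response["id"]
```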

Credentials configuration

Requires LangSmith Helm version >= 0.10.34 (application version >= 0.10.91)
We support the following additional credentials formats besides static access_key_id and secret_access_key:
  • To use temporary credentials that include an AWS session token, additionally provide the credentials.session_token key when creating the bulk export destination.
  • (Self-hosted only): To use environment-based credentials such as with AWS IAM Roles for Service Accounts (IRSA), omit the credentials key from the request when creating the bulk export destination. In this case, the standard Boto3 credentials locations will be checked in the order defined by the library.
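The three credential shapes described above might look like this in a request body (keys per the API examples on this page; all values are placeholders):

```python
# Static access key pair (the default shown throughout this page).
static_creds = {
    "access_key_id": "YOUR_S3_ACCESS_KEY_ID",
    "secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY",
}

# Temporary credentials: add the session token alongside the key pair.
temporary_creds = {
    "access_key_id": "YOUR_TEMPORARY_ACCESS_KEY_ID",
    "secret_access_key": "YOUR_TEMPORARY_SECRET_ACCESS_KEY",
    "session_token": "YOUR_AWS_SESSION_TOKEN",
}

# Environment-based credentials (self-hosted only, e.g. IRSA):
# omit the "credentials" key from the request entirely, and the
# standard Boto3 credential chain is consulted instead.
payload_with_env_creds = {
    "destination_type": "s3",
    "display_name": "IRSA Destination",
    "config": {
        "bucket_name": "my_bucket",
        "prefix": "data_exports",
        "region": "us-east-1",
    },
    # no "credentials" key here
}
```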

AWS S3 bucket

For AWS S3, you can leave off the endpoint_url and supply the region that matches the region of your bucket.
curl --request POST \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "destination_type": "s3",
    "display_name": "My AWS S3 Destination",
    "config": {
      "bucket_name": "my_bucket",
      "prefix": "data_exports",
      "region": "us-east-1"
    },
    "credentials": {
      "access_key_id": "YOUR_S3_ACCESS_KEY_ID",
      "secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
    }
  }'

Google GCS XML S3 compatible bucket

When using a Google Cloud Storage bucket, you need to use the S3-compatible XML API and supply the endpoint_url, which is typically https://storage.googleapis.com. For example:
curl --request POST \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "destination_type": "s3",
    "display_name": "My GCS Destination",
    "config": {
      "bucket_name": "my_bucket",
      "prefix": "data_exports",
      "endpoint_url": "https://storage.googleapis.com",
      "include_bucket_in_prefix": true
    },
    "credentials": {
      "access_key_id": "YOUR_S3_ACCESS_KEY_ID",
      "secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
    }
  }'
See the Google Cloud documentation on S3 interoperability for more information.

S3-compatible bucket with virtual-hosted style endpoint

If your endpoint URL already includes the bucket name (virtual-hosted style), set include_bucket_in_prefix to false to avoid duplicating the bucket name in the path:
curl --request POST \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "destination_type": "s3",
    "display_name": "My Virtual-Hosted Destination",
    "config": {
      "bucket_name": "my_bucket",
      "prefix": "data_exports",
      "endpoint_url": "https://my_bucket.s3.us-east-1.amazonaws.com",
      "include_bucket_in_prefix": false
    },
    "credentials": {
      "access_key_id": "YOUR_S3_ACCESS_KEY_ID",
      "secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
    }
  }'
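To see why this flag matters, here is a purely illustrative sketch of the path logic it controls (the helper and exact object layout are hypothetical; the point is where the bucket-name segment lands):

```python
def object_url(endpoint_url: str, bucket: str, prefix: str,
               include_bucket_in_prefix: bool, key: str) -> str:
    """Illustrative only: show where the bucket name lands in the final path."""
    parts = [prefix, key]
    if include_bucket_in_prefix:
        parts.insert(0, bucket)
    return f"{endpoint_url}/" + "/".join(parts)


# With a virtual-hosted style endpoint the bucket is already in the
# hostname, so including it again in the prefix duplicates it.
vhost = "https://my_bucket.s3.us-east-1.amazonaws.com"
print(object_url(vhost, "my_bucket", "data_exports", True, "file.parquet"))
print(object_url(vhost, "my_bucket", "data_exports", False, "file.parquet"))
```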

Debug destination errors

The destinations API endpoint validates that the destination and credentials are valid and that write access to the bucket is present. If you receive an error and would like to debug it, you can use the AWS CLI to test connectivity to the bucket: you should be able to write a file with the CLI using the same details that you supplied to the destinations API above.

AWS S3:
aws configure

# set the same access key credentials and region as you used for the destination
> AWS Access Key ID: <access_key_id>
> AWS Secret Access Key: <secret_access_key>
> Default region name [us-east-1]: <region>

# List buckets
aws s3 ls

# test write permissions
touch ./test.txt
aws s3 cp ./test.txt s3://<bucket-name>/tmp/test.txt
S3-compatible buckets (such as GCS): supply the endpoint URL with the --endpoint-url option. For GCS, the endpoint_url is typically https://storage.googleapis.com:
aws configure

# set the same access key credentials and region as you used for the destination
> AWS Access Key ID: <access_key_id>
> AWS Secret Access Key: <secret_access_key>
> Default region name [us-east-1]: <region>

# List buckets
aws s3 ls --endpoint-url=<endpoint_url>

# test write permissions
touch ./test.txt
aws s3 cp ./test.txt s3://<bucket-name>/tmp/test.txt --endpoint-url=<endpoint_url>

Common errors

Here are some common errors:
  • Access denied: The blob store credentials or bucket are not valid. This error occurs when the provided access key and secret key combination doesn’t have the necessary permissions to access the specified bucket or perform the required operations.
  • Bucket is not valid: The specified blob store bucket is not valid. This error is thrown when the bucket doesn’t exist or there is not enough access to perform writes on the bucket.
  • Key ID you provided does not exist: The blob store credentials provided are not valid. This error occurs when the access key ID used for authentication is not a valid key.
  • Invalid endpoint: The endpoint_url provided is invalid. Only S3-compatible endpoints are supported, for example https://storage.googleapis.com for GCS or https://play.min.io for MinIO. If using AWS S3, you should omit the endpoint_url.