
# Use environment variables for model providers

<Note>
  This feature is only available on Helm chart versions 0.10.27 (application version 0.10.74) and later.
</Note>

Many model providers support setting credentials and other configuration options through environment variables, which is useful for self-hosted deployments where you want to avoid hardcoding sensitive information in code or configuration files. In LangSmith, most model interactions go through the `playground` service, so you can set these environment variables directly on its pod instead of entering credentials in the UI.

## Requirements

* A self-hosted LangSmith instance with the `playground` service running.
* The provider you want to configure must support environment variables for configuration. Check the provider's Chat Model [documentation](https://docs.langchain.com/oss/python/integrations/providers/overview) for more information.
* Any secrets or roles you want to attach to the `playground` service.
  * Note that for [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) you may need to grant the `langsmith-playground` service account the necessary permissions to access the secrets or roles in your cloud provider.

## Configuration

With the above in place, you can configure your LangSmith instance to use environment variables for model providers. Do this by modifying the `langsmith_config.yaml` file for a Helm chart installation or the `docker-compose.yaml` file for a Docker installation.

<CodeGroup>
  ```yaml Helm theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  playground:
    deployment:
      extraEnv:
        - name: OPENAI_BASE_URL
          value: https://<my_proxy_url>
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: <your_secret_name>
              key: api_key
    serviceAccount: # Can be useful if you want to use IRSA or workload identity
      annotations:
        eks.amazonaws.com/role-arn: <your_role_arn>
  ```

  ```yaml Docker theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  # In your docker-compose.yaml file
  langchain-playground:
    environment:
      # ... other environment variables
      - OPENAI_BASE_URL=https://<my_proxy_url>
      - OPENAI_API_KEY=<your_key> # This will be set in the .env file
  ```
</CodeGroup>
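After applying the configuration, you can confirm the variables actually reached the pod. A quick check for a Kubernetes installation (assuming the deployment is named `langsmith-playground`; adjust the name and namespace to match your release) might look like:

```shell
# List the environment of the running playground pod and filter for the
# provider variables set via extraEnv. Secret-backed values are resolved
# at pod start, so they appear here like any other variable.
kubectl exec deploy/langsmith-playground -- printenv | grep -E 'OPENAI_(API_KEY|BASE_URL)'
```

If the variables are missing, check that the Helm values were applied to the correct release and that the referenced secret exists in the same namespace.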

## VertexAI configuration

You can configure VertexAI credentials for the playground service using either environment variables with secrets or workload identity (GCP Workload Identity for GKE or AWS IRSA for EKS).

### Using secrets

Configure VertexAI credentials using Kubernetes secrets:

<CodeGroup>
  ```yaml Helm theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  playground:
    deployment:
      extraEnv:
        # Playground-specific secret (recommended)
        - name: GOOGLE_VERTEX_AI_WEB_CREDENTIALS
          valueFrom:
            secretKeyRef:
              name: gcp-vertexai-secret
              key: credentials_json  # Your full service account JSON as string
        # Standard fallback option
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/gcp-key.json
        # Optional: Set project/location if not in model config
        - name: GOOGLE_CLOUD_PROJECT
          value: "your-gcp-project-id"
        - name: VERTEXAI_PROJECT_ID
          value: "your-gcp-project-id"
        - name: VERTEXAI_LOCATION
          value: "us-central1"
      extraVolumeMounts:
        - name: gcp-secret-volume
          mountPath: /secrets
          readOnly: true
      extraVolumes:
        - name: gcp-secret-volume
          secret:
            secretName: gcp-key-json  # JSON file secret
            defaultMode: 0444
  ```

  ```yaml Docker theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  # In your docker-compose.yaml file
  langchain-playground:
    environment:
      # ... other environment variables
      - GOOGLE_VERTEX_AI_WEB_CREDENTIALS=<your_service_account_json>  # Full JSON as string
      # Or use file path
      - GOOGLE_APPLICATION_CREDENTIALS=/secrets/gcp-key.json
      - GOOGLE_CLOUD_PROJECT=your-gcp-project-id
      - VERTEXAI_PROJECT_ID=your-gcp-project-id
      - VERTEXAI_LOCATION=us-central1
    volumes:
      - ./gcp-key.json:/secrets/gcp-key.json:ro
  ```
</CodeGroup>
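The secrets referenced above must exist in the playground's namespace before the pod starts. One way to create them, assuming your service account key is in a local file named `service-account.json` (the secret and key names below match the Helm example and are otherwise illustrative):

```shell
# Secret exposing the full service-account JSON as a string value,
# consumed via secretKeyRef as GOOGLE_VERTEX_AI_WEB_CREDENTIALS.
kubectl create secret generic gcp-vertexai-secret \
  --from-literal=credentials_json="$(cat service-account.json)"

# Secret holding the JSON as a file, mounted at /secrets/gcp-key.json
# and referenced by GOOGLE_APPLICATION_CREDENTIALS.
kubectl create secret generic gcp-key-json \
  --from-file=gcp-key.json=service-account.json
```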

### Using workload identity

You can configure the playground service account to use workload identity to assume a GCP service account role without storing credentials. This is the recommended approach for GKE clusters.

#### GCP Workload Identity (GKE)

For GKE clusters, use GCP Workload Identity:

<CodeGroup>
  ```yaml Helm theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  playground:
    deployment:
      extraEnv:
        # Optional: Set project/location if not in model config
        - name: GOOGLE_CLOUD_PROJECT
          value: "your-gcp-project-id"
        - name: VERTEXAI_PROJECT_ID
          value: "your-gcp-project-id"
        - name: VERTEXAI_LOCATION
          value: "us-central1"
      # No credentials needed - pod assumes GCP SA role via annotation
    serviceAccount:
      create: true  # Enable if not exists
      annotations:
        iam.gke.io/gcp-service-account: "vertexai-sa@your-gcp-project.iam.gserviceaccount.com"
  ```
</CodeGroup>

<Note>
  When using GCP Workload Identity, ensure the GCP service account has the required VertexAI permissions (e.g., `roles/aiplatform.user`).
</Note>
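Workload Identity also requires an IAM binding that lets the playground's Kubernetes service account impersonate the GCP service account. A sketch of the `gcloud` side, assuming the names from the example above (replace `<namespace>` with the namespace of your LangSmith release):

```shell
# Allow the langsmith-playground Kubernetes service account to act as the
# GCP service account via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
  vertexai-sa@your-gcp-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:your-gcp-project.svc.id.goog[<namespace>/langsmith-playground]"

# Grant the GCP service account the VertexAI permissions it needs.
gcloud projects add-iam-policy-binding your-gcp-project \
  --member "serviceAccount:vertexai-sa@your-gcp-project.iam.gserviceaccount.com" \
  --role roles/aiplatform.user
```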

#### AWS IRSA (EKS)

For EKS clusters, you can use AWS IRSA to assume a GCP service account role:

<CodeGroup>
  ```yaml Helm theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  playground:
    deployment:
      extraEnv:
        # Optional: Set project/location if not in model config
        - name: GOOGLE_CLOUD_PROJECT
          value: "your-gcp-project-id"
        - name: VERTEXAI_PROJECT_ID
          value: "your-gcp-project-id"
        - name: VERTEXAI_LOCATION
          value: "us-central1"
      # No credentials needed - pod assumes GCP SA role via AWS IAM role
    serviceAccount:
      create: true  # Enable if not exists
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::<account>:role/LangSmith-VertexAI-Role
  ```
</CodeGroup>

<Note>
  When using AWS IRSA, ensure your AWS IAM role has the necessary permissions to assume the GCP service account role, and that the GCP service account has the required VertexAI permissions.
</Note>
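Bridging an AWS IAM role to a GCP service account is done through GCP Workload Identity Federation. The exact setup depends on your attribute mapping and provider configuration, but a hedged sketch of the main steps (pool, provider, and project names are illustrative) is:

```shell
# Create a workload identity pool and an AWS provider in it.
gcloud iam workload-identity-pools create langsmith-pool \
  --project your-gcp-project --location global

gcloud iam workload-identity-pools providers create-aws langsmith-aws \
  --project your-gcp-project --location global \
  --workload-identity-pool langsmith-pool \
  --account-id <aws_account_id>

# Allow identities from the federated AWS role to impersonate the GCP
# service account. The member string depends on your attribute mapping;
# consult the Workload Identity Federation docs for the exact format.
gcloud iam service-accounts add-iam-policy-binding \
  vertexai-sa@your-gcp-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "principalSet://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/langsmith-pool/*"
```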

