
Private beta: The LLM Gateway is in private beta. Sign up for the waitlist to get access.
One-time setup to enable the LLM Gateway for your LangSmith organization. Organization admins should complete this before individual users can route calls through the gateway.

Prerequisites

You need organization:manage permission in LangSmith. Step 3 Option A also requires a plan that includes RBAC (custom roles).

1. Enable the LLM Gateway

During private beta, the gateway must be enabled for your organization by LangChain. Once you’ve been accepted into the beta, the llm_gateway_enabled feature flag will be turned on for your organization. At General Availability, this will be a self-service setting.

2. Add Provider Secrets

The gateway resolves provider API keys from your workspace’s Provider Secrets—this is how it proxies calls to upstream providers without individual users needing local copies of provider keys. Go to Settings → Integrations → Provider Secrets and add the keys for the providers you want to proxy through the gateway:
| Secret name | Provider |
| --- | --- |
| ANTHROPIC_API_KEY | Anthropic (Claude models) |
| OPENAI_API_KEY | OpenAI (GPT models) |
| GOOGLE_API_KEY | Google (Gemini models) |
Add only the providers your organization uses. The gateway will return an error if a user tries to call a provider whose key hasn’t been added.
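The relationship between configured secrets and callable providers can be sketched as a small lookup — a minimal illustration that mirrors the table above; the helper function and data shapes are assumptions, not a gateway API:

```python
# Map each provider secret name to the provider it unlocks.
# Illustrative only: mirrors the Provider Secrets table above, not a gateway API.
PROVIDER_SECRETS = {
    "ANTHROPIC_API_KEY": "Anthropic (Claude models)",
    "OPENAI_API_KEY": "OpenAI (GPT models)",
    "GOOGLE_API_KEY": "Google (Gemini models)",
}

def missing_secrets(configured: set[str]) -> list[str]:
    """Return the table's secret names that have not been added yet."""
    return sorted(name for name in PROVIDER_SECRETS if name not in configured)

# A workspace that has only added the OpenAI key cannot proxy Anthropic or
# Google traffic until those keys are added -- those calls would error.
print(missing_secrets({"OPENAI_API_KEY"}))
```

Calls to a provider in the missing list are the ones the gateway rejects.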

3. Configure gateway access for users

The built-in roles WORKSPACE_USER and WORKSPACE_VIEWER do not include the gateway:invoke permission and cannot be edited. You have two options for granting gateway access.

Option A: Create a custom role

Requires an RBAC-enabled plan.
  1. Go to Settings → Members/Roles.
  2. Create a new workspace role.
  3. Grant it at minimum gateway:invoke and workspaces:read.
  4. Assign users who need gateway access to this role.
Use this when you want to grant gateway access to specific users without giving them full workspace-admin privileges. This gives you the most control over who can use the gateway.

Option B: Use the workspace admin role

No plan requirement. The WORKSPACE_ADMIN role already includes both gateway:invoke and workspaces:read by default. Assign users who need gateway access to this role. Use this if you don’t need fine-grained access control, or if you don’t have RBAC enabled.
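Both options reduce to the same permission check — a hedged sketch in which the role-to-permission mapping is a hypothetical stand-in for LangSmith's data model; only the role names and permission strings come from this page:

```python
# Hypothetical permission check; the data model is invented for illustration.
# Role names and permission strings are taken from this page.
REQUIRED = {"gateway:invoke", "workspaces:read"}

ROLES = {
    "WORKSPACE_ADMIN": {"gateway:invoke", "workspaces:read", "workspaces:manage"},
    "WORKSPACE_USER": {"workspaces:read"},    # built-in; cannot be edited
    "WORKSPACE_VIEWER": {"workspaces:read"},  # built-in; cannot be edited
    "gateway-user": set(REQUIRED),            # Option A: custom role (RBAC plan)
}

def can_invoke_gateway(role: str) -> bool:
    """True if the role carries both permissions the gateway requires."""
    return REQUIRED <= ROLES.get(role, set())

print(can_invoke_gateway("WORKSPACE_ADMIN"))  # Option B path
print(can_invoke_gateway("WORKSPACE_USER"))   # blocked: lacks gateway:invoke
```

Whichever option you choose, a user's effective role must pass this check for gateway calls to succeed.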

4. Configure policies (optional)

Gateway policy management requires organization:manage permission. Go to Settings → Gateway → LLM Gateway to create governance policies. You can configure:
  • Spend limits: hard caps at the organization, workspace, API key, or user level. Refer to Spend policies.
  • PII and secrets redaction: detect and redact sensitive data before it reaches the model. Refer to PII and secrets redaction.
Policies are optional during initial setup; until you configure them, the gateway allows all invocations without restriction.
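The spend-limit behavior can be sketched as a cap lookup across the four levels named above — purely illustrative; the level names come from this page, but the evaluation logic and data shapes are assumptions, not the gateway's policy engine:

```python
# Illustrative spend-limit check across the levels named above
# (organization, workspace, API key, user). The logic is an assumption.
def over_any_cap(spend: dict[str, float], caps: dict[str, float]) -> list[str]:
    """Return the levels whose hard cap is met or exceeded.

    An empty caps dict (no policies configured) blocks nothing.
    """
    return [level for level, cap in caps.items() if spend.get(level, 0.0) >= cap]

caps = {"organization": 1000.0, "workspace": 250.0}
spend = {"organization": 120.0, "workspace": 250.0}
print(over_any_cap(spend, caps))  # the workspace-level cap blocks this call

# No policies configured -> nothing blocks the invocation.
print(over_any_cap(spend, {}))
```

A hard cap at any single level is enough to block a call, which is why limits can be layered from organization down to individual user.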

5. Distribute API keys to users

Create workspace-scoped Service Keys for users who need gateway access. Each key should be attached to a role that includes gateway:invoke and workspaces:read. Use workspace-scoped keys, not organization-scoped keys. See API key scoping for details. Share the key and the gateway endpoint with each user, or distribute them via MDM (mobile device management) for company-wide coding agent rollouts. For per-agent configuration instructions, refer to Set up coding agents.
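The handoff to each user amounts to two values, the gateway endpoint and a Service Key — sketched below with hypothetical environment-variable names and a placeholder URL; neither is a documented setting:

```python
import os

# Hypothetical handoff: each user needs exactly two values to reach the gateway.
# The variable names and URL are placeholders, not documented settings.
os.environ["GATEWAY_ENDPOINT"] = "https://example.invalid/gateway"  # shared by the admin
os.environ["GATEWAY_API_KEY"] = "example-service-key"  # workspace-scoped Service Key

def gateway_config() -> dict[str, str]:
    """Collect the two values a client or coding agent needs."""
    return {
        "endpoint": os.environ["GATEWAY_ENDPOINT"],
        "api_key": os.environ["GATEWAY_API_KEY"],
    }

print(sorted(gateway_config()))
```

An MDM rollout would push the same two values to every managed machine instead of setting them by hand.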

Verification

Ask a user to run the verification curl from the Quickstart. A 200 response confirms the gateway, the API key, provider secrets, and role permissions are all configured correctly. The call will appear as a trace in the gateway tracing project in the workspace.
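The Quickstart's curl is not reproduced here, but the general shape of such a verification call can be sketched — the endpoint path, header name, model, and payload below are all assumptions, so use the Quickstart's actual command for real verification:

```python
import json
import urllib.request

# Hypothetical sketch of a verification request; the path, header name,
# and payload are assumptions -- the Quickstart's curl is authoritative.
def build_verification_request(endpoint: str, api_key: str) -> urllib.request.Request:
    body = json.dumps({"model": "gpt-4o-mini",
                       "messages": [{"role": "user", "content": "ping"}]})
    return urllib.request.Request(
        endpoint,
        data=body.encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_verification_request(
    "https://example.invalid/v1/chat/completions", "example-key")
# Inspect the request without sending it; a real 200 response would confirm
# the gateway, key, provider secrets, and role permissions end to end.
print(req.get_method(), req.get_full_url())
```

A 401/403 at this step usually points at the key or role permissions, while a provider error points back at step 2.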

Next steps