Private beta: The LLM Gateway is in private beta. Sign up for the waitlist to get access.
Prerequisites
You need organization:manage permission in LangSmith. Step 3 Option A also requires a plan that includes RBAC (custom roles).
1. Enable the LLM Gateway
During private beta, the gateway must be enabled for your organization by LangChain. Once you've been accepted into the beta, the llm_gateway_enabled feature flag will be turned on for your organization.
At General Availability, this will be a self-service setting.
2. Add Provider Secrets
The gateway resolves provider API keys from your workspace's Provider Secrets; this is how it proxies calls to upstream providers without individual users needing local copies of provider keys. Go to Settings → Integrations → Provider Secrets and add the keys for the providers you want to proxy through the gateway:

| Secret name | Provider |
|---|---|
| ANTHROPIC_API_KEY | Anthropic (Claude models) |
| OPENAI_API_KEY | OpenAI (GPT models) |
| GOOGLE_API_KEY | Google (Gemini models) |
3. Configure gateway access for users
The built-in roles WORKSPACE_USER and WORKSPACE_VIEWER do not include the gateway:invoke permission and cannot be edited. You have two options for granting gateway access:
Option A: Create a custom workspace role (recommended)
Requires an RBAC-enabled plan.
- Go to Settings → Members/Roles.
- Create a new workspace role.
- Grant it at minimum gateway:invoke and workspaces:read.
- Assign users who need gateway access to this role.
Option B: Use the workspace admin role
No plan requirement. The WORKSPACE_ADMIN role already includes both gateway:invoke and workspaces:read by default. Assign users who need gateway access to this role.
Use this if you don’t need fine-grained access control, or if you don’t have RBAC enabled.
4. Configure policies (optional)
Gateway policy management requires organization:manage permission.
Go to Settings → Gateway → LLM Gateway to create governance policies. You can configure:
- Spend limits: hard caps at the organization, workspace, API key, or user level. Refer to Spend policies.
- PII and secrets redaction: detect and redact sensitive data before it reaches the model. Refer to PII and secrets redaction.
5. Distribute API keys to users
Create workspace-scoped Service Keys for users who need gateway access. Each key should be attached to a role that includes gateway:invoke and workspaces:read.
Use workspace-scoped keys, not organization-scoped keys. See API key scoping for details.
Share the key and the gateway endpoint with each user, or distribute them via MDM (mobile device management) for company-wide coding agent rollouts. For per-agent configuration instructions, refer to Set up coding agents.
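For a per-user rollout, distribution often amounts to setting two values in each user's environment. The variable names, endpoint URL, and key format below are illustrative placeholders only; use the actual gateway endpoint and Service Key values from your workspace:

```shell
# Illustrative only: substitute your organization's real gateway endpoint
# and the workspace-scoped Service Key created in step 5.
export LLM_GATEWAY_BASE_URL="https://<your-gateway-endpoint>"
export LLM_GATEWAY_API_KEY="<workspace-scoped-service-key>"
```

For MDM-based rollouts, the same two values can be pushed as managed environment configuration; refer to Set up coding agents for where each agent reads them.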
Verification
Ask a user to run the verification curl from the Quickstart. A 200 response confirms the gateway, the API key, provider secrets, and role permissions are all configured correctly. The call will appear as a trace in the gateway tracing project in the workspace.
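The exact endpoint path and request body are given in the Quickstart; as a rough sketch of what the check looks like, assuming an OpenAI-compatible chat completions route and placeholder endpoint, key, and model values:

```shell
# Sketch only: the real path, payload, and model name come from the Quickstart.
# -w "%{http_code}" prints just the HTTP status; 200 means the key, its role
# permissions, and the provider secret all resolved end to end.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://<your-gateway-endpoint>/v1/chat/completions" \
  -H "Authorization: Bearer <workspace-scoped-service-key>" \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "ping"}]}'
```

A 401 or 403 here usually points at a key attached to a role without gateway:invoke; a 200 with a provider error in the body points at a missing or invalid Provider Secret.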
Next steps
- Quickstart: share with your users as the getting-started guide.
- Set up coding agents: configure Claude Code, Codex, and other agents org-wide.
- Traces, Engine, and access control: deep dive on roles, scoped keys, trace routing, and who can see what.

