Private beta: The LLM Gateway is in private beta. Sign up for the waitlist to get access.
PII detection
The gateway detects and redacts the following categories of personally identifiable information:

| Category | Examples |
|---|---|
| Names | Person names in natural language |
| Nationality, religion, or political affiliation | Nationality, religious affiliation, political affiliation |
| Locations | Addresses, cities, countries |
| Social Security Numbers | US SSN patterns (for example, 123-45-6789) |
| Phone numbers | US phone number patterns |
Secrets detection
The gateway detects and redacts API keys, tokens, and credentials across a wide range of providers and formats:

| Category | Patterns detected |
|---|---|
| LangSmith | Personal tokens, service keys |
| AWS | Access tokens |
| GitHub | Personal access tokens, fine-grained PATs, OAuth tokens, app tokens |
| GitLab | Personal access tokens |
| AI providers | OpenAI API keys, Anthropic API keys |
| Cloud platforms | GCP API keys, Azure AD client secrets |
| Collaboration tools | Slack bot/user/app tokens, Datadog access tokens |
| Package registries | PyPI upload tokens, npm access tokens |
| Cryptographic | Private keys |
| Stripe | Access tokens |
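As an illustration of this kind of pattern-based detection, here is a minimal Python sketch. The regexes below are rough approximations for illustration only, not the gateway's actual detection rules, and `find_secrets` is a hypothetical helper, not a gateway API:

```python
import re

# Illustrative patterns only -- hypothetical approximations of a few of the
# categories above, NOT the gateway's actual detection rules.
SENSITIVE_PATTERNS = {
    "OPENAI_API_KEY": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "GITHUB_PAT": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (category, matched value) pairs found in text."""
    hits = []
    for category, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((category, match.group()))
    return hits

print(find_secrets("my ssn is 123-45-6789"))  # -> [('US_SSN', '123-45-6789')]
```

Real detectors combine many more patterns with context and validation checks; fixed regexes like these are only the simplest building block.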
Enable redaction policies
1. Go to Settings → Gateway → LLM Gateway.
2. Click Create policy.
3. Select PII redaction or Secrets redaction as the policy type.
4. Configure which categories to detect (or enable all).
5. Click Save.
How redacted content appears
When PII or a secret is detected, the content is replaced with a placeholder in both the request sent to the provider and the LangSmith trace. Placeholders have the form [SAFE_TO_USE:<CATEGORY>_<suffix>], where:
- SAFE_TO_USE: fixed prefix marking the value as a redacted placeholder.
- <CATEGORY>: the detected type, for example PERSON, LOCATION, US_SSN, US_PHONE_NUMBER, OPENAI_API_KEY, GITHUB_PAT, or LANGSMITH_PERSONAL_TOKEN.
- <suffix>: an 8-character random tag.
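Because the placeholder format is fixed, it can be matched mechanically, which is handy if you post-process provider requests or traces. A minimal Python sketch, assuming the format above (the regex and the `extract_placeholders` helper are illustrative, not a gateway API):

```python
import re

# Matches the documented placeholder format [SAFE_TO_USE:<CATEGORY>_<suffix>]
# with an 8-character random suffix. Sketch only, not a gateway API.
PLACEHOLDER_RE = re.compile(r"\[SAFE_TO_USE:([A-Z0-9_]+)_([A-Za-z0-9]{8})\]")

def extract_placeholders(text: str) -> list[str]:
    """Return the detected category of every redaction placeholder in text."""
    return [m.group(1) for m in PLACEHOLDER_RE.finditer(text)]

msg = ("My key is [SAFE_TO_USE:OPENAI_API_KEY_a1b2c3d4] and I live at "
       "[SAFE_TO_USE:LOCATION_e5f6g7h8].")
print(extract_placeholders(msg))  # -> ['OPENAI_API_KEY', 'LOCATION']
```

For example, counting placeholder categories across traces gives a quick view of which kinds of sensitive data your application sends most often.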
What redaction covers
What it covers:
- Outbound request content: the message sent to the LLM provider is scanned and redacted before it leaves the gateway.
- LangSmith traces: the redacted version is what appears in traces.

What it does not cover:
- Responses from the LLM provider: if the model generates sensitive data in its response, that content is not redacted. Streaming response redaction is in progress.
- Data already in your traces: redaction only applies to requests flowing through the gateway. Traces written directly to the LangSmith API (bypassing the gateway) are not scanned.
- Platform-level ingestion: if you need to prevent PII from ever entering LangSmith regardless of how it arrives (for example, for data residency compliance), gateway redaction alone is not sufficient; that requires ingestion-level redaction, which is a separate capability.
- Prompt scanning: system prompts, developer prompts, and tool-call arguments are not scanned.

Scanner failures are fail-closed: if a PII or secrets scanner is unreachable, slow, or returns an error, that stage blocks the request from proceeding.
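The fail-closed behavior can be sketched as a wrapper around a scanner call. This is a hypothetical illustration: `scan`, `scan_or_block`, and `RedactionError` are made-up names for this sketch, not a real gateway API:

```python
# Hypothetical sketch of fail-closed scanning: any scanner failure blocks
# the request rather than letting unscanned content leave the gateway.
from concurrent.futures import ThreadPoolExecutor


class RedactionError(Exception):
    """Raised when a request is blocked because scanning failed."""


def scan_or_block(scan, payload: str, timeout_s: float = 2.0) -> str:
    """Return the redacted payload, or block the request on any failure."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        # An unreachable, slow, or erroring scanner all land in the same
        # place: a timeout or exception blocks the request instead of
        # forwarding the unscanned payload to the provider.
        return pool.submit(scan, payload).result(timeout=timeout_s)
    except Exception as exc:
        raise RedactionError("scanner unavailable; request blocked") from exc
    finally:
        pool.shutdown(wait=False)
```

The design choice here mirrors the doc's guarantee: the safe default on failure is "no request", not "unscanned request".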
Next steps
- Spend policies: add cost controls alongside data protection.
- Traces, Engine, and access control: see how redaction events appear in traces and surface in Engine.

