These middleware classes ship in the langchain-azure-ai package and are exported from `langchain_azure_ai.agents.middleware`.
Azure AI Content Safety middleware is currently marked experimental upstream. Expect the API surface to evolve as Azure AI Content Safety and LangChain middleware support continue to mature.
Overview
| Middleware | Description |
|---|---|
| Text moderation | Screen input and output text for harmful content and blocklist matches |
| Image moderation | Screen image inputs and outputs using Azure AI Content Safety image analysis |
| Prompt shield | Detect direct and indirect prompt injection attempts |
| Protected material | Detect copyrighted or otherwise protected text or code |
| Groundedness | Evaluate model outputs against grounding sources and flag hallucinations |
Features
- Text moderation for harmful content and custom blocklists.
- Image moderation for data URLs and public HTTP(S) image inputs.
- Prompt injection detection with Prompt Shield.
- Protected material detection for text and code.
- Groundedness evaluation for generated answers against retrieved context.
- Custom `context_extractor` hooks to adapt screening and evaluation to your agent state.
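To make the custom-hook idea concrete, here is a sketch of what a context extractor might look like. The exact hook signature and agent-state shape are not shown on this page, so the `state` layout and return type below are assumptions for illustration only:

```python
# Hypothetical sketch: the real context_extractor signature in
# langchain_azure_ai may differ. `state` is assumed to be a dict with a
# "messages" list of objects carrying `type` and `content` attributes.
from dataclasses import dataclass


@dataclass
class Message:
    type: str      # e.g. "human" or "ai"
    content: str


def extract_latest_human_text(state: dict) -> str:
    """Return the content of the most recent human message, or ""."""
    for msg in reversed(state.get("messages", [])):
        if msg.type == "human":
            return msg.content
    return ""


state = {
    "messages": [
        Message("human", "hi"),
        Message("ai", "hello"),
        Message("human", "summarize this doc"),
    ]
}
print(extract_latest_human_text(state))  # → summarize this doc
```

The same pattern generalizes to the other middleware: the extractor narrows what gets sent to the screening service instead of screening every message verbatim.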
Setup
To use the Azure AI Content Safety middleware, install the integration package, configure either an Azure AI Foundry project endpoint or an Azure Content Safety endpoint, and provide a credential.

Installation

Install the package:
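Assuming pip as the package manager, installation is a single command:

```shell
pip install -U langchain-azure-ai
```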
Credentials
For authentication, pass either `DefaultAzureCredential()` or an API-key string through the `credential` argument. Using an Azure AI Foundry project requires Microsoft Entra ID authentication.
Initialize credential
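A minimal credential setup sketch, using `azure-identity` for Entra ID; the API-key alternative is shown as a plain placeholder string:

```python
from azure.identity import DefaultAzureCredential

# Entra ID credential (required when targeting a Foundry project endpoint).
credential = DefaultAzureCredential()

# Alternatively, for an Azure Content Safety resource endpoint, an API-key
# string can be passed instead (placeholder value shown):
# credential = "<your-content-safety-api-key>"
```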
Instantiation
The middleware supports two endpoint styles:

- An Azure Content Safety resource endpoint via `AZURE_CONTENT_SAFETY_ENDPOINT`
- An Azure AI Foundry project endpoint via `AZURE_AI_PROJECT_ENDPOINT`

Prefer `project_endpoint` because it gives better defaults for Azure AI Foundry-based workflows. In most setups, you can set the environment variable once and omit `endpoint` or `project_endpoint` from each middleware instantiation.
Configure endpoint
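One way to configure the endpoint is to set the environment variable before the middleware is created; the URLs below are placeholders:

```python
import os

# Use ONE of the two styles; placeholder URLs shown.
os.environ["AZURE_CONTENT_SAFETY_ENDPOINT"] = (
    "https://<your-resource>.cognitiveservices.azure.com/"
)
# or, for Foundry-based workflows:
# os.environ["AZURE_AI_PROJECT_ENDPOINT"] = "https://<your-foundry-project-endpoint>"
```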
All of the middleware classes below are exported from `langchain_azure_ai.agents.middleware`.
Initialize middleware
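A minimal instantiation sketch, assuming the endpoint environment variable is already set so only the credential needs to be passed:

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

# The endpoint is resolved from AZURE_CONTENT_SAFETY_ENDPOINT or
# AZURE_AI_PROJECT_ENDPOINT when not passed explicitly.
moderation = AzureContentModerationMiddleware(
    credential=DefaultAzureCredential(),
)
```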
Use with an agent
Pass `middleware` to `create_agent` in order. You can combine Azure AI middleware with built-in middleware.
Agent with middleware
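A sketch of wiring the middleware into `create_agent`; the model identifier is a placeholder, and the middleware constructors are shown with only the `credential` argument:

```python
from azure.identity import DefaultAzureCredential
from langchain.agents import create_agent
from langchain_azure_ai.agents.middleware import (
    AzureContentModerationMiddleware,
    AzurePromptShieldMiddleware,
)

credential = DefaultAzureCredential()

agent = create_agent(
    model="azure_openai:gpt-4o",  # placeholder model identifier
    tools=[],
    middleware=[
        # Runs in order: injection screening first, then moderation.
        AzurePromptShieldMiddleware(credential=credential),
        AzureContentModerationMiddleware(credential=credential),
    ],
)
```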
Azure AI Content Safety
Text moderation
Use `AzureContentModerationMiddleware` to screen the last `HumanMessage` before the agent runs and the last `AIMessage` after the agent runs. This middleware uses Azure AI Content Safety harm detection and can also check custom blocklists configured in your resource.
Text moderation is useful for the following:
- Blocking harmful user input before a model call
- Screening model output before it reaches end users
- Enforcing custom blocklists in regulated or enterprise deployments
- Composing multiple moderation passes with different category and direction settings
Configuration options
- Harm categories to analyze. Valid values are `'Hate'`, `'SelfHarm'`, `'Sexual'`, and `'Violence'`; defaults to all four categories.
- Minimum severity score, from `0` to `6`, that triggers the configured behavior.
- Behavior when content is flagged: one of `'error'`, `'continue'`, or `'replace'`.
- Whether to screen the last `HumanMessage` before the agent runs.
- Whether to screen the last `AIMessage` after the agent runs.
- Names of custom blocklists configured in your Azure Content Safety resource.
- Optional `context_extractor` callable that extracts the text to screen from agent state and runtime.
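As an illustration of how these options might combine into an output-only check, the keyword names below (other than `credential`) are hypothetical and should be verified against the package's API reference:

```python
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

# NOTE: keyword names below are hypothetical illustrations of the
# documented options; consult the API reference for the exact names.
strict_output_check = AzureContentModerationMiddleware(
    credential="<api-key>",                 # placeholder API key
    categories=["Hate", "Violence"],        # subset of the four harm categories
    severity_threshold=2,                   # 0-6; lower means stricter
    behavior="replace",                     # 'error' | 'continue' | 'replace'
    check_input=False,                      # skip the last HumanMessage
    check_output=True,                      # screen the last AIMessage
    blocklist_names=["company-blocklist"],  # placeholder blocklist name
)
```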
Image moderation
Use `AzureContentModerationForImagesMiddleware` when your agent handles visual content. It extracts images from the latest input or output message and screens them with the Azure AI Content Safety image analysis API.
This middleware supports:

- Base64 data URLs such as `data:image/png;base64,...`
- Public HTTP(S) image URLs
Configuration options
- Image harm categories to analyze. Defaults to all four supported categories.
- Minimum severity score, from `0` to `6`, that triggers the configured behavior.
- Behavior when an image is flagged: one of `'error'` or `'continue'`.
- Whether to screen images in the latest `HumanMessage`.
- Whether to screen images in the latest `AIMessage`.
- Optional `context_extractor` callable that extracts images from agent state and runtime.
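To make the two supported image forms concrete, here is their general shape expressed as plain dicts in the common `image_url` content-block style (the base64 payload and URL below are placeholders; the exact message schema comes from LangChain core):

```python
# Two image reference styles the middleware can screen:
data_url_block = {
    "type": "image_url",
    "image_url": {"url": "data:image/png;base64,<base64-bytes>"},  # placeholder payload
}
http_url_block = {
    "type": "image_url",
    "image_url": {"url": "https://example.com/diagram.png"},  # placeholder URL
}


def is_data_url(block: dict) -> bool:
    """True when the block carries an inline base64 data URL."""
    return block["image_url"]["url"].startswith("data:")


print(is_data_url(data_url_block), is_data_url(http_url_block))  # → True False
```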
Prompt shield
Use `AzurePromptShieldMiddleware` to detect prompt injection in user prompts and optional supporting documents. By default it screens input only, because prompt injection is usually an input-side attack, but you can also enable output screening.
Configuration options
- Behavior when an injection attempt is detected: one of `'error'`, `'continue'`, or `'replace'`.
- Whether to screen the latest `HumanMessage` before the agent runs.
- Whether to screen the latest `AIMessage` after the agent runs.
- Optional `context_extractor` callable that extracts the user prompt and grounding documents from agent state and runtime.
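An input-only Prompt Shield sketch that fails fast on detection; keyword names other than `credential` are hypothetical and should be checked against the API reference:

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzurePromptShieldMiddleware

# NOTE: keyword names are hypothetical; check the API reference.
shield = AzurePromptShieldMiddleware(
    credential=DefaultAzureCredential(),
    behavior="error",     # stop the run on a detected injection
    check_input=True,     # default focus: input-side attacks
    check_output=False,   # output screening can be enabled if needed
)
```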
Protected material
Use `AzureProtectedMaterialMiddleware` to detect protected content such as copyrighted text or code. This middleware can screen both the latest user input and the latest model output.
Configuration options
- The content type to screen: `'text'` or `'code'`.
- Behavior when protected material is detected: one of `'error'`, `'continue'`, or `'replace'`.
- Whether to screen the latest `HumanMessage`.
- Whether to screen the latest `AIMessage`.
- Optional `context_extractor` callable that extracts text from agent state and runtime.
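A sketch of screening generated code for protected material; keyword names other than `credential` are hypothetical and should be verified against the API reference:

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureProtectedMaterialMiddleware

# NOTE: keyword names are hypothetical; check the API reference.
code_check = AzureProtectedMaterialMiddleware(
    credential=DefaultAzureCredential(),
    content_type="code",  # 'text' or 'code'
    behavior="error",     # 'error' | 'continue' | 'replace'
    check_output=True,    # screen the latest AIMessage
)
```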
Groundedness
Use `AzureGroundednessMiddleware` to evaluate whether a model response is grounded in the context available to the agent. Unlike the other middleware classes on this page, groundedness runs after model generation and inspects the generated answer against supporting sources.
By default, groundedness collects sources from the current conversation, including system content, tool outputs, and relevant annotations attached to model responses.
Configuration options
- The analysis domain. Supported values are `'Generic'` and `'Medical'`.
- The task type for the analysis. Supported values are `'Summarization'` and `'QnA'`.
- Behavior when an ungrounded claim is found: one of `'error'` or `'continue'`.
- Optional `context_extractor` callable that extracts the answer, grounding sources, and optional question from agent state and runtime.
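A sketch of a groundedness check for a medical question-answering agent that flags ungrounded answers without stopping the run; keyword names other than `credential` are hypothetical and should be verified against the API reference:

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureGroundednessMiddleware

# NOTE: keyword names are hypothetical; check the API reference.
groundedness = AzureGroundednessMiddleware(
    credential=DefaultAzureCredential(),
    domain="Medical",     # 'Generic' or 'Medical'
    task_type="QnA",      # 'Summarization' or 'QnA'
    behavior="continue",  # flag rather than abort the run
)
```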
API reference
For the full public API, see the middleware exports in `langchain_azure_ai.agents.middleware` and the underlying Content Safety middleware package in `langchain_azure_ai.agents.middleware.content_safety`.