# AzureAIOpenAIApiChatModel
The AzureAIOpenAIApiChatModel class uses the OpenAI-compatible API available in Azure AI Foundry. AI Foundry hosts several families of chat models, including Azure OpenAI, Cohere, Llama, Phi-3/4, and DeepSeek-R1, among others. You can find information about the latest models, their costs, context windows, and supported input types in the Azure docs.
## Overview
### Integration details
| Class | Package | Serializable | JS support | Downloads | Version |
|---|---|---|---|---|---|
| AzureAIOpenAIApiChatModel | langchain-azure-ai | ✅ | ✅ | | |
### Model features
| Tool calling | Structured output | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
## Setup
To access AzureAIOpenAIApiChatModel models, you'll need to create an Azure account, get an API key, and install the langchain-azure-ai integration package.
### Credentials
Head to the Azure docs to see how to create your deployment and generate an API key. Once your model is deployed, click the **Get endpoint** button in AI Foundry to view your endpoint and API key. Once you have these, set the following environment variables:
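For example, in Python. The variable names below are assumptions based on common conventions for Azure AI inference integrations — check the langchain-azure-ai reference for the exact names this integration reads:

```python
import os

# Hypothetical variable names -- verify against the langchain-azure-ai docs.
# Replace the placeholders with the endpoint and key shown in AI Foundry.
if "AZURE_INFERENCE_ENDPOINT" not in os.environ:
    os.environ["AZURE_INFERENCE_ENDPOINT"] = "https://<your-resource>.services.ai.azure.com/models"
if "AZURE_INFERENCE_CREDENTIAL" not in os.environ:
    os.environ["AZURE_INFERENCE_CREDENTIAL"] = "<your-api-key>"
```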
### Installation

The LangChain AzureAIOpenAIApiChatModel integration lives in the langchain-azure-ai package:
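A typical install with pip:

```shell
pip install -U langchain-azure-ai
```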
## Instantiation
Now we can instantiate our model object and generate chat completions:
## Invocation

