This page will help you get started with NVIDIA models. For detailed documentation of all NVIDIA features and configurations, head to the API reference.
Overview
The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on
NVIDIA NIM inference microservices. These models are optimized by NVIDIA to deliver the best performance on NVIDIA
accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt container that deploys anywhere using a single
command on NVIDIA accelerated infrastructure.
NVIDIA hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing,
NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud,
giving enterprises ownership and full control of their IP and AI application.
NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog.
At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
This example goes over how to use LangChain to interact with NVIDIA-supported models via the `NVIDIA` class.
For more information on accessing the LLM models through this API, check out the NVIDIA documentation.
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| `NVIDIA` | `langchain-nvidia-ai-endpoints` | ✅ | beta | ❌ |
Model features
| Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|
| ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |
Setup
To get started:

- Create a free account with NVIDIA, which hosts NVIDIA AI Foundation models.
- Click on your model of choice.
- Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.
- Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
Credentials
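A minimal sketch of setting the credential, assuming the key saved above is supplied through the `NVIDIA_API_KEY` environment variable; the prompt text is illustrative:

```python
import getpass
import os

# Only ask for the key if it is not already present in the environment
if not os.environ.get("NVIDIA_API_KEY"):
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter your NVIDIA API key: ")
```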
Installation
The LangChain NVIDIA AI Endpoints integration lives in the `langchain-nvidia-ai-endpoints` package:
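For example, in a notebook cell (the `%pip` magic keeps the install in the active kernel's environment):

```python
%pip install -qU langchain-nvidia-ai-endpoints
```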
Instantiation
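A minimal sketch of creating the LLM; the model name below is an assumption for illustration, so substitute any model listed for your API credentials:

```python
from langchain_nvidia_ai_endpoints import NVIDIA

# Model name is illustrative; pick any completion model available to your credentials.
llm = NVIDIA(model="mistralai/mixtral-8x22b-instruct-v0.1")
```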
See LLM for full functionality.

Invocation
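A brief example of a plain completion call, assuming the `llm` instance from the previous step; the prompt is illustrative:

```python
prompt = "# Function that does quicksort written in Rust without comments:"
print(llm.invoke(prompt))
```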
Stream, batch, and async
These models natively support streaming, and as is the case with all LangChain LLMs they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.
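A short sketch of the three patterns, again assuming the `llm` instance from above; the prompts are illustrative:

```python
import asyncio

# Token-level streaming
for chunk in llm.stream("# Comment explaining what a hash map is:"):
    print(chunk, end="", flush=True)

# Batch over several prompts, handled concurrently
print(llm.batch(["# Add two numbers in Python:", "# Reverse a string in Python:"]))

# Async variants mirror the sync API: ainvoke, astream, abatch
print(asyncio.run(llm.ainvoke("# One-line docstring for a sort function:")))
```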
Supported models

Querying `available_models` will still give you all of the other models offered by your API credentials.
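A sketch of listing those models; the `available_models` accessor is the name used in the text above, and the `id` attribute on each entry is an assumption, so check the API reference if your version differs:

```python
# List every model exposed to your credentials.
for model in llm.available_models:
    print(model.id)
```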
API reference
For detailed documentation of all NVIDIA features and configurations, head to the API reference: https://python.langchain.com/api_reference/nvidia_ai_endpoints/llms/langchain_nvidia_ai_endpoints.llms.NVIDIA.html