langchain-nvidia-ai-endpoints
package contains LangChain integrations for building applications with models on the NVIDIA NIM inference microservice. NIM supports models across domains such as chat, embedding, and re-ranking, from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA accelerated infrastructure and are deployed as NIMs: easy-to-use, prebuilt containers that deploy anywhere with a single command on NVIDIA accelerated infrastructure.
NVIDIA-hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing, NIMs can be exported from NVIDIA's API catalog under the NVIDIA AI Enterprise license and run on-premises or in the cloud, giving enterprises ownership and full control of their IP and AI applications.
NIMs are packaged as per-model container images and distributed through the NVIDIA NGC Catalog.
At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
This example goes over how to use LangChain to interact with the supported NVIDIA Retrieval QA Embedding Model for retrieval-augmented generation via the NVIDIAEmbeddings
class.
For more information on accessing the chat models through this API, check out the ChatNVIDIA documentation.
To get an API key for the hosted endpoints:

1. Select the Retrieval tab, then select your model of choice.
2. Under Input, select the Python tab, and click Get API Key. Then click Generate Key.
3. Save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
With a key in place, you can initialize the NVIDIAEmbeddings class. You can specify the model, e.g. NV-Embed-QA, or use the default by not passing any arguments.
This class supports the standard Embeddings methods, including:

- embed_query: generate a query embedding for a single query sample.
- embed_documents: generate passage embeddings for a list of documents which you would like to search over.
- aembed_query / aembed_documents: asynchronous versions of the above.
These methods also accept a truncate parameter that truncates the input on the server side if it is too large. The truncate parameter has three options: