We can also access embedding models via Hugging Face Inference Providers, which lets us use open source models on scalable serverless infrastructure.

First, we need to get a read-only API key from Hugging Face.
```python
from getpass import getpass

huggingfacehub_api_token = getpass()
```
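If the token is already exported in your shell, a small variation reads it from the environment first and only prompts as a fallback. A minimal sketch, assuming the conventional `HUGGINGFACEHUB_API_TOKEN` variable name:

```python
import os
from getpass import getpass

# Prefer a token exported as HUGGINGFACEHUB_API_TOKEN; otherwise prompt for it.
huggingfacehub_api_token = os.environ.get("HUGGINGFACEHUB_API_TOKEN") or getpass()
```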
Now we can use the HuggingFaceInferenceAPIEmbeddings class to run open source embedding models via Inference Providers.
```python
from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings

embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key=huggingfacehub_api_token,
    model_name="sentence-transformers/all-MiniLM-L6-v2",
)

# Embed the sample `text` defined earlier in this guide.
query_result = embeddings.embed_query(text)
query_result[:3]
```
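The same embeddings object can also vectorize a batch of documents in one call via `embed_documents`. A minimal sketch, using made-up sample strings:

```python
# Embed a batch of documents; the result is one vector per input string.
docs = [
    "Hugging Face hosts thousands of open source models.",
    "Sentence embeddings map text to dense vectors.",
]
doc_vectors = embeddings.embed_documents(docs)

print(len(doc_vectors), len(doc_vectors[0]))  # 2 vectors, 384 dimensions for all-MiniLM-L6-v2
```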