Pinecone’s inference API can be accessed via `PineconeEmbeddings`, which provides text embeddings through the Pinecone service. We start by installing the prerequisite libraries and setting our Pinecone API key:
```python
import os
from getpass import getpass

os.environ["PINECONE_API_KEY"] = os.getenv("PINECONE_API_KEY") or getpass(
    "Enter your Pinecone API key: "
)
```
Check the documentation for the available models. Now we initialize our embedding model like so:
```python
from langchain_pinecone import PineconeEmbeddings

embeddings = PineconeEmbeddings(model="multilingual-e5-large")
```
From here we can create embeddings either synchronously or asynchronously; let’s start with sync! We embed a single text as a query embedding (i.e., what we search with in RAG) using `embed_query`:
```python
docs = [
    "Apple is a popular fruit known for its sweetness and crisp texture.",
    "The tech company Apple is known for its innovative products like the iPhone.",
    "Many people enjoy eating apples as a healthy snack.",
    "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.",
    "An apple a day keeps the doctor away, as the saying goes.",
]
```
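Once these documents are embedded (with `PineconeEmbeddings` that would be `embeddings.embed_documents(docs)` alongside `embeddings.embed_query(...)`), the usual next step is to rank them against a query by cosine similarity. A minimal pure-Python sketch of that comparison, using toy low-dimensional vectors in place of real 1024-dimensional e5 embeddings:

```python
import math

def cosine_similarity(a, b):
    # cosine similarity: dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# toy stand-ins for document embeddings: index 0 is "fruit-like", index 1 is "tech-like"
doc_vecs = [[0.1, 0.8, 0.3], [0.9, 0.1, 0.1]]
query_vec = [0.8, 0.2, 0.1]  # a "tech company"-style query vector

# pick the document whose embedding is most similar to the query
best = max(range(len(doc_vecs)), key=lambda i: cosine_similarity(query_vec, doc_vecs[i]))
print(best)  # → 1
```

With real embeddings the same ranking logic lets the "tech company Apple" documents score higher for a tech-oriented query than the fruit documents.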