PineconeStore
For detailed documentation of all PineconeStore features and configurations, head to the API reference.
Overview
Integration details
| Class | Package | PY support | Version |
| --- | --- | --- | --- |
| PineconeStore | @langchain/pinecone | ✅ | |
Setup
To use Pinecone vector stores, you’ll need to create a Pinecone account, initialize an index, and install the @langchain/pinecone
integration package. You’ll also want to install the official Pinecone SDK to initialize a client to pass into the PineconeStore
instance.
This guide will also use OpenAI embeddings, which require you to install the @langchain/openai
integration package. You can also use other supported embeddings models if you wish.
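The packages mentioned above can be installed with your preferred package manager; for example, with npm:

```shell
# Integration packages plus the official Pinecone SDK
npm install @langchain/pinecone @langchain/openai @langchain/core @pinecone-database/pinecone
```
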
Credentials
Sign up for a Pinecone account and create an index. Make sure the dimensions match those of the embeddings you want to use (the default is 1536 for OpenAI’s text-embedding-3-small). Once you’ve done this, set the PINECONE_INDEX, PINECONE_API_KEY, and (optionally) PINECONE_ENVIRONMENT environment variables:
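For example, in a shell (the values shown here are placeholders):

```shell
export PINECONE_API_KEY="your-api-key"
export PINECONE_INDEX="your-index-name"
# Optional, only needed for some configurations:
export PINECONE_ENVIRONMENT="your-environment"
```
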
Instantiation
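A minimal instantiation sketch, assuming an existing index whose name is stored in the PINECONE_INDEX environment variable and OpenAI embeddings from @langchain/openai:

```typescript
import { PineconeStore } from "@langchain/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Pinecone as PineconeClient } from "@pinecone-database/pinecone";

// The embedding model must match the dimensions of your Pinecone index
// (1536 for text-embedding-3-small).
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

// The Pinecone client reads PINECONE_API_KEY from the environment.
const pinecone = new PineconeClient();
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex,
  maxConcurrency: 5, // optional: cap concurrent batch requests to Pinecone
});
```
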
Manage vector store
Add items to vector store
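A sketch of adding documents, assuming the `vectorStore` created in the instantiation step above (the document contents and ids are illustrative):

```typescript
import { Document } from "@langchain/core/documents";

// `vectorStore` is assumed to be the PineconeStore from the instantiation step.
const documents = [
  new Document({
    pageContent: "Pinecone is a managed vector database.",
    metadata: { source: "example" },
  }),
  new Document({
    pageContent: "LangChain helps you compose LLM applications.",
    metadata: { source: "example" },
  }),
];

// Supplying your own ids makes later updates and deletes straightforward.
await vectorStore.addDocuments(documents, { ids: ["1", "2"] });
```
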
Delete items from vector store
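Items can be deleted by the ids they were added with; a sketch, again assuming the `vectorStore` from the instantiation step:

```typescript
// Remove the document that was added with id "1".
await vectorStore.delete({ ids: ["1"] });
```
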
Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while running your chain or agent.

Query directly
Performing a simple similarity search can be done as follows:

Query by turning into retriever
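The similarity search mentioned above might look like this, assuming the `vectorStore` from the instantiation step (the query string and metadata filter are illustrative; the filter argument is optional):

```typescript
// Return the 2 most similar documents, optionally filtered on metadata.
const results = await vectorStore.similaritySearch(
  "managed vector database",
  2,
  { source: "example" }
);

for (const doc of results) {
  console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`);
}
```
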
You can also transform the vector store into a retriever for easier usage in your chains.

Usage for retrieval-augmented generation
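Converting the store into a retriever, as described above, can be sketched as follows (again assuming the `vectorStore` from the instantiation step; the query is illustrative):

```typescript
// Wrap the store as a retriever that returns the top 2 matches.
const retriever = vectorStore.asRetriever({ k: 2 });

// Retrievers are Runnables, so they can be invoked directly
// or composed into a chain.
const docs = await retriever.invoke("managed vector database");
```
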
For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:

API reference
For detailed documentation of all PineconeStore features and configurations, head to the API reference.