# MemoryVectorStore

For detailed documentation of all `MemoryVectorStore` features and configurations, head to the API reference.
## Overview
### Integration details
| Class | Package | PY support | Version |
| --- | --- | --- | --- |
| `MemoryVectorStore` | `langchain` | ❌ | |
## Setup
To use in-memory vector stores, you'll need to install the `langchain` package:
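The package is available from npm (the equivalent `yarn add` or `pnpm add` commands also work):

```shell
npm install langchain
```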
This guide will also use OpenAI embeddings, which require you to install the `@langchain/openai` integration package. You can also use other supported embeddings models if you wish.
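The integration package is installed the same way:

```shell
npm install @langchain/openai
```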
### Credentials
There are no required credentials to use in-memory vector stores. If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:

## Instantiation
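A minimal instantiation sketch, assuming OpenAI embeddings with the `OPENAI_API_KEY` environment variable set (the embedding model name here is illustrative):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Reads the OPENAI_API_KEY environment variable by default.
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small", // illustrative model choice
});

// MemoryVectorStore keeps all embeddings in process memory.
const vectorStore = new MemoryVectorStore(embeddings);
```

Any other embeddings implementation can be passed in place of `OpenAIEmbeddings`.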
## Manage vector store
### Add items to vector store
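A sketch of adding documents, assuming a `MemoryVectorStore` instance named `vectorStore` as created during instantiation (the document contents and metadata are illustrative):

```typescript
import { Document } from "@langchain/core/documents";

const document1 = new Document({
  pageContent: "LangChain supports many different vector stores.",
  metadata: { source: "example" },
});

const document2 = new Document({
  pageContent: "The in-memory vector store keeps embeddings in process memory.",
  metadata: { source: "example" },
});

// Embeds each document's pageContent and stores the vectors in memory.
await vectorStore.addDocuments([document1, document2]);
```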
## Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while your chain or agent is running.

### Query directly
Performing a simple similarity search can be done as follows. The filter is optional: it is a predicate function that takes a document as input and returns `true` or `false` depending on whether the document should be returned.
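A sketch of a direct similarity search with a metadata filter, assuming the `vectorStore` instance and documents from the sections above (the query string and filter condition are illustrative):

```typescript
import { Document } from "@langchain/core/documents";

// Optional predicate: only documents for which this returns true are considered.
const filter = (doc: Document) => doc.metadata.source === "example";

// Returns up to 2 documents most similar to the query.
const similaritySearchResults = await vectorStore.similaritySearch(
  "What does the in-memory vector store do?",
  2,
  filter
);

for (const doc of similaritySearchResults) {
  console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`);
}
```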
If you want to execute a similarity search and receive the corresponding scores, you can run:
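A sketch using the scored variant, again assuming the `vectorStore` instance from above (the query string is illustrative):

```typescript
// Each result is a [document, score] pair; higher scores indicate greater similarity.
const resultsWithScores = await vectorStore.similaritySearchWithScore(
  "What does the in-memory vector store do?",
  2
);

for (const [doc, score] of resultsWithScores) {
  console.log(`* [score=${score.toFixed(3)}] ${doc.pageContent}`);
}
```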
### Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains:
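A sketch of the retriever conversion, assuming the `vectorStore` instance from above (the query string is illustrative):

```typescript
// Wraps the vector store in the standard retriever interface.
const retriever = vectorStore.asRetriever({ k: 2 });

// Retrievers can be invoked directly or composed into chains.
const docs = await retriever.invoke("What does the in-memory vector store do?");
```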
### Maximal marginal relevance

This vector store also supports maximal marginal relevance (MMR), a technique that first fetches a larger number of results (given by `searchKwargs.fetchK`) with classic similarity search, then reranks for diversity and returns the top `k` results. This helps guard against redundant information:
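A sketch of an MMR retriever, assuming the `vectorStore` instance from above (the `fetchK`, `k`, and query values are illustrative):

```typescript
const mmrRetriever = vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 10, // fetch 10 candidates by plain similarity first
  },
  k: 2, // then rerank for diversity and return the top 2
});

const mmrDocs = await mmrRetriever.invoke(
  "What does the in-memory vector store do?"
);
```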
## Usage for retrieval-augmented generation
For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:

## API reference
For detailed documentation of all `MemoryVectorStore` features and configurations, head to the API reference.