Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. This makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.

This documentation demonstrates how to use Qdrant with LangChain for dense (i.e., embedding-based), sparse (i.e., text search) and hybrid retrieval. The QdrantVectorStore class supports multiple retrieval modes via Qdrant's new Query API. It requires you to run Qdrant v1.10.0 or above.
Setup
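Whichever mode you choose, the LangChain integration is provided by the langchain-qdrant package. A typical notebook install step might look like the following (the flags just make it quiet and upgrade an existing install):

```python
# Install the Qdrant integration for LangChain; it pulls in qdrant-client as a dependency.
%pip install -qU langchain-qdrant
```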
There are various ways to run Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:
- Local mode, no server required
- Docker deployments
- Qdrant Cloud
Credentials
There are no credentials needed to run the code in this notebook. If you want best-in-class automated tracing of your model calls, you can also set your LangSmith API key in the LANGSMITH_API_KEY environment variable.

Initialization
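The snippets below assume a dense embedding model is already available. As a sketch, an OpenAI model is used here; any LangChain Embeddings implementation works, and the model name is only an example:

```python
# Any LangChain embeddings implementation can be used here; OpenAI is just an example.
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
```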
Local mode
The Python client provides the option to run the code in local mode without running the Qdrant server. This is great for testing things out and debugging, or for storing just a small number of vectors. The embeddings can be kept fully in memory or persisted on disk.

In-memory
For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets removed when the client is destroyed, usually at the end of your script or notebook.
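A minimal sketch of an in-memory setup, reusing the embeddings object from the Initialization step; the collection name and vector size (3072 matches text-embedding-3-large) are examples:

```python
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams

# ":memory:" keeps everything in RAM; the data disappears when the client is gone.
client = QdrantClient(":memory:")

# The collection has to exist before the store can wrap it.
client.create_collection(
    collection_name="demo_collection",
    vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
)

vector_store = QdrantVectorStore(
    client=client,
    collection_name="demo_collection",
    embedding=embeddings,
)
```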
On-disk storage

Local mode, without using the Qdrant server, can also store your vectors on disk so they persist between runs.
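Persisting to disk is a matter of passing a path instead of ":memory:"; the path below is a hypothetical example:

```python
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams

# Data written under this path survives between runs of the script/notebook.
client = QdrantClient(path="/tmp/langchain_qdrant")

client.create_collection(
    collection_name="demo_collection",
    vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
)

vector_store = QdrantVectorStore(
    client=client,
    collection_name="demo_collection",
    embedding=embeddings,
)
```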
On-premise server deployment

No matter if you choose to launch Qdrant locally with a Docker container or select a Kubernetes deployment with the official Helm chart, the way you connect to such an instance will be identical: you need to provide a URL pointing to the service.
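A sketch of connecting to such a server and creating a collection from documents in one call; the URL, the docs list, and the collection name are illustrative, and embeddings comes from the Initialization step:

```python
from langchain_core.documents import Document
from langchain_qdrant import QdrantVectorStore

docs = [
    Document(page_content="Qdrant is a vector similarity search engine.", metadata={"source": "news"}),
]

# URL of a Qdrant instance started e.g. with the official Docker image on port 6333.
vector_store = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    url="http://localhost:6333",
    prefer_grpc=True,
    collection_name="my_documents",
)
```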
Qdrant Cloud

If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully managed Qdrant cluster on Qdrant Cloud. A free-forever 1 GB cluster is included for trying it out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly. The value can also be set in a QDRANT_API_KEY environment variable.
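A sketch of the cloud connection; the cluster URL and API key are placeholders, and docs/embeddings are the same objects as in the previous example:

```python
from langchain_qdrant import QdrantVectorStore

vector_store = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    url="https://xxxxxxx.cloud.qdrant.io:6333",  # placeholder cluster URL
    api_key="<your-qdrant-api-key>",             # or set QDRANT_API_KEY instead
    prefer_grpc=True,
    collection_name="my_documents",
)
```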
Using an existing collection
To get an instance of QdrantVectorStore without loading any new documents or texts, you can use the QdrantVectorStore.from_existing_collection() method.
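A sketch, assuming a collection named my_documents already exists on a locally running server:

```python
from langchain_qdrant import QdrantVectorStore

# Wraps an existing collection without adding any new documents.
vector_store = QdrantVectorStore.from_existing_collection(
    embedding=embeddings,
    collection_name="my_documents",
    url="http://localhost:6333",
)
```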
Manage vector store
Once you have created your vector store, you can interact with it by adding and deleting different items.

Add items to vector store
You can add items to the vector store by using the add_documents function.
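A sketch of adding a couple of documents with explicit IDs; the texts and metadata are made up:

```python
from uuid import uuid4

from langchain_core.documents import Document

documents = [
    Document(page_content="LangChain is a framework for building LLM applications.", metadata={"source": "tweet"}),
    Document(page_content="Qdrant is a vector similarity search engine.", metadata={"source": "news"}),
]

# IDs are optional; if omitted, they are generated automatically.
uuids = [str(uuid4()) for _ in range(len(documents))]

vector_store.add_documents(documents=documents, ids=uuids)
```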
Delete items from vector store
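Deletion works by point ID; a sketch reusing the uuids list from the previous cell:

```python
# Remove the last document that was added above.
vector_store.delete(ids=[uuids[-1]])
```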
Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while running your chain or agent.

Query directly
The simplest scenario for using the Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded into vector embeddings and used to find similar documents in a Qdrant collection. QdrantVectorStore supports three modes for similarity searches, configured via the retrieval_mode parameter:
- Dense Vector Search (default)
- Sparse Vector Search
- Hybrid Search
Dense Vector Search
Dense vector search involves calculating similarity via vector-based embeddings. To search with only dense vectors (a sketch follows the list):

- The retrieval_mode parameter should be set to RetrievalMode.DENSE. This is the default behavior.
- A dense embeddings value should be provided to the embedding parameter.
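A minimal sketch with those settings, reusing docs and embeddings from the earlier cells; the in-memory location and the query string are examples:

```python
from langchain_qdrant import QdrantVectorStore, RetrievalMode

qdrant = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    location=":memory:",
    collection_name="dense_collection",
    retrieval_mode=RetrievalMode.DENSE,
)

results = qdrant.similarity_search("What is Qdrant?", k=2)
```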
Sparse Vector Search
To search with only sparse vectors:

- The retrieval_mode parameter should be set to RetrievalMode.SPARSE.
- An implementation of the SparseEmbeddings interface using any sparse embeddings provider has to be provided as a value to the sparse_embedding parameter.
The langchain-qdrant package provides a FastEmbed-based implementation out of the box. To use it, install the FastEmbed package; a sketch follows.
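The model name below follows FastEmbed's naming for its BM25 sparse model, and docs is the list from the earlier cells:

```python
# Requires: %pip install -qU fastembed
from langchain_qdrant import FastEmbedSparse, QdrantVectorStore, RetrievalMode

sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")

qdrant = QdrantVectorStore.from_documents(
    docs,
    sparse_embedding=sparse_embeddings,
    location=":memory:",
    collection_name="sparse_collection",
    retrieval_mode=RetrievalMode.SPARSE,
)

results = qdrant.similarity_search("What is Qdrant?", k=2)
```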
Hybrid Vector Search
To perform a hybrid search using dense and sparse vectors with score fusion (a sketch follows the list):

- The retrieval_mode parameter should be set to RetrievalMode.HYBRID.
- A dense embeddings value should be provided to the embedding parameter.
- An implementation of the SparseEmbeddings interface using any sparse embeddings provider has to be provided as a value to the sparse_embedding parameter.
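A sketch combining both embedding types, reusing docs and embeddings from the cells above; names and locations are illustrative:

```python
from langchain_qdrant import FastEmbedSparse, QdrantVectorStore, RetrievalMode

sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")

qdrant = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    sparse_embedding=sparse_embeddings,
    location=":memory:",
    collection_name="hybrid_collection",
    retrieval_mode=RetrievalMode.HYBRID,
)

results = qdrant.similarity_search("What is Qdrant?", k=2)
```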
Note that if you have added documents with the HYBRID mode, you can switch to any retrieval mode when searching, since both the dense and sparse vectors are available in the collection.
For more details on the search options supported by QdrantVectorStore, read the API reference.
Metadata filtering
Qdrant has an extensive filtering system with rich type support. It is also possible to use the filters in LangChain, by passing an additional parameter to both the similarity_search_with_score and similarity_search methods.
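A sketch filtering on the metadata.source value written by the add_documents example above; the key, value, and query text are illustrative:

```python
from qdrant_client import models

results = vector_store.similarity_search(
    query="What is a vector similarity search engine?",
    k=2,
    filter=models.Filter(
        must=[
            models.FieldCondition(
                key="metadata.source",
                match=models.MatchValue(value="news"),
            )
        ]
    ),
)
```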
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains.
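A sketch of the retriever wrapper; the search type and k value are example settings:

```python
# "mmr" = maximal marginal relevance; any supported search type can be used.
retriever = vector_store.as_retriever(search_type="mmr", search_kwargs={"k": 1})
retriever.invoke("What is Qdrant?")
```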
Usage for retrieval-augmented generation

For guides on how to use this vector store for retrieval-augmented generation (RAG), see the RAG tutorials and how-to guides in the LangChain documentation.

Customizing Qdrant
There are options to use an existing Qdrant collection within your LangChain application. In such cases, you may need to define how to map a Qdrant point into a LangChain Document.
Named vectors
Qdrant supports multiple vectors per point through named vectors. If you work with a collection created externally, or want to use a differently named vector, you can configure it by providing its name.
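A sketch where the vector names are hypothetical and must match the ones defined in the collection; docs, embeddings, and sparse_embeddings are the objects from the cells above:

```python
from langchain_qdrant import QdrantVectorStore, RetrievalMode

vector_store = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    sparse_embedding=sparse_embeddings,
    location=":memory:",
    collection_name="my_documents",
    retrieval_mode=RetrievalMode.HYBRID,
    vector_name="custom_vector",
    sparse_vector_name="custom_sparse_vector",
)
```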
Metadata

Qdrant stores your vector embeddings along with an optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well. By default, your document is going to be stored in the following payload structure:
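A sketch of that default structure; the content and metadata values come from your own documents:

```python
{
    "page_content": "Lorem ipsum dolor sit amet",
    "metadata": {
        "foo": "bar",
    },
}
```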
API reference

For detailed documentation of all QdrantVectorStore features and configurations, head to the API reference: python.langchain.com/api_reference/qdrant/qdrant/langchain_qdrant.qdrant.QdrantVectorStore.html