Overview
A vector store stores embedded data and performs similarity search.
Interface
LangChain provides a unified interface for vector stores, allowing you to:
addDocuments - Add documents to the store.
delete - Remove stored documents by ID.
similaritySearch - Query for semantically similar documents.
This abstraction lets you switch between different implementations without altering your application logic.
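Because every implementation satisfies the same interface, helper code can be written once and reused across stores. A minimal sketch (indexAndQuery is an illustrative helper, not a LangChain API):
import type { VectorStore } from "@langchain/core/vectorstores";
import { Document } from "@langchain/core/documents";

// Works against any implementation: in-memory, Chroma, Pinecone, etc.
async function indexAndQuery(store: VectorStore, texts: string[], query: string) {
  await store.addDocuments(texts.map((t) => new Document({ pageContent: t })));
  return store.similaritySearch(query, 3);
}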
Initialization
Most vector stores in LangChain accept an embedding model as an argument at initialization.
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "@langchain/classic/vectorstores/memory";

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const vectorStore = new MemoryVectorStore(embeddings);
Adding documents
You can add documents to the vector store by using the addDocuments function.
import { Document } from "@langchain/core/documents";

const document = new Document({
  pageContent: "Hello world",
});

await vectorStore.addDocuments([document]);
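Documents can also carry metadata, and many stores let you pass explicit IDs when adding so entries can be updated or deleted later (support for the ids option varies by implementation):
const docWithMetadata = new Document({
  pageContent: "LangChain supports many vector stores.",
  metadata: { source: "tweets" },
});

// The second argument is optional; here we assign our own ID.
await vectorStore.addDocuments([docWithMetadata], { ids: ["doc-1"] });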
Deleting documents
You can delete documents from the vector store by using the delete function.
await vectorStore.delete({ ids: ["doc-1"] });

Deletion support and the accepted parameters vary by vector store; check the integration's documentation for details.
Similarity search
Issue a semantic query using similaritySearch, which returns the closest embedded documents:
const results = await vectorStore.similaritySearch("Hello world", 10);
Many vector stores support parameters like:
k — number of results to return
filter — conditional filtering based on metadata
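If you also need the raw scores, most stores expose similaritySearchWithScore, which returns [document, score] pairs:
// Score semantics depend on the store's distance metric,
// so higher is not always better.
const resultsWithScores = await vectorStore.similaritySearchWithScore("Hello world", 5);
for (const [doc, score] of resultsWithScores) {
  console.log(score, doc.pageContent);
}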
Similarity metrics & indexing
Embedding similarity may be computed using:
Cosine similarity
Euclidean distance
Dot product
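For reference, all three metrics are simple operations over raw vectors. A hand-rolled sketch (vector stores compute this natively; you never need to do it yourself):
const dot = (a: number[], b: number[]) =>
  a.reduce((sum, ai, i) => sum + ai * b[i], 0);

const euclidean = (a: number[], b: number[]) =>
  Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));

const cosine = (a: number[], b: number[]) =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));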
Efficient search often employs indexing methods such as HNSW (Hierarchical Navigable Small World), though specifics depend on the vector store.
Filtering by metadata (e.g., source, date) can refine search results:
const results = await vectorStore.similaritySearch("query", 2, { source: "tweets" });

The exact filter format is store-specific; check the integration's documentation for the supported syntax.
Top integrations
Select embedding model:
OpenAI
Install dependencies: npm i @langchain/openai
Add environment variables: OPENAI_API_KEY=your-api-key
Instantiate the model:
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
});
Azure OpenAI
Install dependencies: npm i @langchain/openai
Add environment variables:
AZURE_OPENAI_API_INSTANCE_NAME=<YOUR_INSTANCE_NAME>
AZURE_OPENAI_API_KEY=<YOUR_KEY>
AZURE_OPENAI_API_VERSION="2024-02-01"
Instantiate the model:
import { AzureOpenAIEmbeddings } from "@langchain/openai";

const embeddings = new AzureOpenAIEmbeddings({
  azureOpenAIApiEmbeddingsDeploymentName: "text-embedding-ada-002",
});
AWS Bedrock
Install dependencies: npm i @langchain/aws
Add environment variables: BEDROCK_AWS_REGION=your-region
Instantiate the model:
import { BedrockEmbeddings } from "@langchain/aws";

const embeddings = new BedrockEmbeddings({
  model: "amazon.titan-embed-text-v1",
});
Google Gemini
Install dependencies: npm i @langchain/google-genai
Add environment variables: GOOGLE_API_KEY=your-api-key
Instantiate the model:
import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";

const embeddings = new GoogleGenerativeAIEmbeddings({
  model: "text-embedding-004",
});
Google Vertex AI
Install dependencies: npm i @langchain/google-vertexai
Add environment variables: GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model:
import { VertexAIEmbeddings } from "@langchain/google-vertexai";

const embeddings = new VertexAIEmbeddings({
  model: "gemini-embedding-001",
});
MistralAI
Install dependencies: npm i @langchain/mistralai
Add environment variables: MISTRAL_API_KEY=your-api-key
Instantiate the model:
import { MistralAIEmbeddings } from "@langchain/mistralai";

const embeddings = new MistralAIEmbeddings({
  model: "mistral-embed",
});
Cohere
Install dependencies: npm i @langchain/cohere
Add environment variables: COHERE_API_KEY=your-api-key
Instantiate the model:
import { CohereEmbeddings } from "@langchain/cohere";

const embeddings = new CohereEmbeddings({
  model: "embed-english-v3.0",
});
Ollama
Install dependencies: npm i @langchain/ollama
Instantiate the model:
import { OllamaEmbeddings } from "@langchain/ollama";

const embeddings = new OllamaEmbeddings({
  model: "llama2",
  baseUrl: "http://localhost:11434", // Default value
});
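All of these embedding classes expose the same embedQuery and embedDocuments methods, so you can sanity-check a configuration directly:
// embedQuery returns a single vector (number[]);
// embedDocuments returns one vector per input string.
const vector = await embeddings.embedQuery("Hello world");
console.log(vector.length); // Dimensionality depends on the model.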
Select vector store:
Memory
import { MemoryVectorStore } from "@langchain/classic/vectorstores/memory";

const vectorStore = new MemoryVectorStore(embeddings);
Chroma
npm i @langchain/community
import { Chroma } from "@langchain/community/vectorstores/chroma";

const vectorStore = new Chroma(embeddings, {
  collectionName: "a-test-collection",
});
FAISS
npm i @langchain/community
import { FaissStore } from "@langchain/community/vectorstores/faiss";

const vectorStore = new FaissStore(embeddings, {});
MongoDB Atlas
npm i @langchain/mongodb
Manual embedding:
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI!);
const collection = client
  .db(process.env.MONGODB_ATLAS_DB_NAME)
  .collection(process.env.MONGODB_ATLAS_COLLECTION_NAME);

const vectorStore = new MongoDBAtlasVectorSearch(embeddings, {
  collection,
  indexName: "vector_index",
  textKey: "text",
  embeddingKey: "embedding",
});

Automated embedding:
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI!);
const collection = client
  .db(process.env.MONGODB_ATLAS_DB_NAME)
  .collection(process.env.MONGODB_ATLAS_COLLECTION_NAME);

const vectorStore = new MongoDBAtlasVectorSearch({ collection });
PGVector
npm i @langchain/community
import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";

// Pass your Postgres connection details (postgresConnectionOptions,
// tableName, etc.) in the config object.
const vectorStore = await PGVectorStore.initialize(embeddings, {});
Pinecone
npm i @langchain/pinecone
import { PineconeStore } from "@langchain/pinecone";
import { Pinecone as PineconeClient } from "@pinecone-database/pinecone";

const pinecone = new PineconeClient();
// Reads PINECONE_API_KEY from the environment; point at your index by name.
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = new PineconeStore(embeddings, {
  pineconeIndex,
  maxConcurrency: 5,
});
Redis
npm i @langchain/redis redis
import { RedisVectorStore } from "@langchain/redis";
import { createClient } from "redis";

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

const vectorStore = new RedisVectorStore(embeddings, {
  redisClient: client,
  indexName: "langchainjs-testing",
});
Qdrant
npm i @langchain/qdrant
import { QdrantVectorStore } from "@langchain/qdrant";

const vectorStore = await QdrantVectorStore.fromExistingCollection(embeddings, {
  url: process.env.QDRANT_URL,
  collectionName: "langchainjs-testing",
});
Oracle
npm i @oracle/langchain-oracledb @langchain/core
import oracledb from "oracledb";
import { OracleEmbeddings, OracleVS } from "@oracle/langchain-oracledb";

const connection = await oracledb.getConnection({
  user: process.env.ORACLE_USER,
  password: process.env.ORACLE_PASSWORD,
  connectionString: process.env.ORACLE_DSN,
});

const embeddings = new OracleEmbeddings(connection, {
  provider: "database",
  model: process.env.DEMO_ONNX_MODEL ?? "DEMO_MODEL",
});

const vectorStore = new OracleVS(embeddings, {
  client: connection,
  tableName: "DEMO_VECTORS",
  query: "Find support tickets mentioning service outages.",
  distanceStrategy: "DOT",
});

await vectorStore.initialize();
Weaviate
npm i @langchain/weaviate
import { WeaviateStore } from "@langchain/weaviate";

// weaviateClient is a configured Weaviate client instance.
const vectorStore = new WeaviateStore(embeddings, {
  client: weaviateClient,
  indexName: "Langchainjs_test",
});
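Whichever store you instantiate, the rest of your code stays the same. A minimal end-to-end sketch using the methods shown earlier:
import { Document } from "@langchain/core/documents";

await vectorStore.addDocuments([
  new Document({ pageContent: "LangChain provides a unified vector store interface." }),
  new Document({ pageContent: "Embeddings map text to vectors." }),
]);

const hits = await vectorStore.similaritySearch("What is a vector store?", 1);
console.log(hits[0].pageContent);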
LangChain.js integrates with a variety of vector stores. You can check out a full list below:
All vector stores
Azure Cosmos DB for NoSQL
Google Cloud SQL for PostgreSQL
Google Vertex AI Matching Engine
SAP HANA Cloud Vector Engine
Momento Vector Index (MVI)