This guide explains how to add semantic search to your deployment's cross-thread store, so that your agent can search for memories and other documents by semantic similarity.
Prerequisites
- A deployment (refer to the guide on how to set up an application for deployment, which also covers hosting options).
- API keys for your embedding provider (in this case, OpenAI).
- `langchain >= 0.3.8` (if you specify embeddings using the string format below).
Steps
- Update your `langgraph.json` configuration file to include the store configuration:
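For example (a minimal sketch: the `dependencies` and `graphs` entries are placeholders for your own application, and the `store.index` block is the part this guide adds):

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "store": {
    "index": {
      "embed": "openai:text-embedding-3-small",
      "dims": 1536,
      "fields": ["$"]
    }
  }
}
```

This configuration does the following: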
- Uses OpenAI’s text-embedding-3-small model for generating embeddings
- Sets the embedding dimension to 1536 (matching the model’s output)
- Indexes all fields in your stored data (`["$"]` means index everything, or specify specific fields like `["text", "metadata.title"]`)
Each deployment supports a single embedding model. Configuring multiple embedding models is not supported, as it would cause ambiguity in `/store` endpoints and result in mixed-index issues.
- To use the string embedding format above, make sure your dependencies include `langchain >= 0.3.8`:
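For example, pin it in `requirements.txt` (or the equivalent constraint wherever you declare dependencies):

```text
langchain>=0.3.8
```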
Usage
Once configured, you can use semantic search in your nodes. The store requires a namespace tuple to organize memories. Search results are returned as `SearchItem` objects (which extend `Item` with an additional `score` field); when semantic search is configured, `score` contains the similarity score:
Changing your embedding model
Custom embeddings
If you want to use custom embeddings, you can pass a path to a custom embedding function:

Querying via the API

You can also query the store using the LangGraph SDK. Since the SDK uses async operations, use `await` with store calls:

Search results include the `score` field when semantic search is configured:

