
Overview

LangChain agents use LangGraph persistence to enable long-term memory. This is a more advanced topic that requires familiarity with LangGraph.

Memory storage

LangGraph stores long-term memories as JSON documents in a store. Each memory is organized under a custom namespace (similar to a folder) and a distinct key (like a file name). Namespaces often include user or org IDs or other labels that make it easier to organize information. This structure enables hierarchical organization of memories, and cross-namespace searching is supported through content filters.
import { InMemoryStore } from "@langchain/langgraph";

const embed = (texts: string[]): number[][] => {
    // Replace with an actual embedding function or LangChain embeddings object
    return texts.map(() => [1.0, 2.0]);
};

// InMemoryStore saves data to an in-memory dictionary. Use a database-backed store in production.
const store = new InMemoryStore({ index: { embed, dims: 2 } });
const userId = "my-user";
const applicationContext = "chitchat";
const namespace = [userId, applicationContext];

await store.put(
    namespace,
    "a-memory",
    {
        rules: [
            "User likes short, direct language",
            "User only speaks English & TypeScript",
        ],
        "my-key": "my-value",
    }
);

// get the "memory" by ID
const item = await store.get(namespace, "a-memory");

// search for "memories" within this namespace, filtering on content equivalence, sorted by vector similarity
const items = await store.search(
    namespace,
    {
        filter: { "my-key": "my-value" },
        query: "language preferences"
    }
);
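Because namespaces are hierarchical, searches can also run against a namespace prefix to cover every namespace beneath it. As a minimal sketch (reusing the store and userId from above; the result variable name allUserItems is illustrative), searching under the user's prefix returns memories from all of that user's application contexts:

// Search every namespace under ["my-user"], i.e. all application
// contexts for this user, ranked by similarity to the query
const allUserItems = await store.search(
    [userId],
    { query: "language preferences" }
);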
For more information about the memory store, see the Persistence guide.

Read long-term memory in tools

A tool the agent can use to look up user information
import { z } from "zod";
import { createAgent, tool } from "langchain";
import { InMemoryStore, type Runtime } from "@langchain/langgraph";

const store = new InMemoryStore();
const contextSchema = z.object({
    userId: z.string(),
});

await store.put(
    ["users"],
    "user_123",
    {
        name: "John Smith",
        language: "English",
    }
);

const getUserInfo = tool(
  async (_, runtime: Runtime<z.infer<typeof contextSchema>>) => {
    // Same context as that provided to `createAgent`
    const userId = runtime.context?.userId;
    if (!userId) {
      throw new Error("userId is required");
    }
    // Read the memory for this user from the store by namespace and key
    const userInfo = await runtime.store?.get(["users"], userId);
    return userInfo ? JSON.stringify(userInfo.value) : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);

const agent = createAgent({
    model: "openai:gpt-4o-mini",
    tools: [getUserInfo],
    contextSchema,
    store,
});

// Run the agent
const result = await agent.invoke(
    { messages: [{ role: "user", content: "look up user information" }] },
    { context: { userId: "user_123" } }
);

console.log(result.messages.at(-1)?.content);
/**
 * Outputs:
 * User Information:
 * - Name: John Smith
 * - Language: English
 */
  1. The InMemoryStore keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Review the store documentation for more options. If you're deploying with LangGraph Platform, the platform provides a production-ready store for you.
  2. For this example, we write some sample data to the store using the put method. Please see the BaseStore.put API reference for more details.
  3. The first argument is the namespace. This is used to group related data together. In this case, we are using the users namespace to group user data.
  4. A key within the namespace. This example uses a user ID for the key.
  5. The data that we want to store for the given user.
  6. The store is accessible through the tool's runtime. You can read from it anywhere in your code where the runtime is available, including tools and prompts. This is the same store that was passed to the agent when it was created.
  7. The get method retrieves data from the store. The first argument is the namespace, and the second argument is the key. It returns an item containing the stored value and metadata about it.
  8. The store is passed to the agent when it is created. This enables the agent to access the store when running tools. You can also access the store directly from anywhere else in your code.

Write long-term memory from tools

Example of a tool that updates user information
import { z } from "zod";
import { tool, createAgent } from "langchain";
import { InMemoryStore, type Runtime } from "@langchain/langgraph";

const store = new InMemoryStore();

const contextSchema = z.object({
    userId: z.string(),
});

const UserInfo = z.object({
    name: z.string(),
});

const saveUserInfo = tool(
  async (userInfo, runtime: Runtime<z.infer<typeof contextSchema>>) => {
    const userId = runtime.context?.userId;
    if (!userId) {
      throw new Error("userId is required");
    }
    // Write the memory to the store under the "users" namespace, keyed by user ID
    await runtime.store?.put(["users"], userId, userInfo);
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: UserInfo,
  }
);

const agent = createAgent({
    model: "openai:gpt-4o-mini",
    tools: [saveUserInfo],
    contextSchema,
    store,
});

// Run the agent
await agent.invoke(
    { messages: [{ role: "user", content: "My name is John Smith" }] },
    { context: { userId: "user_123" } }
);

// You can access the store directly to get the value
const result = await store.get(["users"], "user_123");
console.log(result?.value); // Output: { name: "John Smith" }
  1. The InMemoryStore keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Review the store documentation for more options. If you're deploying with LangGraph Platform, the platform provides a production-ready store for you.
  2. The UserInfo schema defines the structure of the user information. The LLM will use this to format the response according to the schema.
  3. The saveUserInfo function is a tool that allows an agent to update user information. This could be useful for a chat application where the user wants to update their profile information.
  4. The store is accessible through the tool's runtime. You can read from and write to it anywhere in your code where the runtime is available, including tools and prompts. This is the same store that was passed to the agent when it was created.
  5. The put method stores data in the store. The first argument is the namespace, the second argument is the key, and the third is the value to store: in this case, the user information.
  6. The userId is passed in the context. This is used to identify the user whose information is being updated.