
Overview

Memory is a system that retains information about previous interactions. For AI agents, memory is crucial: it lets them learn from feedback, adapt to user preferences, and carry context across interactions. As agents tackle more complex tasks with numerous user interactions, this capability becomes essential for both efficiency and user satisfaction. Short-term memory lets your application remember previous interactions within a single thread or conversation.
A thread organizes multiple interactions in a session, similar to the way email groups messages in a single conversation.
Conversation history is the most common form of short-term memory. Long conversations pose a challenge to today’s LLMs: a full history may not fit inside an LLM’s context window, resulting in context loss or errors. Even if your model supports the full context length, most LLMs still perform poorly over long contexts. They get “distracted” by stale or off-topic content, all while suffering from slower response times and higher costs. Chat models accept context using messages, which include instructions (a system message) and inputs (human messages). In chat applications, messages alternate between human inputs and model responses, resulting in a list of messages that grows longer over time. Because context windows are limited, many applications can benefit from using techniques to remove or “forget” stale information.
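For example, a short exchange produces a message list like this (a minimal illustration):
const messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "hi! i am Bob" },
    { role: "assistant", content: "Hi Bob! How can I help you today?" },
];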

Usage

To add short-term memory (thread-level persistence) to an agent, you need to specify a checkpointer when creating an agent.
LangChain’s agent manages short-term memory as part of your agent’s state. By storing these in the graph’s state, the agent can access the full context for a given conversation while maintaining separation between different threads. State is persisted to a database (or memory) using a checkpointer so the thread can be resumed at any time. Short-term memory updates when the agent is invoked or a step (like a tool call) is completed, and the state is read at the start of each step.
import { createAgent, MemorySaver } from "langchain";

const checkpointer = new MemorySaver();

const agent = createAgent({
    model: "anthropic:claude-3-7-sonnet-latest",
    tools: [],
    checkpointer,
});

await agent.invoke(
    { messages: [{ role: "user", content: "hi! i am Bob" }] },
    { configurable: { thread_id: "1" } }
);
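Because the conversation is checkpointed under thread_id "1", a follow-up invocation on the same thread sees the earlier messages:

const result = await agent.invoke(
    { messages: [{ role: "user", content: "what's my name?" }] },
    { configurable: { thread_id: "1" } }
);
// The model can answer "Bob" because the checkpointer restored the history.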

In production

In production, use a checkpointer backed by a database:
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

const DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable";
const checkpointer = PostgresSaver.fromConnString(DB_URI);
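The Postgres checkpointer needs its tables created before first use; PostgresSaver exposes a setup() method for this:

// Create the required tables (call once at startup)
await checkpointer.setup();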

Customizing agent memory

By default, agents use AgentState to manage short-term memory, specifically the conversation history via a messages key. You can provide a custom state schema that adds extra fields alongside messages. This custom state can then be accessed via tools and dynamic prompt / model functions.
import { z } from "zod";
import { createAgent, MemorySaver } from "langchain";
import { MessagesZodState } from "@langchain/langgraph";

const stateSchema = z.object({
    // keep the built-in messages key (with its reducer)...
    messages: MessagesZodState.shape.messages,
    // ...and add a custom field to the agent's state
    userName: z.string().optional(),
});

const checkpointer = new MemorySaver();
const agent = createAgent({
    model: "openai:gpt-5",
    tools: [],
    stateSchema,
    checkpointer,
});
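The extra field can be supplied as part of the input and is persisted with the rest of the state (a minimal sketch using the illustrative userName field defined above):

await agent.invoke(
    {
        messages: [{ role: "user", content: "hi!" }],
        userName: "Bob",
    },
    { configurable: { thread_id: "1" } }
);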

Common patterns

With short-term memory enabled, long conversations can exceed the LLM’s context window. Common solutions are to trim, delete, or summarize messages. This allows the agent to keep track of the conversation without exceeding the LLM’s context window.

Trim messages

Most LLMs have a maximum supported context window (denominated in tokens). One way to decide when to truncate messages is to count the tokens in the message history and truncate whenever it approaches that limit. If you’re using LangChain, you can use the trimMessages utility and specify the number of tokens to keep from the list, as well as the strategy (e.g., keep the last maxTokens) to use for handling the boundary. To trim message history in an agent, pass a preModelHook that calls the trimMessages function:
import {
    createAgent,
    trimMessages,
    MemorySaver,
    type AgentState,
} from "langchain";

// This function is called every time before the model is invoked
const stateModifier = async (state: AgentState) => {
    return {
        messages: await trimMessages(state.messages, {
            strategy: "last",
            maxTokens: 384,
            startOn: "human",
            endOn: ["human", "tool"],
            // here each message counts as one "token"; swap in a real
            // token counter for production use
            tokenCounter: (msgs) => msgs.length,
        }),
    };
};

const checkpointer = new MemorySaver();
const agent = createAgent({
    model: "openai:gpt-5",
    tools: [],
    preModelHook: stateModifier,
    checkpointer,
});

Delete messages

You can delete messages from the graph state to manage the message history. This is useful when you want to remove specific messages or clear the entire message history. To delete messages from the graph state, use the RemoveMessage class. For RemoveMessage to work, the messages state key must use the messagesStateReducer reducer, as MessagesZodState does. To remove specific messages:
import { RemoveMessage } from "@langchain/core/messages";

const deleteMessages = (state) => {
    const messages = state.messages;
    if (messages.length > 2) {
        // remove the earliest two messages
        return {
            messages: messages
                .slice(0, 2)
                .map((m) => new RemoveMessage({ id: m.id })),
        };
    }
    return {};
};
When deleting messages, make sure that the resulting message history is valid. Check the limitations of the LLM provider you’re using. For example:
  • Some providers expect message history to start with a user message.
  • Most providers require assistant messages with tool calls to be followed by corresponding tool result messages.
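The complete example below applies this pattern with a postModelHook, removing the two earliest messages once the history grows, across two turns on the same thread: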
import { RemoveMessage } from "@langchain/core/messages";
import { createAgent, MemorySaver, type AgentState } from "langchain";

const deleteMessages = (state: AgentState) => {
    const messages = state.messages;
    if (messages.length > 2) {
        // remove the earliest two messages
        return {
            messages: messages
                .slice(0, 2)
                .map((m) => new RemoveMessage({ id: m.id! })),
        };
    }
    return {};
};

const agent = createAgent({
    model: "openai:gpt-5-nano",
    tools: [],
    prompt: "Please be concise and to the point.",
    postModelHook: deleteMessages,
    checkpointer: new MemorySaver(),
});

const config = { configurable: { thread_id: "1" } };

const streamA = await agent.stream(
    { messages: [{ role: "user", content: "hi! I'm bob" }] },
    { ...config, streamMode: "values" }
);
for await (const event of streamA) {
    const messageDetails = event.messages.map((message) => [
        message.getType(),
        message.content,
    ]);
    console.log(messageDetails);
}

const streamB = await agent.stream(
    {
        messages: [{ role: "user", content: "what's my name?" }],
    },
    { ...config, streamMode: "values" }
);
for await (const event of streamB) {
    const messageDetails = event.messages.map((message) => [
        message.getType(),
        message.content,
    ]);
    console.log(messageDetails);
}
[['human', "hi! I'm bob"]]
[['human', "hi! I'm bob"], ['ai', 'Hi Bob! How are you doing today? Is there anything I can help you with?']]
[['human', "hi! I'm bob"], ['ai', 'Hi Bob! How are you doing today? Is there anything I can help you with?'], ['human', "what's my name?"]]
[['human', "hi! I'm bob"], ['ai', 'Hi Bob! How are you doing today? Is there anything I can help you with?'], ['human', "what's my name?"], ['ai', 'Your name is Bob.']]
[['human', "what's my name?"], ['ai', 'Your name is Bob.']]

Summarize messages

The problem with trimming or removing messages, as shown above, is that culling the message list may lose information. Because of this, some applications benefit from a more sophisticated approach: summarizing the message history with a chat model. The sketch below takes this approach with a preModelHook: once the history exceeds a threshold, it asks a separate chat model to summarize the older messages, clears the stored history, and re-adds a single summary message followed by the most recent turns. The thresholds and the summarizer model are illustrative assumptions:
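import { ChatOpenAI } from "@langchain/openai";
import { RemoveMessage } from "@langchain/core/messages";
import { REMOVE_ALL_MESSAGES } from "@langchain/langgraph";
import { createAgent, MemorySaver, type AgentState } from "langchain";

// Illustrative thresholds: summarize once the history exceeds MAX_MESSAGES,
// keeping the last KEEP_MESSAGES messages verbatim.
const MAX_MESSAGES = 20;
const KEEP_MESSAGES = 4;

// Any chat model works here; the specific model is an arbitrary choice.
const summarizer = new ChatOpenAI({ model: "gpt-4o-mini" });

const summarizeMessages = async (state: AgentState) => {
    const messages = state.messages;
    if (messages.length <= MAX_MESSAGES) {
        return {};
    }
    const toSummarize = messages.slice(0, -KEEP_MESSAGES);
    const summary = await summarizer.invoke([
        ...toSummarize,
        {
            role: "user",
            content: "Summarize the conversation above in a few sentences.",
        },
    ]);
    return {
        messages: [
            // clear the stored history...
            new RemoveMessage({ id: REMOVE_ALL_MESSAGES }),
            // ...then re-add a single summary message followed by the recent turns
            {
                role: "user",
                content: `Summary of the conversation so far: ${summary.content}`,
            },
            ...messages.slice(-KEEP_MESSAGES),
        ],
    };
};

const agent = createAgent({
    model: "openai:gpt-5-nano",
    tools: [],
    preModelHook: summarizeMessages,
    checkpointer: new MemorySaver(),
});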

Access

You can access the short-term memory of an agent in a few different ways:

Tools

Read short-term memory in a tool

Tools can read information that isn’t supplied by the model, such as the agent’s runtime context, through the config argument passed as the tool’s second parameter. Because these values aren’t part of the tool’s input schema, the model never sees them, but the tool can access them at execution time.
import { z } from "zod";
import { createAgent, tool } from "langchain";

const contextSchema = z.object({
    userId: z.string(),
});

const getUserInfo = tool(
    async (_, config) => {
        // read the runtime context; the model never sees this value
        const userId = config.context?.userId;
        return userId === "user_123" ? "User is John Smith." : "Unknown user.";
    },
    {
        name: "get_user_info",
        description: "Get user info",
        schema: z.object({}),
    }
);

const agent = createAgent({
    model: "openai:gpt-5-nano",
    tools: [getUserInfo],
    contextSchema,
});

const result = await agent.invoke(
    {
        messages: [{ role: "user", content: "what's my name?" }],
    },
    {
        context: {
        userId: "user_123",
        },
    }
);

console.log(result.messages.at(-1)?.content);
// Outputs: "User is John Smith."

Write short-term memory from tools

To modify the agent’s short-term memory (state) during execution, return a Command carrying a state update directly from the tool. This is useful for persisting intermediate results or making information accessible to subsequent tools or prompts.
import { z } from "zod";
import { tool, createAgent } from "langchain";
import { MessagesZodState, Command, getCurrentTaskInput } from "@langchain/langgraph";

const CustomState = z.object({
    messages: MessagesZodState.shape.messages,
    userName: z.string().optional(),
});

const updateUserInfo = tool(
    async (_, config) => {
        const userId = config.context?.userId;
        const name = userId === "user_123" ? "John Smith" : "Unknown user";
        return new Command({
        update: {
            userName: name,
            // update the message history
            messages: [
            {
                role: "tool",
                content: "Successfully looked up user information",
                tool_call_id: config.toolCall?.id,
            },
            ],
        },
        });
    },
    {
        name: "update_user_info",
        description: "Look up and update user info.",
        schema: z.object({}),
    }
);

const greet = tool(
    async (_, config) => {
        // read the userName written to state by update_user_info; this assumes
        // getCurrentTaskInput returns the state passed to the current tool node
        const state = getCurrentTaskInput(config) as z.infer<typeof CustomState>;
        return `Hello ${state.userName}!`;
    },
    {
        name: "greet",
        description: "Use this to greet the user once you have found their info.",
        schema: z.object({}),
    }
);

const agent = createAgent({
model: "openai:gpt-5-nano",
    tools: [updateUserInfo, greet],
    stateSchema: CustomState,
});

const result = await agent.invoke(
    { messages: [{ role: "user", content: "greet the user" }] },
    { context: { userId: "user_123" } }
);
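Because the Command update is merged into the agent’s state, the final state returned by invoke includes the new field:

console.log(result.userName); // "John Smith"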

Prompt

Access short-term memory (state) in a dynamic prompt function by accepting the agent’s state as the function’s first argument; runtime context is available on the config argument.
import { z } from "zod";
import { createAgent, tool, SystemMessage } from "langchain";

const contextSchema = z.object({
    userName: z.string(),
});

const getWeather = tool(
    async ({ city }) => {
        return `The weather in ${city} is always sunny!`;
    },
    {
        name: "get_weather",
        description: "Get the weather for a given city",
        schema: z.object({
            city: z.string(),
        }),
    }
);

const agent = createAgent({
    model: "openai:gpt-5-nano",
    tools: [getWeather],
    contextSchema,
    prompt: (state, config) => {
        return [
        new SystemMessage(
            `You are a helpful assistant. Address the user as ${config.context?.userName}.`
        ),
        ...state.messages,
        ];
    },
});

const result = await agent.invoke(
    {
        messages: [{ role: "user", content: "What is the weather in SF?" }],
    },
    {
        context: {
        userName: "John Smith",
        },
    }
);

for (const message of result.messages) {
    console.log(message);
}
/**
 * HumanMessage {
 *   "content": "What is the weather in SF?",
 *   // ...
 * }
 * AIMessage {
 *   // ...
 *   "tool_calls": [
 *     {
 *       "name": "get_weather",
 *       "args": {
 *         "city": "San Francisco"
 *       },
 *       "type": "tool_call",
 *       "id": "call_tCidbv0apTpQpEWb3O2zQ4Yx"
 *     }
 *   ],
 *   // ...
 * }
 * ToolMessage {
 *   "content": "The weather in San Francisco is always sunny!",
 *   "tool_call_id": "call_tCidbv0apTpQpEWb3O2zQ4Yx"
 *   // ...
 * }
 * AIMessage {
 *   "content": "John Smith, here’s the latest: The weather in San Francisco is always sunny!\n\nIf you’d like more details (temperature, wind, humidity) or a forecast for the next few days, I can pull that up. What would you like?",
 *   // ...
 * }
 */

Pre model hook

Access short-term memory (state) in a pre-model hook by injecting the agent’s state into the hook signature.
import {
    createAgent,
    type AgentState,
    trimMessages,
    type BaseMessage,
} from "langchain";

const preModelHook = async (state: AgentState) => {
    return {
        messages: await trimMessages(state.messages, {
            maxTokens: 384,
            strategy: "last",
            startOn: "human",
            endOn: ["human", "tool"],
            tokenCounter: (msgs: BaseMessage[]) => msgs.length,
        }),
    };
};

const agent = createAgent({
    model: "openai:gpt-5-nano",
    tools: [],
    preModelHook,
});

const result = await agent.invoke(
    {
        messages: [{ role: "user", content: "hi, my name is bob" }],
    },
    {
configurable: { thread_id: "1" },
    }
);
console.log(result.messages.at(-1)?.content);

Post model hook

Access short-term memory (state) in a post-model hook by injecting the agent’s state into the hook signature.
import { RemoveMessage } from "@langchain/core/messages";
import { REMOVE_ALL_MESSAGES } from "@langchain/langgraph";
import { createAgent, type AgentState } from "langchain";

function validateResponse(state: AgentState) {
    const lastMessage = state.messages.at(-1)?.content;
    if (
        typeof lastMessage === "string" &&
        lastMessage.toLowerCase().includes("confidential")
    ) {
        return {
            // clear the history, then re-add everything except the offending response
            messages: [
                new RemoveMessage({ id: REMOVE_ALL_MESSAGES }),
                ...state.messages.slice(0, -1),
            ],
        };
    }
    return {};
}

const agent = createAgent({
    model: "openai:gpt-5",
    tools: [],
    postModelHook: validateResponse,
});
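Here REMOVE_ALL_MESSAGES clears the stored history before the remaining messages are re-added, so the flagged response never persists in the thread’s state.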