You are viewing the v1 docs for LangChain, which is currently under active development.

Overview

LangChain’s createAgent runs on LangGraph’s runtime under the hood. LangGraph exposes a Runtime object with the following information:
  1. Context: static information such as user IDs, database connections, or other dependencies for an agent invocation
  2. Store: a BaseStore instance used for long-term memory
  3. Stream writer: an object used for streaming information via the "custom" stream mode
You can access the runtime information within tools, the prompt, and pre- and post-model hooks.
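Conceptually, the Runtime object carries these three fields. The interface below is a simplified sketch for illustration only, not the actual type exported by @langchain/langgraph (field names here are assumptions; check the real Runtime type for exact property names):

```typescript
// Simplified sketch of the runtime shape described above -- illustration only,
// not the real Runtime type from @langchain/langgraph.
interface RuntimeSketch<ContextT> {
  // Static, per-invocation dependencies (user IDs, DB connections, ...)
  context?: ContextT;
  // Long-term memory backend (a BaseStore instance in the real type)
  store?: { get: (namespace: string[], key: string) => Promise<unknown> };
  // Emits chunks on the "custom" stream mode
  streamWriter?: (chunk: unknown) => void;
}

// A mock runtime for a context of { userName: string }
const mockRuntime: RuntimeSketch<{ userName: string }> = {
  context: { userName: "John Smith" },
  streamWriter: (chunk) => console.log("custom stream:", chunk),
};

mockRuntime.streamWriter?.({ status: "starting" });
```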

Access

When creating an agent with createAgent, you can specify a contextSchema to define the structure of the context stored in the agent runtime. When invoking the agent, pass the context argument with the relevant configuration for the run:
import { z } from "zod";
import { createAgent } from "langchain";

const contextSchema = z.object({ 
  userName: z.string(), 
}); 

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [
    /* ... */
  ],
  contextSchema, 
});

const result = await agent.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  { context: { userName: "John Smith" } } 
);

Inside tools

You can access the runtime information inside tools to:
  • Access the context
  • Read or write long term memory
  • Write to the custom stream (e.g., tool progress updates)
Use the runtime parameter to access the Runtime object inside a tool.
import { z } from "zod";
import { tool } from "langchain";
import { type Runtime } from "@langchain/langgraph";

const contextSchema = z.object({
  userName: z.string(),
});

const fetchUserEmailPreferences = tool(
  async (_, runtime: Runtime<z.infer<typeof contextSchema>>) => {
    const userName = runtime.context?.userName;
    if (!userName) {
      throw new Error("userName is required");
    }

    let preferences = "The user prefers you to write a brief and polite email.";
    if (runtime.store) {
      const memory = await runtime.store.get(["users"], userName);
      if (memory) {
        preferences = memory.value.preferences;
      }
    }
    return preferences;
  },
  {
    name: "fetch_user_email_preferences",
    description: "Fetch the user's email preferences.",
    schema: z.object({}),
  }
);
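The tool above reads the context and long-term memory; the stream writer works analogously. The sketch below mocks the runtime rather than running a real agent, and the streamWriter field name is an assumption for illustration -- check the Runtime type in your installed @langchain/langgraph version for the exact property:

```typescript
// Mocked runtime sketch -- shows how a tool body might emit progress on the
// "custom" stream. Field names are assumptions, not the verified Runtime API.
type MockRuntime = {
  context?: { userName: string };
  streamWriter?: (chunk: unknown) => void;
};

const emitted: unknown[] = [];

const mockRuntime: MockRuntime = {
  context: { userName: "John Smith" },
  streamWriter: (chunk) => emitted.push(chunk),
};

// What a tool body might do: report progress, then return a result.
// (Shown synchronously for brevity; real tool bodies are typically async.)
function fetchPreferences(runtime: MockRuntime): string {
  runtime.streamWriter?.({ progress: "looking up preferences" });
  const result = "The user prefers brief, polite emails.";
  runtime.streamWriter?.({ progress: "done" });
  return result;
}

const prefs = fetchPreferences(mockRuntime);
```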

Inside prompt

Use the runtime parameter to access the Runtime object inside a prompt function.
import { z } from "zod";
import { createAgent, type AgentState, SystemMessage } from "langchain";
import { type Runtime } from "@langchain/langgraph";

const contextSchema = z.object({
  userName: z.string(),
});

const prompt = (
  state: AgentState,
  runtime: Runtime<z.infer<typeof contextSchema>>
) => {
  const userName = runtime.context?.userName;
  if (!userName) {
    throw new Error("userName is required");
  }

  const systemMsg = `You are a helpful assistant. Address the user as ${userName}.`;
  return [new SystemMessage(systemMsg), ...state.messages];
};

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [
    /* ... */
  ],
  prompt,
  contextSchema,
});

const result = await agent.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  { context: { userName: "John Smith" } }
);

Inside pre- and post-model hooks

Use the runtime parameter to access the Runtime object inside a pre- or post-model hook.
import { z } from "zod";
import { type Runtime } from "@langchain/langgraph";
import { createAgent, type AgentState } from "langchain";

const contextSchema = z.object({
  userName: z.string(),
});

const preModelHook = (
  state: AgentState,
  runtime: Runtime<z.infer<typeof contextSchema>>
) => {
  const userName = runtime.context?.userName;
  if (!userName) {
    throw new Error("userName is required");
  }

  return {
    // ...
  };
};

const postModelHook = (
  state: AgentState,
  runtime: Runtime<z.infer<typeof contextSchema>>
) => {
  const userName = runtime.context?.userName;
  if (!userName) {
    throw new Error("userName is required");
  }

  return {
    // ...
  };
};

const agent = createAgent({
  model: "openai:gpt-4o-mini",
  tools: [
    /* ... */
  ],
  contextSchema,
  preModelHook,
  postModelHook,
});
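As a concrete illustration of what a pre-model hook might compute, the sketch below trims the conversation history before it reaches the model. The state shape here is a plain mock, and the exact keys a hook may return (e.g. whether trimmed messages go under messages or a dedicated key) depend on your agent version, so treat this as a pattern rather than the definitive API:

```typescript
// Mock of the pieces of AgentState this sketch needs -- not the real type.
type Message = { role: string; content: string };
type MockState = { messages: Message[] };

// Keep only the most recent messages so long conversations stay within
// the model's context window. The returned object is merged into state.
const trimHistory = (state: MockState, keep = 4) => {
  return {
    messages: state.messages.slice(-keep),
  };
};

const state: MockState = {
  messages: [
    { role: "user", content: "msg 1" },
    { role: "assistant", content: "msg 2" },
    { role: "user", content: "msg 3" },
    { role: "assistant", content: "msg 4" },
    { role: "user", content: "msg 5" },
  ],
};

const update = trimHistory(state);
```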