
Overview

LangChain’s createAgent runs on LangGraph’s runtime under the hood. LangGraph exposes a Runtime object with the following information:
  1. Context: static information such as the user ID, database connections, or other dependencies for an agent invocation
  2. Store: a BaseStore instance used for long-term memory
  3. Stream writer: an object used for streaming information via the "custom" stream mode
You can access the runtime information within tools and middleware.
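As a rough sketch of the surface used in this guide (illustrative only; the authoritative definition is the Runtime type exported by @langchain/langgraph, and the stream writer property name below is an assumption):
// Illustrative sketch of the runtime surface, not the actual type definition.
interface RuntimeSketch<ContextT> {
  // Static, per-invocation data passed via `context` when invoking the agent
  context?: ContextT;
  // Long-term memory store (a BaseStore instance), if one is configured
  store?: unknown;
  // Emits chunks on the "custom" stream mode (property name assumed here)
  writer?: (chunk: unknown) => void;
}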

Access

When creating an agent with createAgent, you can specify a contextSchema to define the structure of the context stored in the agent Runtime. When invoking the agent, pass the context argument with the relevant values for that run:
import * as z from "zod";
import { createAgent } from "langchain";

const contextSchema = z.object({ 
  userName: z.string(), 
}); 

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [
    /* ... */
  ],
  contextSchema, 
});

const result = await agent.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  { context: { userName: "John Smith" } } 
);

Inside tools

You can access the runtime information inside tools to:
  • Access the context
  • Read or write long-term memory
  • Write to the custom stream (e.g., tool progress updates; see the sketch after the example below)
Use the runtime parameter to access the Runtime object inside a tool.
import * as z from "zod";
import { tool } from "langchain";
import { type Runtime } from "@langchain/langgraph"; 

const contextSchema = z.object({
  userName: z.string(),
});

const fetchUserEmailPreferences = tool(
  async (_, runtime: Runtime<z.infer<typeof contextSchema>>) => { 
    const userName = runtime.context?.userName; 
    if (!userName) {
      throw new Error("userName is required");
    }

    let preferences = "The user prefers you to write a brief and polite email.";
    if (runtime.store) { 
      const memory = await runtime.store.get(["users"], userName);
      if (memory) {
        preferences = memory.value.preferences;
      }
    }
    return preferences;
  },
  {
    name: "fetch_user_email_preferences",
    description: "Fetch the user's email preferences.",
    schema: z.object({}),
  }
);
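
The same runtime handle covers the other bullets above: writing long-term memory and emitting custom stream chunks. Here is a minimal companion sketch that writes preferences back with store.put and reports progress; the tool itself is hypothetical, and the runtime.writer property used for the custom stream is an assumption to verify against the Runtime type:
import * as z from "zod";
import { tool } from "langchain";
import { type Runtime } from "@langchain/langgraph";

const contextSchema = z.object({
  userName: z.string(),
});

// Hypothetical companion tool: persists preferences and reports progress.
const saveUserEmailPreferences = tool(
  async ({ preferences }, runtime: Runtime<z.infer<typeof contextSchema>>) => {
    const userName = runtime.context?.userName;
    if (!userName) {
      throw new Error("userName is required");
    }

    // Emit a progress update on the "custom" stream if a writer is available.
    // (The `writer` property name is an assumption; check the Runtime type.)
    runtime.writer?.({ status: `Saving preferences for ${userName}` });

    // Persist to long-term memory under the namespace/key the fetch tool reads.
    if (runtime.store) {
      await runtime.store.put(["users"], userName, { preferences });
    }
    return "Preferences saved.";
  },
  {
    name: "save_user_email_preferences",
    description: "Save the user's email preferences.",
    schema: z.object({ preferences: z.string() }),
  }
);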

Inside middleware

You can access runtime information in middleware to create dynamic prompts, modify messages, or control agent behavior based on user context. Use the runtime parameter to access the Runtime object inside middleware.
import * as z from "zod";
import { createAgent, createMiddleware, type AgentState, SystemMessage } from "langchain";
import { type Runtime } from "@langchain/langgraph"; 

const contextSchema = z.object({
  userName: z.string(),
});

// Dynamic prompt middleware
const dynamicPromptMiddleware = createMiddleware({
  name: "DynamicPrompt",
  beforeModel: (state: AgentState, runtime: Runtime<z.infer<typeof contextSchema>>) => {  
    const userName = runtime.context?.userName;  
    if (!userName) {
      throw new Error("userName is required");
    }

    const systemMsg = `You are a helpful assistant. Address the user as ${userName}.`;
    return {
      messages: [new SystemMessage(systemMsg), ...state.messages]
    };
  }
});

// Logging middleware
const loggingMiddleware = createMiddleware({
  name: "Logging",
  beforeModel: (state: AgentState, runtime: Runtime<z.infer<typeof contextSchema>>) => {  
    console.log(`Processing request for user: ${runtime.context?.userName}`);  
    return;
  },
  afterModel: (state: AgentState, runtime: Runtime<z.infer<typeof contextSchema>>) => {  
    console.log(`Completed request for user: ${runtime.context?.userName}`);  
    return;
  }
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [
    /* ... */
  ],
  middleware: [dynamicPromptMiddleware, loggingMiddleware],  
  contextSchema,
});

const result = await agent.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  { context: { userName: "John Smith" } }
);
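
When tools or middleware write to the custom stream, you can consume those chunks by streaming the agent instead of invoking it. A rough sketch, assuming the agent accepts LangGraph's streamMode option alongside context:
const stream = await agent.stream(
  { messages: [{ role: "user", content: "What's my name?" }] },
  { context: { userName: "John Smith" }, streamMode: "custom" }
);

for await (const chunk of stream) {
  // Each chunk is whatever a tool or middleware passed to the stream writer.
  console.log(chunk);
}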
