
Overview

LangChain’s create_agent runs on LangGraph’s runtime under the hood. LangGraph exposes a Runtime object with the following information:
  1. Context: static information for an agent invocation, such as a user ID, database connections, or other dependencies
  2. Store: a BaseStore instance used for long-term memory
  3. Stream writer: an object used for streaming information via the "custom" stream mode
You can access this runtime information inside tools, the prompt function, and pre- and post-model hooks.

Access

When creating an agent with create_agent, you can specify a context_schema to define the structure of the context stored in the agent runtime. When invoking the agent, pass the context argument with the values for that run:
from dataclasses import dataclass

from langchain_core.messages import AnyMessage
from langchain.agents import create_agent
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_name: str

agent = create_agent(
    model="openai:gpt-5-nano",
    tools=[...],
    context_schema=Context  
)

agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")  
)

Inside tools

You can access the runtime information inside tools to:
  • Access the context
  • Read or write long term memory
  • Write to the custom stream (e.g., tool progress updates)
Use the get_runtime function from langgraph.runtime to access the Runtime object inside a tool.
from dataclasses import dataclass

from langchain_core.tools import tool
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_id: str

@tool
def fetch_user_email_preferences() -> str:
    """Fetch the user's stored email preferences."""
    runtime = get_runtime(Context)
    user_id = runtime.context.user_id

    # Default used when no preference is stored in long-term memory.
    preferences: str = "The user prefers you to write a brief and polite email."
    if runtime.store:
        if memory := runtime.store.get(("users",), user_id):
            preferences = memory.value["preferences"]

    return preferences

Inside prompt

Use the get_runtime function from langgraph.runtime to access the Runtime object inside a prompt function.
from dataclasses import dataclass

from langchain_core.messages import AnyMessage
from langchain.agents import AgentState, create_agent
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_name: str

def my_prompt(state: AgentState) -> list[AnyMessage]:
    runtime = get_runtime(Context)
    system_msg = (
        "You are a helpful assistant. "
        f"Address the user as {runtime.context.user_name}."
    )
    return [{"role": "system", "content": system_msg}] + state["messages"]

agent = create_agent(
    model="openai:gpt-5-nano",
    tools=[...],
    prompt=my_prompt,
    context_schema=Context
)

agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")
)

Inside pre and post model hooks

To access the underlying graph runtime information in a pre or post model hook, you can:
  1. Use the get_runtime function from langgraph.runtime to access the Runtime object inside the hook
  2. Inject the Runtime directly via the hook signature
These options are functionally equivalent; choose whichever you prefer.
from dataclasses import dataclass

from langchain.agents import AgentState
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_name: str

def pre_model_hook(state: AgentState) -> AgentState:
    runtime = get_runtime(Context)
    ...