The hard part of building agents (or any LLM application) is making them reliable enough. While they may work for a prototype, they often fail in real-world use cases.
When agents fail, it’s usually because the LLM call inside the agent took the wrong action or otherwise didn’t do what we expected. LLMs fail for one of two reasons:
The underlying LLM is not capable enough
The “right” context was not passed to the LLM
More often than not, it’s the second reason that makes agents unreliable. Context engineering is providing the right information and tools in the right format so the LLM can accomplish a task. This is the number one job of AI engineers, and the lack of “right” context is the number one blocker for more reliable agents. LangChain’s agent abstractions are uniquely designed to facilitate context engineering.
New to context engineering? Start with the conceptual overview to understand the different types of context and when to use them.
LangChain middleware is the mechanism under the hood that makes context engineering practical for developers using LangChain. Middleware allows you to hook into any step in the agent lifecycle and:
Update context
Jump to a different step in the agent lifecycle
Throughout this guide, you’ll see frequent use of the middleware API as a means to the context engineering end.
Control what goes into each model call: instructions, available tools, which model to use, and output format. These decisions directly impact reliability and cost.
System Prompt
Base instructions from the developer to the LLM.
Messages
The full list of messages (conversation history) sent to the LLM.
Tools
Utilities the agent has access to for taking actions.
Model
The actual model (including configuration) to be called.
Response Format
Schema specification for the model’s final response.
All of these types of model context can draw from state (short-term memory), store (long-term memory), or runtime context (static configuration).
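Conceptually, each model call can be thought of as a request object that bundles these five pieces, which middleware then overrides per call. A minimal sketch of that idea (the type and field names here are illustrative assumptions, not LangChain’s actual internals):

```typescript
// Illustrative sketch only - field names are assumptions, not LangChain's real types.
type ModelRequest = {
  systemPrompt: string; // base instructions
  messages: { role: string; content: string }[]; // conversation history
  tools: { name: string }[]; // tools exposed to the model
  model: string; // model identifier / configuration
  responseFormat?: object; // optional output schema
};

// Middleware-style helper: produce a new request with some fields overridden,
// leaving the original request untouched.
function overrideRequest(
  request: ModelRequest,
  updates: Partial<ModelRequest>
): ModelRequest {
  return { ...request, ...updates };
}
```

Keeping the override non-destructive is the key design choice: each middleware layer sees a fresh request and the original is never mutated.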
The system prompt sets the LLM’s behavior and capabilities. Different users, contexts, or conversation stages need different instructions. Successful agents draw on memories, preferences, and configuration to provide the right instructions for the current state of the conversation.
State
Store
Runtime Context
Access message count or conversation context from state:
```typescript
import { createAgent, dynamicSystemPromptMiddleware } from "langchain";

const agent = createAgent({
  model: "gpt-5.4",
  tools: [...],
  middleware: [
    dynamicSystemPromptMiddleware((state) => {
      // Read from State: check conversation length
      const messageCount = state.messages.length;

      let base = "You are a helpful assistant.";
      if (messageCount > 10) {
        base += "\nThis is a long conversation - be extra concise.";
      }
      return base;
    }),
  ],
});
```
Access user preferences from long-term memory:
```typescript
import * as z from "zod";
import { createAgent, dynamicSystemPromptMiddleware } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
});

type Context = z.infer<typeof contextSchema>;

const agent = createAgent({
  model: "gpt-5.4",
  tools: [...],
  contextSchema,
  middleware: [
    dynamicSystemPromptMiddleware<Context>(async (state, runtime) => {
      const userId = runtime.context.userId;

      // Read from Store: get user preferences
      const store = runtime.store;
      const userPrefs = await store.get(["preferences"], userId);

      let base = "You are a helpful assistant.";
      if (userPrefs) {
        const style = userPrefs.value?.communicationStyle || "balanced";
        base += `\nUser prefers ${style} responses.`;
      }
      return base;
    }),
  ],
});
```
Access user ID or configuration from Runtime Context:
```typescript
import * as z from "zod";
import { createAgent, dynamicSystemPromptMiddleware } from "langchain";

const contextSchema = z.object({
  userRole: z.string(),
  deploymentEnv: z.string(),
});

type Context = z.infer<typeof contextSchema>;

const agent = createAgent({
  model: "gpt-5.4",
  tools: [...],
  contextSchema,
  middleware: [
    dynamicSystemPromptMiddleware<Context>((state, runtime) => {
      // Read from Runtime Context: user role and environment
      const userRole = runtime.context.userRole;
      const env = runtime.context.deploymentEnv;

      let base = "You are a helpful assistant.";
      if (userRole === "admin") {
        base += "\nYou have admin access. You can perform all operations.";
      } else if (userRole === "viewer") {
        base += "\nYou have read-only access. Guide users to read operations only.";
      }
      if (env === "production") {
        base += "\nBe extra careful with any data modifications.";
      }
      return base;
    }),
  ],
});
```
Messages make up the prompt sent to the LLM. Managing their content is critical to ensuring the LLM has the right information to respond well.
State
Store
Runtime Context
Inject uploaded file context from State when relevant to current query:
```typescript
import { createAgent, createMiddleware } from "langchain";

const injectFileContext = createMiddleware({
  name: "InjectFileContext",
  wrapModelCall: (request, handler) => {
    // Read from State: files uploaded earlier in the conversation
    const uploadedFiles = request.state.uploadedFiles || [];

    if (uploadedFiles.length > 0) {
      // Build context about available files
      const fileDescriptions = uploadedFiles.map(
        (file) => `- ${file.name} (${file.type}): ${file.summary}`
      );

      const fileContext = `Files you have access to in this conversation:
${fileDescriptions.join("\n")}

Reference these files when answering questions.`;

      // Append file context after the conversation history
      const messages = [
        ...request.messages,
        { role: "user", content: fileContext },
      ];
      request = request.override({ messages });
    }
    return handler(request);
  },
});

const agent = createAgent({
  model: "gpt-5.4",
  tools: [...],
  middleware: [injectFileContext],
});
```
Inject user’s email writing style from Store to guide drafting:
```typescript
import * as z from "zod";
import { createMiddleware } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
});

const injectWritingStyle = createMiddleware({
  name: "InjectWritingStyle",
  contextSchema,
  wrapModelCall: async (request, handler) => {
    const userId = request.runtime.context.userId;

    // Read from Store: get user's writing style examples
    const store = request.runtime.store;
    const writingStyle = await store.get(["writing_style"], userId);

    if (writingStyle) {
      const style = writingStyle.value;

      // Build style guide from stored examples
      const styleContext = `Your writing style:
- Tone: ${style.tone || "professional"}
- Typical greeting: "${style.greeting || "Hi"}"
- Typical sign-off: "${style.signOff || "Best"}"
- Example email you've written:
${style.exampleEmail || ""}`;

      // Append at end - models pay more attention to final messages
      const messages = [
        ...request.messages,
        { role: "user", content: styleContext },
      ];
      request = request.override({ messages });
    }
    return handler(request);
  },
});
```
Inject compliance rules from Runtime Context based on user’s jurisdiction:
```typescript
import * as z from "zod";
import { createMiddleware } from "langchain";

const contextSchema = z.object({
  userJurisdiction: z.string(),
  industry: z.string(),
  complianceFrameworks: z.array(z.string()),
});

type Context = z.infer<typeof contextSchema>;

const injectComplianceRules = createMiddleware<Context>({
  name: "InjectComplianceRules",
  contextSchema,
  wrapModelCall: (request, handler) => {
    // Read from Runtime Context: get compliance requirements
    const { userJurisdiction, industry, complianceFrameworks } =
      request.runtime.context;

    // Build compliance constraints
    const rules = [];
    if (complianceFrameworks.includes("GDPR")) {
      rules.push("- Must obtain explicit consent before processing personal data");
      rules.push("- Users have right to data deletion");
    }
    if (complianceFrameworks.includes("HIPAA")) {
      rules.push("- Cannot share patient health information without authorization");
      rules.push("- Must use secure, encrypted communication");
    }
    if (industry === "finance") {
      rules.push("- Cannot provide financial advice without proper disclaimers");
    }

    if (rules.length > 0) {
      const complianceContext = `Compliance requirements for ${userJurisdiction}:
${rules.join("\n")}`;

      // Append at end - models pay more attention to final messages
      const messages = [
        ...request.messages,
        { role: "user", content: complianceContext },
      ];
      request = request.override({ messages });
    }
    return handler(request);
  },
});
```
Transient vs. persistent message updates: The examples above use wrapModelCall to make transient updates, modifying the messages sent to the model for a single call without changing what’s saved in state. For persistent updates that modify state, you can:
Return a Command directly from wrapModelCall to inject state updates from the model call layer.
Use life-cycle hooks like beforeModel, afterModel, or wrapToolCall (for tool returns) to update the conversation history. See the middleware documentation for more details.
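The distinction can be sketched in plain TypeScript. The names below are illustrative; in LangChain the persistent path would return a Command from wrapModelCall or a lifecycle hook rather than a bare update object:

```typescript
// Illustrative sketch - not LangChain's actual API.
type State = { messages: string[] };

// Transient: trim what the model sees for this call only; state is untouched.
function transientTrim(state: State, keep: number): string[] {
  return state.messages.slice(-keep);
}

// Persistent: return a state update that permanently replaces the history.
function persistentTrim(state: State, keep: number): { update: Partial<State> } {
  return { update: { messages: state.messages.slice(-keep) } };
}
```

The transient version is the right default: it keeps the full history in state while controlling what each individual model call pays for.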
Tools let the model interact with databases, APIs, and external systems. How you define and select tools directly impacts whether the model can complete tasks effectively.
Each tool needs a clear name, description, argument names, and argument descriptions. These aren’t just metadata; they guide the model’s reasoning about when and how to use the tool.
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchOrders = tool(
  async ({ userId, status, limit }) => {
    // Implementation here
  },
  {
    name: "search_orders",
    description: `Search for user orders by status.
Use this when the user asks about order history or wants to check order status.
Always filter by the provided status.`,
    schema: z.object({
      userId: z.string().describe("Unique identifier for the user"),
      status: z
        .enum(["pending", "shipped", "delivered"])
        .describe("Order status to filter by"),
      limit: z.number().default(10).describe("Maximum number of results to return"),
    }),
  }
);
```
Not every tool is appropriate for every situation. Too many tools may overwhelm the model (overloading its context) and increase errors; too few limit its capabilities. Dynamic tool selection adapts the available toolset based on authentication state, user permissions, feature flags, or conversation stage.
State
Store
Runtime Context
Enable advanced tools only after certain conversation milestones:
```typescript
import { createMiddleware } from "langchain";

const stateBasedTools = createMiddleware({
  name: "StateBasedTools",
  wrapModelCall: (request, handler) => {
    // Read from State: check authentication and conversation length
    const state = request.state;
    const isAuthenticated = state.authenticated || false;
    const messageCount = state.messages.length;

    let filteredTools = request.tools;

    // Only enable sensitive tools after authentication
    if (!isAuthenticated) {
      filteredTools = request.tools.filter((t) => t.name.startsWith("public_"));
    } else if (messageCount < 5) {
      filteredTools = request.tools.filter((t) => t.name !== "advanced_search");
    }

    return handler({ ...request, tools: filteredTools });
  },
});
```
Filter tools based on user preferences or feature flags in Store:
```typescript
import * as z from "zod";
import { createMiddleware } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
});

const storeBasedTools = createMiddleware({
  name: "StoreBasedTools",
  contextSchema,
  wrapModelCall: async (request, handler) => {
    const userId = request.runtime.context.userId;

    // Read from Store: get user's enabled features
    const store = request.runtime.store;
    const featureFlags = await store.get(["features"], userId);

    let filteredTools = request.tools;
    if (featureFlags) {
      const enabledFeatures = featureFlags.value?.enabledTools || [];
      filteredTools = request.tools.filter((t) => enabledFeatures.includes(t.name));
    }

    return handler({ ...request, tools: filteredTools });
  },
});
```
Filter tools based on user permissions from Runtime Context:
```typescript
import * as z from "zod";
import { createMiddleware } from "langchain";

const contextSchema = z.object({
  userRole: z.string(),
});

const contextBasedTools = createMiddleware({
  name: "ContextBasedTools",
  contextSchema,
  wrapModelCall: (request, handler) => {
    // Read from Runtime Context: get user role
    const userRole = request.runtime.context.userRole;

    let filteredTools = request.tools;
    if (userRole === "admin") {
      // Admins get all tools
    } else if (userRole === "editor") {
      filteredTools = request.tools.filter((t) => t.name !== "delete_data");
    } else {
      filteredTools = request.tools.filter((t) => t.name.startsWith("read_"));
    }

    return handler({ ...request, tools: filteredTools });
  },
});
```
See Dynamic tools for both filtering pre-registered tools and registering tools at runtime (e.g., from MCP servers).
Different models have different strengths, costs, and context windows. Select the right model for the task at hand, which might change during an agent run.
State
Store
Runtime Context
Use different models based on conversation length from State:
```typescript
import { createMiddleware, initChatModel } from "langchain";

// Initialize models once outside the middleware
const largeModel = await initChatModel("claude-sonnet-4-6");
const standardModel = await initChatModel("gpt-5.4");
const efficientModel = await initChatModel("gpt-5.4-mini");

const stateBasedModel = createMiddleware({
  name: "StateBasedModel",
  wrapModelCall: (request, handler) => {
    // request.messages is a shortcut for request.state.messages
    const messageCount = request.messages.length;

    let model;
    if (messageCount > 20) {
      model = largeModel;
    } else if (messageCount > 10) {
      model = standardModel;
    } else {
      model = efficientModel;
    }

    return handler({ ...request, model });
  },
});
```
Use user’s preferred model from Store:
```typescript
import * as z from "zod";
import { createMiddleware, initChatModel } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
});

// Initialize available models once
const MODEL_MAP = {
  "gpt-5.4": await initChatModel("gpt-5.4"),
  "gpt-5.4-mini": await initChatModel("gpt-5.4-mini"),
  "claude-sonnet": await initChatModel("claude-sonnet-4-6"),
};

const storeBasedModel = createMiddleware({
  name: "StoreBasedModel",
  contextSchema,
  wrapModelCall: async (request, handler) => {
    const userId = request.runtime.context.userId;

    // Read from Store: get user's preferred model
    const store = request.runtime.store;
    const userPrefs = await store.get(["preferences"], userId);

    let model = request.model;
    if (userPrefs) {
      const preferredModel = userPrefs.value?.preferredModel;
      if (preferredModel && MODEL_MAP[preferredModel]) {
        model = MODEL_MAP[preferredModel];
      }
    }

    return handler({ ...request, model });
  },
});
```
Select model based on cost limits or environment from Runtime Context:
```typescript
import * as z from "zod";
import { createMiddleware, initChatModel } from "langchain";

const contextSchema = z.object({
  costTier: z.string(),
  environment: z.string(),
});

// Initialize models once outside the middleware
const premiumModel = await initChatModel("claude-sonnet-4-6");
const standardModel = await initChatModel("gpt-5.4");
const budgetModel = await initChatModel("gpt-5.4-mini");

const contextBasedModel = createMiddleware({
  name: "ContextBasedModel",
  contextSchema,
  wrapModelCall: (request, handler) => {
    // Read from Runtime Context: cost tier and environment
    const costTier = request.runtime.context.costTier;
    const environment = request.runtime.context.environment;

    let model;
    if (environment === "production" && costTier === "premium") {
      model = premiumModel;
    } else if (costTier === "budget") {
      model = budgetModel;
    } else {
      model = standardModel;
    }

    return handler({ ...request, model });
  },
});
```
Structured output transforms unstructured text into validated, structured data. When extracting specific fields or returning data for downstream systems, free-form text isn’t sufficient. How it works: when you provide a schema as the response format, the model’s final response is guaranteed to conform to that schema. The agent runs the model/tool-calling loop until the model is done calling tools, then coerces the final response into the provided format.
Dynamic response format selection adapts schemas based on user preferences, conversation stage, or role, returning simple formats early and more detailed formats as complexity increases.
State
Store
Runtime Context
Configure structured output based on conversation state:
```typescript
import { createMiddleware } from "langchain";
import { z } from "zod";

const simpleResponse = z.object({
  answer: z.string().describe("A brief answer"),
});

const detailedResponse = z.object({
  answer: z.string().describe("A detailed answer"),
  reasoning: z.string().describe("Explanation of reasoning"),
  confidence: z.number().describe("Confidence score 0-1"),
});

const stateBasedOutput = createMiddleware({
  name: "StateBasedOutput",
  wrapModelCall: (request, handler) => {
    // request.messages is a shortcut for request.state.messages
    const messageCount = request.messages.length;

    let responseFormat;
    if (messageCount < 3) {
      // Early conversation - use simple format
      responseFormat = simpleResponse;
    } else {
      // Established conversation - use detailed format
      responseFormat = detailedResponse;
    }

    return handler({ ...request, responseFormat });
  },
});
```
Configure output format based on user preferences in Store:
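The Store-based variant follows the same shape as the other Store examples: read the user’s preference with `store.get(["preferences"], userId)` inside `wrapModelCall`, then swap the schema accordingly. A minimal sketch of just the selection logic (the `prefs` shape is an assumption, and plain objects stand in for the zod schemas above):

```typescript
// Illustrative selection logic - plain objects stand in for zod schemas here.
const simpleFormat = { name: "simple" };
const detailedFormat = { name: "detailed" };

type Prefs = { verbosity?: "brief" | "detailed" } | null;

// Pick a response format from stored user preferences, defaulting to simple
// when no preference has been saved.
function selectResponseFormat(prefs: Prefs) {
  if (prefs?.verbosity === "detailed") {
    return detailedFormat;
  }
  return simpleFormat;
}
```

In the real middleware, the selected schema would be passed along via `handler({ ...request, responseFormat })`, exactly as in the State example.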
Tools are special in that they both read and write context. In the most basic case, a tool receives the LLM’s request parameters, does its work, and returns a tool message with the result. But tools can also fetch important information that allows the model to perform and complete tasks.
Most real-world tools need more than just the LLM’s parameters. They need user IDs for database queries, API keys for external services, or current session state to make decisions. Tools read from state, store, and runtime context to access this information.
State
Store
Runtime Context
Read from State to check current session information:
```typescript
import * as z from "zod";
import { tool, type ToolRuntime } from "langchain";

const checkAuthentication = tool(
  async (_, runtime: ToolRuntime) => {
    // Read from State: check current auth status
    const currentState = runtime.state;
    const isAuthenticated = currentState.authenticated || false;

    if (isAuthenticated) {
      return "User is authenticated";
    } else {
      return "User is not authenticated";
    }
  },
  {
    name: "check_authentication",
    description: "Check if user is authenticated",
    schema: z.object({}),
  }
);
```
Read from Store to access persisted user preferences:
```typescript
import * as z from "zod";
import { tool, type ToolRuntime } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
});

const getPreference = tool(
  async ({ preferenceKey }, runtime: ToolRuntime<any, typeof contextSchema>) => {
    const userId = runtime.context.userId;

    // Read from Store: get existing preferences
    const store = runtime.store;
    const existingPrefs = await store.get(["preferences"], userId);

    if (existingPrefs) {
      const value = existingPrefs.value?.[preferenceKey];
      return value
        ? `${preferenceKey}: ${value}`
        : `No preference set for ${preferenceKey}`;
    } else {
      return "No preferences found";
    }
  },
  {
    name: "get_preference",
    description: "Get user preference from Store",
    schema: z.object({
      preferenceKey: z.string(),
    }),
  }
);
```
Read from Runtime Context for configuration like API keys and user IDs:
```typescript
import * as z from "zod";
import { tool } from "@langchain/core/tools";
import { createAgent, type ToolRuntime } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
  apiKey: z.string(),
  dbConnection: z.string(),
});

const fetchUserData = tool(
  async ({ query }, runtime: ToolRuntime<any, typeof contextSchema>) => {
    // Read from Runtime Context: get API key and DB connection
    const { userId, apiKey, dbConnection } = runtime.context;

    // Use configuration to fetch data (performDatabaseQuery is your own data-access helper)
    const results = await performDatabaseQuery(dbConnection, query, apiKey);
    return `Found ${results.length} results for user ${userId}`;
  },
  {
    name: "fetch_user_data",
    description: "Fetch data using Runtime Context configuration",
    schema: z.object({
      query: z.string(),
    }),
  }
);

const agent = createAgent({
  model: "gpt-5.4",
  tools: [fetchUserData],
  contextSchema,
});
```
Tool results can help an agent complete a given task. Tools can both return results directly to the model and update the agent’s memory, making important context available to future steps.
State
Store
Write to State to track session-specific information using Command:
```typescript
import * as z from "zod";
import { tool } from "@langchain/core/tools";
import { Command } from "@langchain/langgraph";

const authenticateUser = tool(
  async ({ password }) => {
    // Perform authentication
    if (password === "correct") {
      // Write to State: mark as authenticated using Command
      return new Command({
        update: { authenticated: true },
      });
    } else {
      return new Command({ update: { authenticated: false } });
    }
  },
  {
    name: "authenticate_user",
    description: "Authenticate user and update State",
    schema: z.object({
      password: z.string(),
    }),
  }
);
```
Write to Store to persist data across sessions:
```typescript
import * as z from "zod";
import { tool, type ToolRuntime } from "langchain";

const contextSchema = z.object({
  userId: z.string(),
});

const savePreference = tool(
  async (
    { preferenceKey, preferenceValue },
    runtime: ToolRuntime<any, typeof contextSchema>
  ) => {
    const userId = runtime.context.userId;

    // Read existing preferences
    const store = runtime.store;
    const existingPrefs = await store.get(["preferences"], userId);

    // Merge with new preference
    const prefs = existingPrefs?.value || {};
    prefs[preferenceKey] = preferenceValue;

    // Write to Store: save updated preferences
    await store.put(["preferences"], userId, prefs);

    return `Saved preference: ${preferenceKey} = ${preferenceValue}`;
  },
  {
    name: "save_preference",
    description: "Save user preference to Store",
    schema: z.object({
      preferenceKey: z.string(),
      preferenceValue: z.string(),
    }),
  }
);
```
See Tools for comprehensive examples of accessing state, store, and runtime context in tools.
Control what happens between the core agent steps, intercepting data flow to implement cross-cutting concerns like summarization, guardrails, and logging. As you’ve seen in Model Context and Tool Context, middleware is the mechanism that makes context engineering practical. Middleware allows you to hook into any step in the agent lifecycle and either:
Update context - Modify state and store to persist changes, update conversation history, or save insights
Jump in the lifecycle - Move to different steps in the agent cycle based on context (e.g., skip tool execution if a condition is met, repeat model call with modified context)
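The “jump” idea can be sketched as a chain of wrap-style handlers, where each layer decides whether to call the next step or return early. This is a simplified model of the pattern, not LangChain’s implementation:

```typescript
// Simplified model of wrap-style middleware - not LangChain's actual types.
type Handler = (input: string) => string;
type Middleware = (input: string, next: Handler) => string;

// Compose middleware so each layer wraps the next, with the core step innermost.
function compose(middlewares: Middleware[], core: Handler): Handler {
  return middlewares.reduceRight<Handler>(
    (next, mw) => (input) => mw(input, next),
    core
  );
}

// A guardrail layer that skips the core step entirely when a condition is met.
const guardrail: Middleware = (input, next) =>
  input.includes("blocked") ? "refused" : next(input);
```

A layer short-circuits simply by not calling `next`, which is the same mechanism that lets real middleware skip tool execution or repeat a model call with modified context.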
One of the most common lifecycle patterns is automatically condensing conversation history when it gets too long. Unlike the transient message trimming shown in Model Context, summarization persistently updates state, permanently replacing old messages with a summary that is saved for all future turns. LangChain offers built-in middleware for this:
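The core idea can be sketched as follows: once the history crosses a threshold, collapse everything but the most recent messages into a single summary message that is then persisted in state. This is a hand-rolled illustration, not the built-in middleware, which additionally handles token counting and generating the summary with a model:

```typescript
// Illustrative sketch of persistent summarization - not the built-in middleware.
type Message = { role: string; content: string };

// When the history exceeds maxMessages, collapse everything but the most
// recent `keep` messages into one summary message.
function summarizeHistory(
  messages: Message[],
  maxMessages: number,
  keep: number,
  summarize: (old: Message[]) => string
): Message[] {
  if (messages.length <= maxMessages) return messages;

  const old = messages.slice(0, messages.length - keep);
  const recent = messages.slice(messages.length - keep);

  return [
    { role: "user", content: `Summary of earlier conversation: ${summarize(old)}` },
    ...recent,
  ];
}
```

Because the shortened list replaces the messages saved in state, every future turn starts from the summary rather than re-sending the full history.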