Model Context Protocol (MCP) is an open protocol that standardizes how applications provide tools and context to LLMs. The langchain-mcp-adapters library lets LangChain agents use tools defined across one or more MCP servers.
MultiServerMCPClient is stateless by default. Each tool invocation creates a fresh MCP ClientSession, executes the tool, and then cleans up. See the stateful sessions section for more details.
Accessing multiple MCP servers
```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent


async def main():
    client = MultiServerMCPClient(
        {
            "math": {
                "transport": "stdio",  # Local subprocess communication
                "command": "python",
                # Absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
            },
            "weather": {
                "transport": "http",  # HTTP-based remote server
                # Ensure you start your weather server on port 8000
                "url": "http://localhost:8000/mcp",
            },
        }
    )
    tools = await client.get_tools()
    agent = create_agent("claude-sonnet-4-6", tools)
    math_response = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]}
    )
    weather_response = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "what is the weather in nyc?"}]}
    )
    print(math_response)
    print(weather_response)


if __name__ == "__main__":
    asyncio.run(main())
```
The http transport (also referred to as streamable-http) uses HTTP requests for client-server communication. See the MCP HTTP transport specification for more details.
When connecting to MCP servers over HTTP, you can include custom headers (e.g., for authentication or tracing) using the headers field in the connection configuration. This is supported for sse (deprecated by MCP spec) and streamable_http transports.
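As an illustration (the server name, URL, token, and trace header values are all placeholders), a connection entry with a headers field might look like this; the dict is passed to MultiServerMCPClient the same way as any other connection configuration:

```python
# Sketch of a connection configuration with custom headers.
# The URL, bearer token, and trace ID below are placeholders.
connections = {
    "weather": {
        "transport": "http",  # also applies to "streamable_http" / "sse"
        "url": "http://localhost:8000/mcp",
        # Sent with every request to this server, e.g. for auth or tracing
        "headers": {
            "Authorization": "Bearer YOUR_TOKEN",
            "X-Trace-Id": "demo-trace-123",
        },
    }
}
```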
The langchain-mcp-adapters library uses the official MCP SDK under the hood, which allows you to provide a custom authentication mechanism by implementing the httpx.Auth interface.
Client launches server as a subprocess and communicates via standard input/output. Best for local tools and simple setups.
Unlike HTTP transports, stdio connections are inherently stateful: the subprocess persists for the lifetime of the client connection. However, when using MultiServerMCPClient without explicit session management, each tool call still creates a new session. See stateful sessions for managing persistent connections.
By default, MultiServerMCPClient is stateless: each tool invocation creates a fresh MCP session, executes the tool, and then cleans up.

If you need to control the lifecycle of an MCP session (for example, when working with a stateful server that maintains context across tool calls), you can create a persistent ClientSession using client.session().
Using MCP ClientSession for stateful tool usage
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain.agents import create_agent

client = MultiServerMCPClient({...})

# Create a session explicitly
async with client.session("server_name") as session:
    # Pass the session to load tools, resources, or prompts
    tools = await load_mcp_tools(session)
    agent = create_agent("google_genai:gemini-3.1-pro-preview", tools)
```
Tools allow MCP servers to expose executable functions that LLMs can invoke to perform actions—such as querying databases, calling APIs, or interacting with external systems. LangChain converts MCP tools into LangChain tools, making them directly usable in any LangChain agent or workflow.
MCP tools can return structured content alongside the human-readable text response. This is useful when a tool needs to return machine-parseable data (like JSON) in addition to text that gets shown to the model.

When an MCP tool returns structuredContent, the adapter wraps it in an MCPToolArtifact and returns it as the tool's artifact. You can access this using the artifact field on the ToolMessage. You can also use interceptors to process or transform structured content automatically.

Extracting structured content from artifact

After invoking your agent, you can access the structured content from tool messages in the response:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from langchain.messages import ToolMessage

client = MultiServerMCPClient({...})
tools = await client.get_tools()
agent = create_agent("claude-sonnet-4-6", tools)

result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Get data from the server"}]}
)

# Extract structured content from tool messages
for message in result["messages"]:
    if isinstance(message, ToolMessage) and message.artifact:
        structured_content = message.artifact["structured_content"]
```
Appending structured content via interceptor

If you want structured content to be visible in the conversation history (and therefore to the model), you can use an interceptor to automatically append it to the tool result:
```python
import json

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from mcp.types import TextContent


async def append_structured_content(request: MCPToolCallRequest, handler):
    """Append structured content from artifact to tool message."""
    result = await handler(request)
    if result.structuredContent:
        result.content += [
            TextContent(type="text", text=json.dumps(result.structuredContent)),
        ]
    return result


client = MultiServerMCPClient({...}, tool_interceptors=[append_structured_content])
```
MCP tools can return multimodal content (images, text, etc.) in their responses. When an MCP server returns content with multiple parts (e.g., text and images), the adapter converts them to LangChain’s standard content blocks. You can access the standardized representation via the content_blocks property on the ToolMessage:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient({...})
tools = await client.get_tools()
agent = create_agent("claude-sonnet-4-6", tools)

result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Take a screenshot of the current page"}]}
)

# Access multimodal content from tool messages
for message in result["messages"]:
    if message.type == "tool":
        # Raw content in provider-native format
        print(f"Raw content: {message.content}")
        # Standardized content blocks
        for block in message.content_blocks:
            if block["type"] == "text":
                print(f"Text: {block['text']}")
            elif block["type"] == "image":
                print(f"Image URL: {block.get('url')}")
                print(f"Image base64: {block.get('base64', '')[:50]}...")
```
This allows you to handle multimodal tool responses in a provider-agnostic way, regardless of how the underlying MCP server formats its content.
Resources allow MCP servers to expose data—such as files, database records, or API responses—that can be read by clients. LangChain converts MCP resources into Blob objects, which provide a unified interface for handling both text and binary content.
Use client.get_resources() to load resources from an MCP server:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({...})

# Load all resources from a server
blobs = await client.get_resources("server_name")

# Or load specific resources by URI
blobs = await client.get_resources("server_name", uris=["file:///path/to/file.txt"])

for blob in blobs:
    print(f"URI: {blob.metadata['uri']}, MIME type: {blob.mimetype}")
    print(blob.as_string())  # For text content
```
You can also use load_mcp_resources directly with a session for more control:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.resources import load_mcp_resources

client = MultiServerMCPClient({...})

async with client.session("server_name") as session:
    # Load all resources
    blobs = await load_mcp_resources(session)

    # Or load specific resources by URI
    blobs = await load_mcp_resources(session, uris=["file:///path/to/file.txt"])
```
Prompts allow MCP servers to expose reusable prompt templates that can be retrieved and used by clients. LangChain converts MCP prompts into messages, making them easy to integrate into chat-based workflows.
Use client.get_prompt() to load a prompt from an MCP server:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({...})

# Load a prompt by name
messages = await client.get_prompt("server_name", "summarize")

# Load a prompt with arguments
messages = await client.get_prompt(
    "server_name",
    "code_review",
    arguments={"language": "python", "focus": "security"},
)

# Use the messages in your workflow
for message in messages:
    print(f"{message.type}: {message.content}")
```
You can also use load_mcp_prompt directly with a session for more control:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.prompts import load_mcp_prompt

client = MultiServerMCPClient({...})

async with client.session("server_name") as session:
    # Load a prompt by name
    messages = await load_mcp_prompt(session, "summarize")

    # Load a prompt with arguments
    messages = await load_mcp_prompt(
        session,
        "code_review",
        arguments={"language": "python", "focus": "security"},
    )
```
MCP servers run as separate processes, so they can't access LangGraph runtime information like the store, context, or agent state. Interceptors bridge this gap by giving you access to this runtime context during MCP tool execution.

Interceptors also provide middleware-like control over tool calls: you can modify requests, implement retries, add headers dynamically, or short-circuit execution entirely.
When MCP tools are used within a LangChain agent (via create_agent), interceptors receive access to the ToolRuntime context. This provides access to the tool call ID, state, config, and store—enabling powerful patterns for accessing user data, persisting information, and controlling agent behavior.
Runtime context
Access user-specific configuration like user IDs, API keys, or permissions that are passed at invocation time:
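As a sketch of the pattern (the request.runtime attribute and the configurable/user_id keys are assumptions for illustration, not guaranteed API), an interceptor might read such values from the runtime's config and merge them into the tool arguments:

```python
async def user_scoped_interceptor(request, handler):
    """Sketch: inject a user ID from runtime config into the tool args.

    Assumes the agent runtime is exposed as `request.runtime` and that
    `user_id` was passed under `configurable` in the invocation config;
    both names are illustrative.
    """
    runtime = getattr(request, "runtime", None)
    config = getattr(runtime, "config", None) or {}
    user_id = config.get("configurable", {}).get("user_id")
    if user_id is not None:
        # override() returns a modified copy, leaving the original intact
        request = request.override(args={**request.args, "user_id": user_id})
    return await handler(request)
```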
Interceptors can return Command objects to update agent state or control graph execution flow. This is useful for tracking task progress, switching between agents, or ending execution early.
Mark task complete and switch agents
```python
from langchain.agents import AgentState, create_agent
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from langchain.messages import ToolMessage
from langgraph.types import Command


async def handle_task_completion(
    request: MCPToolCallRequest,
    handler,
):
    """Mark task complete and hand off to summary agent."""
    result = await handler(request)
    if request.name == "submit_order":
        return Command(
            update={
                "messages": [result] if isinstance(result, ToolMessage) else [],
                "task_status": "completed",
            },
            goto="summary_agent",
        )
    return result
```
Use Command with goto="__end__" to end execution early:
End agent run on completion
```python
async def end_on_success(
    request: MCPToolCallRequest,
    handler,
):
    """End agent run when task is marked complete."""
    result = await handler(request)
    if request.name == "mark_complete":
        return Command(
            update={"messages": [result], "status": "done"},
            goto="__end__",
        )
    return result
```
Interceptors are async functions that wrap tool execution, enabling request/response modification, retry logic, and other cross-cutting concerns. They follow an "onion" pattern: the first interceptor in the list is the outermost layer.

Basic pattern

An interceptor is an async function that receives a request and a handler. You can modify the request before calling the handler, modify the response after, or skip the handler entirely.
Basic interceptor pattern
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest


async def logging_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Log tool calls before and after execution."""
    print(f"Calling tool: {request.name} with args: {request.args}")
    result = await handler(request)
    print(f"Tool {request.name} returned: {result}")
    return result


client = MultiServerMCPClient(
    {"math": {"transport": "stdio", "command": "python", "args": ["/path/to/server.py"]}},
    tool_interceptors=[logging_interceptor],
)
```
Modifying requests

Use request.override() to create a modified request. This follows an immutable pattern, leaving the original request unchanged.
Modifying tool arguments
```python
async def double_args_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Double all numeric arguments before execution."""
    modified_args = {k: v * 2 for k, v in request.args.items()}
    modified_request = request.override(args=modified_args)
    return await handler(modified_request)


# Original call: add(a=2, b=3) becomes add(a=4, b=6)
```
Modifying headers at runtime

Interceptors can modify HTTP headers dynamically based on the request context:
Dynamic header modification
```python
async def auth_header_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Add authentication headers based on the tool being called."""
    # get_token_for_tool is your own token-lookup function
    token = get_token_for_tool(request.name)
    modified_request = request.override(
        headers={"Authorization": f"Bearer {token}"}
    )
    return await handler(modified_request)
```
Composing interceptors

Multiple interceptors compose in "onion" order, with the first interceptor in the list as the outermost layer:
Composing multiple interceptors
```python
async def outer_interceptor(request, handler):
    print("outer: before")
    result = await handler(request)
    print("outer: after")
    return result


async def inner_interceptor(request, handler):
    print("inner: before")
    result = await handler(request)
    print("inner: after")
    return result


client = MultiServerMCPClient(
    {...},
    tool_interceptors=[outer_interceptor, inner_interceptor],
)

# Execution order:
# outer: before -> inner: before -> tool execution -> inner: after -> outer: after
```
Error handling

Use interceptors to catch tool execution errors and implement retry logic:
Retry on error
```python
import asyncio


async def retry_interceptor(
    request: MCPToolCallRequest,
    handler,
    max_retries: int = 3,
    delay: float = 1.0,
):
    """Retry failed tool calls with exponential backoff."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return await handler(request)
        except Exception as e:
            last_error = e
            if attempt < max_retries - 1:
                wait_time = delay * (2 ** attempt)  # Exponential backoff
                print(f"Tool {request.name} failed (attempt {attempt + 1}), retrying in {wait_time}s...")
                await asyncio.sleep(wait_time)
    raise last_error


client = MultiServerMCPClient(
    {...},
    tool_interceptors=[retry_interceptor],
)
```
You can also catch specific error types and return fallback values:
Error handling with fallback
```python
async def fallback_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Return a fallback value if tool execution fails."""
    try:
        return await handler(request)
    except TimeoutError:
        return f"Tool {request.name} timed out. Please try again later."
    except ConnectionError:
        return f"Could not connect to {request.name} service. Using cached data."
```
Elicitation allows MCP servers to request additional input from users during tool execution. Instead of requiring all inputs upfront, servers can interactively ask for information as needed.
The elicitation callback can return one of three actions:
| Action | Description |
| --- | --- |
| accept | User provided valid input. Include the data in the content field. |
| decline | User chose not to provide the requested information. |
| cancel | User cancelled the operation entirely. |
Response action examples
```python
# Accept with data
ElicitResult(action="accept", content={"email": "user@example.com", "age": 25})

# Decline (user doesn't want to provide info)
ElicitResult(action="decline")

# Cancel (abort the operation)
ElicitResult(action="cancel")
```