LangGraph provides both low-level primitives and high-level prebuilt components for building agent-based applications. This section focuses on the prebuilt, ready-to-use components designed to help you construct agentic systems quickly and reliably—without the need to implement orchestration, memory, or human feedback handling from scratch.

What is an agent?

An agent consists of three components: a large language model (LLM), a set of tools it can use, and a prompt that provides instructions. The LLM operates in a loop: in each iteration, it selects a tool to invoke, supplies the input, receives the result (an observation), and uses that observation to decide the next action. The loop continues until a stopping condition is met, typically when the agent has gathered enough information to respond to the user.

Agent loop: the LLM selects tools and uses their outputs to fulfill a user request.

Key features

LangGraph includes several capabilities essential for building robust, production-ready agentic systems:
  • Memory integration: Native support for short-term (session-based) and long-term (persistent across sessions) memory, enabling stateful behaviors in chatbots and assistants.
  • Human-in-the-loop control: Execution can pause indefinitely to await human feedback—unlike websocket-based solutions limited to real-time interaction. This enables asynchronous approval, correction, or intervention at any point in the workflow.
  • Streaming support: Real-time streaming of agent state, model tokens, tool outputs, or combined streams.
  • Deployment tooling: Infrastructure-free deployment support; LangGraph Platform adds workflows for testing, debugging, and deploying agents.

Package ecosystem

The high-level, prebuilt components are organized into several packages, each with a specific focus:
  • langgraph-prebuilt (part of langgraph): Prebuilt components to create agents. Install: pip install -U langgraph langchain
  • langgraph-supervisor: Tools for building supervisor agents. Install: pip install -U langgraph-supervisor
  • langgraph-swarm: Tools for building a swarm multi-agent system. Install: pip install -U langgraph-swarm
  • langchain-mcp-adapters: Interfaces to MCP servers for tool and resource integration. Install: pip install -U langchain-mcp-adapters
  • langmem: Agent memory management, short-term and long-term. Install: pip install -U langmem
  • agentevals: Utilities to evaluate agent performance. Install: pip install -U agentevals

Set up

You can use any chat model that supports structured outputs and tool calling. Below, we install the packages, set API keys, and test structured outputs and tool calling with Anthropic.

Install dependencies
pip install -U langchain-core langchain-anthropic langgraph
Initialize an LLM
import os
import getpass

from langchain_anthropic import ChatAnthropic

def _set_env(var: str):
    # Prompt for the value only if it is not already set in the environment.
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")