LangGraph provides both low-level primitives and high-level prebuilt components for building agent-based applications. This section focuses on the prebuilt, ready-to-use components designed to help you construct agentic systems quickly and reliably—without the need to implement orchestration, memory, or human feedback handling from scratch.

What is an agent?

An agent consists of three components: a large language model (LLM), a set of tools it can use, and a prompt that provides instructions. The LLM operates in a loop: in each iteration, it selects a tool to invoke, provides input, receives the result (an observation), and uses that observation to inform the next action. The loop continues until a stopping condition is met, typically when the agent has gathered enough information to respond to the user.

Agent loop: the LLM selects tools and uses their outputs to fulfill a user request.

Key features

LangGraph includes several capabilities essential for building robust, production-ready agentic systems:
  • Memory integration: Native support for short-term (session-based) and long-term (persistent across sessions) memory, enabling stateful behaviors in chatbots and assistants.
  • Human-in-the-loop control: Execution can pause indefinitely to await human feedback—unlike websocket-based solutions limited to real-time interaction. This enables asynchronous approval, correction, or intervention at any point in the workflow.
  • Streaming support: Real-time streaming of agent state, model tokens, tool outputs, or combined streams.
  • Deployment tooling: Tools for deploying agents without managing infrastructure; LangGraph Platform supports testing, debugging, and deployment.

Package ecosystem

The high-level, prebuilt components are organized into several packages, each with a specific focus:
| Package | Description | Installation |
| --- | --- | --- |
| `langgraph` | Prebuilt components to create agents | `npm install @langchain/langgraph @langchain/core` |
| `langgraph-supervisor` | Tools for building supervisor agents | `npm install @langchain/langgraph-supervisor` |
| `langgraph-swarm` | Tools for building a swarm multi-agent system | `npm install @langchain/langgraph-swarm` |
| `langchain-mcp-adapters` | Interfaces to MCP servers for tool and resource integration | `npm install @langchain/mcp-adapters` |
| `agentevals` | Utilities to evaluate agent performance | `npm install agentevals` |

Set up

You can use any chat model that supports structured outputs and tool calling. Below, we walk through installing the packages, setting API keys, and testing structured outputs and tool calling with Anthropic.

Install dependencies
```shell
npm install @langchain/core @langchain/anthropic @langchain/langgraph
```
Initialize an LLM
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

process.env.ANTHROPIC_API_KEY = "YOUR_API_KEY";

const llm = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });
```