This guide shows you how to set up and use LangGraph’s prebuilt, reusable components, which are designed to help you construct agentic systems quickly and reliably.

Prerequisites

Before you start this tutorial, make sure you have an Anthropic API key available to the model (for example, exported as the ANTHROPIC_API_KEY environment variable).

1. Install dependencies

If you haven’t already, install LangGraph and LangChain:
npm install @langchain/langgraph @langchain/core @langchain/anthropic
@langchain/core provides the tool and chat model interfaces, and @langchain/anthropic provides the Anthropic chat model the agent will call.

2. Create an agent

To create an agent, use createReactAgent:
import { ChatAnthropic } from "@langchain/anthropic";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  // (1)!
  async ({ city }) => {
    return `It's always sunny in ${city}!`;
  },
  {
    name: "get_weather",
    description: "Get weather for a given city.",
    schema: z.object({
      city: z.string().describe("The city to get weather for"),
    }),
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-latest" }), // (2)!
  tools: [getWeather], // (3)!
  stateModifier: "You are a helpful assistant", // (4)!
});

// Run the agent
await agent.invoke({
  messages: [{ role: "user", content: "what is the weather in sf" }],
});
  1. Define a tool for the agent to use. Tools can be defined using the tool function. For more advanced tool usage and customization, check the tools page.
  2. Provide a language model for the agent to use. To learn more about configuring language models for the agents, check the models page.
  3. Provide a list of tools for the model to use.
  4. Provide a system prompt (instructions) to the language model used by the agent.
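Under the hood, the agent runs a simple loop: the model either requests a tool call or returns a final answer; tool results are appended to the message history and the model is called again. The following is a minimal plain-TypeScript sketch of that loop with a faked model — the names here are illustrative, not LangGraph APIs:

```typescript
type Message = { role: "user" | "assistant" | "tool"; content: string };
type ToolCall = { name: string; args: { city: string } };

// Illustrative tool registry, mirroring the getWeather tool above.
const tools: Record<string, (args: { city: string }) => string> = {
  get_weather: ({ city }) => `It's always sunny in ${city}!`,
};

// Fake model: requests a tool call first, then answers from the tool result.
function fakeModel(messages: Message[]): { toolCall?: ToolCall; content?: string } {
  const last = messages[messages.length - 1];
  if (last.role === "user") {
    return { toolCall: { name: "get_weather", args: { city: "sf" } } };
  }
  return { content: `Answer: ${last.content}` };
}

function runAgentLoop(userInput: string): Message[] {
  const messages: Message[] = [{ role: "user", content: userInput }];
  for (let step = 0; step < 10; step++) { // cap steps to avoid infinite loops
    const reply = fakeModel(messages);
    if (reply.toolCall) {
      const result = tools[reply.toolCall.name](reply.toolCall.args);
      messages.push({ role: "tool", content: result }); // feed result back in
    } else {
      messages.push({ role: "assistant", content: reply.content! });
      break; // final answer reached
    }
  }
  return messages;
}

const history = runAgentLoop("what is the weather in sf");
```

The real agent does the same thing with an actual LLM deciding when to call tools and when to stop.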

3. Configure an LLM

To configure an LLM with specific parameters, such as temperature, use a model instance:
import { ChatAnthropic } from "@langchain/anthropic";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// highlight-next-line
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
  // highlight-next-line
  temperature: 0,
});

const agent = createReactAgent({
  // highlight-next-line
  llm: model,
  tools: [getWeather],
});
For more information on how to configure LLMs, see Models.

4. Add a custom prompt

Prompts instruct the LLM how to behave. Add one of the following types of prompts:
  • Static: A string is interpreted as a system message.
  • Dynamic: A list of messages generated at runtime, based on input or configuration.
To use a static prompt, define a fixed string:
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-latest" }),
  tools: [getWeather],
  // A static prompt that never changes
  // highlight-next-line
  stateModifier: "Never answer questions about the weather."
});

await agent.invoke({
  messages: [{ role: "user", content: "what is the weather in sf" }]
});
For more information, see Context.
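A dynamic prompt is conceptually just a function from the current state (and configuration) to a message list, evaluated on every model call. The following plain-TypeScript sketch shows the idea — the function and config names are illustrative, not LangGraph APIs:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };
type Config = { userName: string };

// Build the system message at runtime from configuration, then prepend it.
function dynamicPrompt(messages: Message[], config: Config): Message[] {
  const system: Message = {
    role: "system",
    content: `You are a helpful assistant. Address the user as ${config.userName}.`,
  };
  return [system, ...messages];
}

const prompt = dynamicPrompt(
  [{ role: "user", content: "what is the weather in sf" }],
  { userName: "John" }
);
```

Because the function runs at invocation time, the system message can reflect per-user or per-session configuration rather than being fixed when the agent is created.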

5. Add memory

To allow multi-turn conversations with an agent, you need to enable persistence by providing a checkpointer when creating an agent. At runtime, you need to provide a config containing thread_id — a unique identifier for the conversation (session):
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MemorySaver } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";

// highlight-next-line
const checkpointer = new MemorySaver();

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-latest" }),
  tools: [getWeather],
  // highlight-next-line
  checkpointSaver: checkpointer, // (1)!
});

// Run the agent
// highlight-next-line
const config = { configurable: { thread_id: "1" } };
const sfResponse = await agent.invoke(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  // highlight-next-line
  config // (2)!
);
const nyResponse = await agent.invoke(
  { messages: [{ role: "user", content: "what about new york?" }] },
  // highlight-next-line
  config
);
  1. checkpointSaver allows the agent to store its state at every step in the tool calling loop. This enables short-term memory and human-in-the-loop capabilities.
  2. Pass configuration with thread_id to be able to resume the same conversation on future agent invocations.
When you enable the checkpointer, it stores agent state at every step in the provided checkpointer database (or in memory, if using MemorySaver). Note that in the above example, when the agent is invoked the second time with the same thread_id, the original message history from the first conversation is automatically included, together with the new user input. For more information, see Memory.
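Conceptually, the checkpointer keys saved state by thread_id, so every invocation with the same id resumes the same history while different ids stay isolated. A minimal plain-TypeScript sketch of that idea — illustrative only, not how MemorySaver is actually implemented:

```typescript
type Message = { role: string; content: string };

// In-memory store keyed by thread id, as the checkpointer conceptually does.
class TinyCheckpointer {
  private threads = new Map<string, Message[]>();

  invoke(threadId: string, newMessages: Message[]): Message[] {
    const history = this.threads.get(threadId) ?? []; // resume prior turns
    const updated = [...history, ...newMessages];     // append new input
    this.threads.set(threadId, updated);              // persist for next call
    return updated;
  }
}

const saver = new TinyCheckpointer();
const first = saver.invoke("1", [{ role: "user", content: "what is the weather in sf" }]);
const second = saver.invoke("1", [{ role: "user", content: "what about new york?" }]);
const other = saver.invoke("2", [{ role: "user", content: "hello" }]);
```

The second call on thread "1" sees the earlier San Francisco question, which is why the follow-up "what about new york?" has the context it needs, while thread "2" starts fresh.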

6. Configure structured output

To produce structured responses conforming to a schema, use the responseFormat parameter. The schema can be defined with a Zod schema. The result will be accessible via the structuredResponse field.
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

const WeatherResponse = z.object({
  conditions: z.string(),
});

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-latest" }),
  tools: [getWeather],
  // highlight-next-line
  responseFormat: WeatherResponse, // (1)!
});

const response = await agent.invoke({
  messages: [{ role: "user", content: "what is the weather in sf" }],
});

// highlight-next-line
response.structuredResponse;
  1. When responseFormat is provided, a separate step is added at the end of the agent loop: agent message history is passed to an LLM with structured output to generate a structured response. To provide a system prompt to this LLM, use an object { prompt, schema }, e.g., responseFormat: { prompt, schema: WeatherResponse }.
Note (LLM post-processing): Structured output requires an additional call to the LLM to format the response according to the schema.
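The formatting step can be thought of as: take the final message history, ask the model for JSON matching the schema, and validate the result. Zod performs the validation half for real; the following plain-TypeScript sketch (with illustrative names) shows what that check amounts to for the WeatherResponse schema above:

```typescript
type WeatherResponse = { conditions: string };

// Validate that raw model output conforms to the WeatherResponse shape.
function parseWeatherResponse(raw: unknown): WeatherResponse {
  if (
    typeof raw !== "object" || raw === null ||
    typeof (raw as { conditions?: unknown }).conditions !== "string"
  ) {
    throw new Error("model output does not match WeatherResponse schema");
  }
  return raw as WeatherResponse;
}

const structured = parseWeatherResponse({ conditions: "sunny" });
```

If the model's output doesn't match the schema, validation fails rather than silently returning a malformed object, which is what makes structuredResponse safe to consume downstream.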

Next steps