To handle queries that your chatbot can’t answer “from memory”, integrate a web search tool. The chatbot can use this tool to find relevant information and provide better responses.
This tutorial builds on Build a basic chatbot.

Prerequisites

Before you start this tutorial, ensure you have the following:
  • An API key for the Tavily Search Engine

1. Install the search engine

Install the requirements to use the Tavily Search Engine:
npm install @langchain/tavily

2. Configure your environment

Configure your environment with your search engine API key:
process.env.TAVILY_API_KEY = "tvly-...";
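Hardcoding the key in source is fine for a quick experiment, but in practice you may prefer to read it from the environment and fail fast if it is missing. A minimal sketch (the `requireEnv` helper is illustrative, not part of any library):

```typescript
// Illustrative helper (not part of any library): read a required
// environment variable or fail fast with a clear error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const tavilyApiKey = requireEnv("TAVILY_API_KEY");
```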

3. Define the tool

Define the web search tool:
import { TavilySearch } from "@langchain/tavily";

const tool = new TavilySearch({ maxResults: 2 });
const tools = [tool];

await tool.invoke({ query: "What's a 'node' in LangGraph?" });
The results are page summaries our chatbot can use to answer questions:
{
  "query": "What's a 'node' in LangGraph?",
  "follow_up_questions": null,
  "answer": null,
  "images": [],
  "results": [
    {
      "url": "https://blog.langchain.dev/langgraph/",
      "title": "LangGraph - LangChain Blog",
      "content": "TL;DR: LangGraph is module built on top of LangChain to better enable creation of cyclical graphs, often needed for agent runtimes. This state is updated by nodes in the graph, which return operations to attributes of this state (in the form of a key-value store). After adding nodes, you can then add edges to create the graph. An example of this may be in the basic agent runtime, where we always want the model to be called after we call a tool. The state of this graph by default contains concepts that should be familiar to you if you've used LangChain agents: `input`, `chat_history`, `intermediate_steps` (and `agent_outcome` to represent the most recent agent outcome)",
      "score": 0.7407191,
      "raw_content": null
    },
    {
      "url": "https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141",
      "title": "Introduction to LangGraph: A Beginner's Guide - Medium",
      "content": "*   **Stateful Graph:** LangGraph revolves around the concept of a stateful graph, where each node in the graph represents a step in your computation, and the graph maintains a state that is passed around and updated as the computation progresses. LangGraph supports conditional edges, allowing you to dynamically determine the next node to execute based on the current state of the graph. Image 10: Introduction to AI Agent with LangChain and LangGraph: A Beginner’s Guide Image 18: How to build LLM Agent with LangGraph — StateGraph and Reducer Image 20: Simplest Graphs using LangGraph Framework Image 24: Building a ReAct Agent with Langgraph: A Step-by-Step Guide Image 28: Building an Agentic RAG with LangGraph: A Step-by-Step Guide",
      "score": 0.65279555,
      "raw_content": null
    }
  ],
  "response_time": 1.34
}

4. Define the graph

For the StateGraph you created in the first tutorial, call bindTools on the LLM. This tells the LLM the correct JSON format to use if it wants to use the search engine. Let’s first select our LLM:
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });
We can now incorporate it into a StateGraph:
import { StateGraph, MessagesZodState } from "@langchain/langgraph";
import { z } from "zod";

const State = z.object({ messages: MessagesZodState.shape.messages });

const chatbot = async (state: z.infer<typeof State>) => {
  // Modification: tell the LLM which tools it can call
  const llmWithTools = llm.bindTools(tools);

  return { messages: [await llmWithTools.invoke(state.messages)] };
};
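When the bound LLM decides to use the search engine, its response message carries a tool_calls array rather than (or in addition to) text content. The real messages are AIMessage instances from @langchain/core, but the relevant shape is roughly the following (a simplified illustration, not the full SDK type; the tool name and call ID shown are examples):

```typescript
// Simplified illustration of an AI message that requests a tool call.
// Real messages are AIMessage instances from @langchain/core/messages.
type ToolCall = {
  id: string; // provider-assigned ID, echoed back in the matching ToolMessage
  name: string; // must match the name of a bound tool
  args: Record<string, unknown>; // arguments matching the tool's input schema
};

const exampleAiMessage = {
  content: "",
  tool_calls: [
    {
      id: "call_123",
      name: "tavily_search",
      args: { query: "What's a 'node' in LangGraph?" },
    },
  ] as ToolCall[],
};
```

The tool node in the next step reads exactly these fields: it looks up the tool by `name`, invokes it with `args`, and echoes `id` back as `tool_call_id`.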

5. Create a function to run the tools

Now, create a function to run the tools if they are called. Do this by adding the tools to a new node called "tools" that checks the most recent message in the state and calls tools if the message contains tool_calls. This step relies on the LLM’s tool-calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.
import type { StructuredToolInterface } from "@langchain/core/tools";
import { isAIMessage, ToolMessage } from "@langchain/core/messages";

function createToolNode(tools: StructuredToolInterface[]) {
  const toolByName: Record<string, StructuredToolInterface> = {};
  for (const tool of tools) {
    toolByName[tool.name] = tool;
  }

  return async (inputs: z.infer<typeof State>) => {
    const { messages } = inputs;
    if (!messages || messages.length === 0) {
      throw new Error("No message found in input");
    }

    const message = messages.at(-1);
    if (!message || !isAIMessage(message) || !message.tool_calls) {
      throw new Error("Last message is not an AI message with tool calls");
    }

    const outputs: ToolMessage[] = [];
    for (const toolCall of message.tool_calls) {
      if (!toolCall.id) throw new Error("Tool call ID is required");

      const tool = toolByName[toolCall.name];
      if (!tool) throw new Error(`Tool ${toolCall.name} not found`);

      const result = await tool.invoke(toolCall.args);

      outputs.push(
        new ToolMessage({
          content: JSON.stringify(result),
          name: toolCall.name,
          tool_call_id: toolCall.id,
        })
      );
    }

    return { messages: outputs };
  };
}
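The dispatch pattern inside createToolNode (index the tools by name, run each requested call, wrap each result with its tool_call_id) can be exercised in isolation. Here is a self-contained sketch with plain objects standing in for LangChain's tool and message classes; the `echo` stub tool is purely illustrative:

```typescript
// Self-contained sketch of the dispatch pattern used by createToolNode,
// with plain objects standing in for LangChain's tool and message classes.
type StubTool = {
  name: string;
  invoke: (args: Record<string, unknown>) => Promise<unknown>;
};
type StubToolCall = { id: string; name: string; args: Record<string, unknown> };

const stubTools: StubTool[] = [
  // A trivial stand-in for the search tool: echoes its arguments back.
  { name: "echo", invoke: async (args) => ({ echoed: args }) },
];

async function dispatch(toolCalls: StubToolCall[], tools: StubTool[]) {
  // Index tools by name, then run each requested call and wrap the result.
  const byName = Object.fromEntries(tools.map((t) => [t.name, t] as const));
  const outputs = [];
  for (const call of toolCalls) {
    const tool = byName[call.name];
    if (!tool) throw new Error(`Tool ${call.name} not found`);
    outputs.push({
      content: JSON.stringify(await tool.invoke(call.args)),
      name: call.name,
      tool_call_id: call.id, // echoes the ID so the LLM can match the result
    });
  }
  return outputs;
}
```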
If you do not want to build this yourself, you can use LangGraph’s prebuilt ToolNode.

6. Define the conditional edges

With the tool node added, you can now define the conditional edges. Edges route control flow from one node to the next. Conditional edges start from a single node and usually contain “if” statements that route to different nodes depending on the current graph state. These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.

Next, define a router function called routeTools that checks for tool_calls in the chatbot’s output. Provide this function to the graph by calling addConditionalEdges, which tells the graph to check this function whenever the chatbot node completes to see where to go next. The condition routes to tools if tool calls are present and to END if not. Because the condition can return END, you do not need to explicitly set a finish point this time.
import { END, START } from "@langchain/langgraph";

const routeTools = (state: z.infer<typeof State>) => {
  /**
   * Use as conditional edge to route to the ToolNode if the last message
   * has tool calls.
   */
  const lastMessage = state.messages.at(-1);
  if (
    lastMessage &&
    isAIMessage(lastMessage) &&
    lastMessage.tool_calls?.length
  ) {
    return "tools";
  }

  /** Otherwise, route to the end. */
  return END;
};

const graph = new StateGraph(State)
  .addNode("chatbot", chatbot)

  // The tool node runs whichever tools the chatbot's last message requested
  .addNode("tools", createToolNode(tools))

  // Start the graph with the chatbot
  .addEdge(START, "chatbot")

  // The `routeTools` function returns "tools" if the chatbot asks to use a tool, and "END" if
  // it is fine directly responding.
  .addConditionalEdges("chatbot", routeTools, ["tools", END])

  // Any time a tool is called, we need to return to the chatbot
  .addEdge("tools", "chatbot")
  .compile();
You can replace this with the prebuilt toolsCondition to be more concise.

7. Visualize the graph

You can visualize the graph using the getGraphAsync method and render it with the drawMermaidPng method.
import * as fs from "node:fs/promises";

const drawableGraph = await graph.getGraphAsync();
const image = await drawableGraph.drawMermaidPng();
const imageBuffer = new Uint8Array(await image.arrayBuffer());

await fs.writeFile("chatbot-with-tools.png", imageBuffer);
The rendered diagram (chatbot-with-tools.png) shows the chatbot and tools nodes connected in a loop.

8. Ask the bot questions

Now you can ask the chatbot questions outside its training data:
import readline from "node:readline/promises";

const prompt = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function generateText(content: string) {
  const stream = await graph.stream(
    { messages: [{ type: "human", content }] },
    { streamMode: "values" }
  );

  for await (const event of stream) {
    const lastMessage = event.messages.at(-1);

    if (lastMessage?.getType() === "ai" || lastMessage?.getType() === "tool") {
      console.log(`Assistant: ${lastMessage?.text}`);
    }
  }
}

while (true) {
  const human = await prompt.question("User: ");
  if (["quit", "exit", "q"].includes(human.trim())) break;
  await generateText(human || "What do you know about LangGraph?");
}

prompt.close();
User: What do you know about LangGraph?
Assistant: I'll search for the latest information about LangGraph for you.
Assistant: [{"title":"Introduction to LangGraph: A Beginner's Guide - Medium","url":"https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141","content":"..."}]
Assistant: Based on the search results, I can provide you with information about LangGraph:

LangGraph is a library within the LangChain ecosystem designed for building stateful, multi-actor applications with Large Language Models (LLMs). Here are the key aspects:

**Core Purpose:**
- LangGraph is specifically designed for creating agent and multi-agent workflows
- It provides a framework for defining, coordinating, and executing multiple LLM agents in a structured manner

**Key Features:**
1. **Stateful Graph Architecture**: LangGraph revolves around a stateful graph where each node represents a step in computation, and the graph maintains state that is passed around and updated as the computation progresses

2. **Conditional Edges**: It supports conditional edges, allowing you to dynamically determine the next node to execute based on the current state of the graph

3. **Cycles**: Unlike other LLM frameworks, LangGraph allows you to define flows that involve cycles, which is essential for most agentic architectures

4. **Controllability**: It offers enhanced control over the application flow

5. **Persistence**: The library provides ways to maintain state and persistence in LLM-based applications

**Use Cases:**
- Conversational agents
- Complex task automation
- Custom LLM-backed experiences
- Multi-agent systems that perform complex tasks

**Benefits:**
LangGraph allows developers to focus on the high-level logic of their applications rather than the intricacies of agent coordination, making it easier to build complex, production-ready features with LLMs.

This makes LangGraph a significant tool in the evolving landscape of LLM-based application development.

9. Use prebuilts

For ease of use, adjust your code to replace the following with LangGraph prebuilt components. These have built-in functionality like parallel API execution.
  • createToolNode is replaced with the prebuilt ToolNode
  • routeTools is replaced with the prebuilt toolsCondition
import { TavilySearch } from "@langchain/tavily";
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, START, MessagesZodState, END } from "@langchain/langgraph";
import { ToolNode, toolsCondition } from "@langchain/langgraph/prebuilt";
import { z } from "zod";

const State = z.object({ messages: MessagesZodState.shape.messages });

const tools = [new TavilySearch({ maxResults: 2 })];

const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools(tools);

const graph = new StateGraph(State)
  .addNode("chatbot", async (state) => ({
    messages: [await llm.invoke(state.messages)],
  }))
  .addNode("tools", new ToolNode(tools))
  .addConditionalEdges("chatbot", toolsCondition, ["tools", END])
  .addEdge("tools", "chatbot")
  .addEdge(START, "chatbot")
  .compile();
Congratulations! You’ve created a conversational agent in LangGraph that can use a search engine to retrieve updated information when needed. Now it can handle a wider range of user queries.

Next steps

The chatbot cannot remember past interactions on its own, which limits its ability to have coherent, multi-turn conversations. In the next part, you will add memory to address this.