# Streaming

<Info>
  **In preview:** Try event streaming: typed projections over messages, state, subgraphs, output, and custom extensions. Start with the [Event Streaming summary](/oss/javascript/langgraph/streaming/event-streaming), or explore runnable examples in the [streaming cookbook](https://github.com/langchain-ai/streaming-cookbook).
</Info>

LangGraph implements a streaming system to surface real-time updates. Streaming is crucial for enhancing the responsiveness of applications built on LLMs. By displaying output progressively, even before a complete response is ready, streaming significantly improves user experience (UX), particularly when dealing with the latency of LLMs.

<Tip>
  Debug streaming events, inspect token-by-token LLM output, and monitor latency with [LangSmith](https://smith.langchain.com?utm_source=docs\&utm_medium=cta\&utm_campaign=langsmith-signup\&utm_content=oss-langgraph-streaming). Follow the [tracing quickstart](/langsmith/trace-with-langgraph) to get set up.
</Tip>

## Get started

### Basic usage

LangGraph graphs expose the [`stream`](https://reference.langchain.com/javascript/classes/_langchain_langgraph.pregel.Pregel.html#stream) method, which yields streamed outputs as an async iterator.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
for await (const chunk of await graph.stream(inputs, {
  streamMode: "updates",
})) {
  console.log(chunk);
}
```

## Stream modes

Pass one or more of the following stream modes to the [`stream`](https://reference.langchain.com/javascript/classes/_langchain_langgraph.index.CompiledStateGraph.html#stream) method, either a single mode as a string or several as an array:

| Mode                    | Description                                                                                    |
| :---------------------- | :--------------------------------------------------------------------------------------------- |
| [values](#graph-state)  | Full state after each step.                                                                    |
| [updates](#graph-state) | State updates after each step. Multiple updates in the same step are streamed separately.      |
| [messages](#llm-tokens) | Tuples of `[token, metadata]` from LLM calls.                                                  |
| [custom](#custom-data)  | Custom data emitted from nodes via the `writer` config parameter.                              |
| [tools](#tool-progress) | Tool-call lifecycle events (`on_tool_start`, `on_tool_event`, `on_tool_end`, `on_tool_error`). |
| [debug](#debug)         | All available info throughout graph execution.                                                 |

<a id="messages" />

### Graph state

Use the stream modes `updates` and `values` to stream the state of the graph as it executes.

* `updates` streams the **updates** to the state after each step of the graph.
* `values` streams the **full value** of the state after each step of the graph.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { StateGraph, StateSchema, START, END } from "@langchain/langgraph";
import { z } from "zod/v4";

const State = new StateSchema({
  topic: z.string(),
  joke: z.string(),
});

const graph = new StateGraph(State)
  .addNode("refineTopic", (state) => {
    return { topic: state.topic + " and cats" };
  })
  .addNode("generateJoke", (state) => {
    return { joke: `This is a joke about ${state.topic}` };
  })
  .addEdge(START, "refineTopic")
  .addEdge("refineTopic", "generateJoke")
  .addEdge("generateJoke", END)
  .compile();
```

<Tabs>
  <Tab title="updates">
    Use this to stream only the **state updates** returned by the nodes after each step. The streamed outputs include the name of the node as well as the update.

    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    for await (const chunk of await graph.stream(
      { topic: "ice cream" },
      { streamMode: "updates" }
    )) {
      for (const [nodeName, state] of Object.entries(chunk)) {
        console.log(`Node ${nodeName} updated:`, state);
      }
    }
    ```
  </Tab>

  <Tab title="values">
    Use this to stream the **full state** of the graph after each step.

    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    for await (const chunk of await graph.stream(
      { topic: "ice cream" },
      { streamMode: "values" }
    )) {
      console.log(`topic: ${chunk.topic}, joke: ${chunk.joke}`);
    }
    ```
  </Tab>
</Tabs>

<a id="messages" />

### LLM tokens

Use the `messages` streaming mode to stream Large Language Model (LLM) outputs **token by token** from any part of your graph, including nodes, tools, subgraphs, or tasks.

The streamed output from [`messages` mode](#stream-modes) is a tuple `[messageChunk, metadata]` where:

* `messageChunk`: the token or message segment from the LLM.
* `metadata`: an object containing details about the graph node and LLM invocation.

<Note>
  If your LLM is not available as a LangChain integration, you can stream its outputs using `custom` mode instead. See [use with any LLM](#use-with-any-llm) for details.
</Note>

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, StateSchema, GraphNode, START } from "@langchain/langgraph";
import * as z from "zod";

const MyState = new StateSchema({
  topic: z.string(),
  joke: z.string().default(""),
});

const model = new ChatOpenAI({ model: "gpt-5.4-mini" });

const callModel: GraphNode<typeof MyState> = async (state) => {
  // Call the LLM to generate a joke about a topic
  // Note that message events are emitted even when the LLM is run using .invoke rather than .stream
  const modelResponse = await model.invoke([
    { role: "user", content: `Generate a joke about ${state.topic}` },
  ]);
  return { joke: modelResponse.content };
};

const graph = new StateGraph(MyState)
  .addNode("callModel", callModel)
  .addEdge(START, "callModel")
  .compile();

// The "messages" stream mode returns an iterator of tuples [messageChunk, metadata]
// where messageChunk is the token streamed by the LLM and metadata is a dictionary
// with information about the graph node where the LLM was called and other information
for await (const [messageChunk, metadata] of await graph.stream(
  { topic: "ice cream" },
  { streamMode: "messages" }
)) {
  if (messageChunk.content) {
    console.log(messageChunk.content + "|");
  }
}
```

#### Filter by LLM invocation

You can associate `tags` with LLM invocations to filter the streamed tokens by LLM invocation.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { ChatOpenAI } from "@langchain/openai";

// model1 is tagged with "joke"
const model1 = new ChatOpenAI({
  model: "gpt-5.4-mini",
  tags: ["joke"]
});
// model2 is tagged with "poem"
const model2 = new ChatOpenAI({
  model: "gpt-5.4-mini",
  tags: ["poem"]
});

const graph = // ... define a graph that uses these LLMs

// The streamMode is set to "messages" to stream LLM tokens
// The metadata contains information about the LLM invocation, including the tags
for await (const [msg, metadata] of await graph.stream(
  { topic: "cats" },
  { streamMode: "messages" }
)) {
  // Filter the streamed tokens by the tags field in the metadata to only include
  // the tokens from the LLM invocation with the "joke" tag
  if (metadata.tags?.includes("joke")) {
    console.log(msg.content + "|");
  }
}
```

<Accordion title="Extended example: filtering by tags">
  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { ChatOpenAI } from "@langchain/openai";
  import { StateGraph, StateSchema, GraphNode, START } from "@langchain/langgraph";
  import * as z from "zod";

  // The jokeModel is tagged with "joke"
  const jokeModel = new ChatOpenAI({
    model: "gpt-5.4-mini",
    tags: ["joke"]
  });
  // The poemModel is tagged with "poem"
  const poemModel = new ChatOpenAI({
    model: "gpt-5.4-mini",
    tags: ["poem"]
  });

  const State = new StateSchema({
    topic: z.string(),
    joke: z.string(),
    poem: z.string(),
  });

  const callModel: GraphNode<typeof State> = async (state) => {
    const topic = state.topic;
    console.log("Writing joke...");

    const jokeResponse = await jokeModel.invoke([
      { role: "user", content: `Write a joke about ${topic}` }
    ]);

    console.log("\n\nWriting poem...");
    const poemResponse = await poemModel.invoke([
      { role: "user", content: `Write a short poem about ${topic}` }
    ]);

    return {
      joke: jokeResponse.content,
      poem: poemResponse.content
    };
  };

  const graph = new StateGraph(State)
    .addNode("callModel", callModel)
    .addEdge(START, "callModel")
    .compile();

  // The streamMode is set to "messages" to stream LLM tokens
  // The metadata contains information about the LLM invocation, including the tags
  for await (const [msg, metadata] of await graph.stream(
    { topic: "cats" },
    { streamMode: "messages" }
  )) {
    // Filter the streamed tokens by the tags field in the metadata to only include
    // the tokens from the LLM invocation with the "joke" tag
    if (metadata.tags?.includes("joke")) {
      console.log(msg.content + "|");
    }
  }
  ```
</Accordion>

#### Omit messages from the stream

Use the `nostream` tag to exclude LLM output from the stream entirely. Invocations tagged with `nostream` still run and produce output; their tokens are simply not emitted in `messages` mode.

This is useful when:

* You need LLM output for internal processing (for example structured output) but do not want to stream it to the client
* You stream the same content through a different channel (for example custom UI messages) and want to avoid duplicate output in the `messages` stream

```ts theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { ChatAnthropic } from "@langchain/anthropic";
import { StateGraph, StateSchema, GraphNode, START } from "@langchain/langgraph";
import * as z from "zod";

const streamModel = new ChatAnthropic({ model: "claude-haiku-4-5-20251001" });
const internalModel = new ChatAnthropic({
  model: "claude-haiku-4-5-20251001",
}).withConfig({
  tags: ["nostream"],
});

const State = new StateSchema({
  topic: z.string(),
  answer: z.string().optional(),
  notes: z.string().optional(),
});

const writeAnswer: GraphNode<typeof State> = async (state) => {
  const r = await streamModel.invoke([
    { role: "user", content: `Reply briefly about ${state.topic}` },
  ]);
  return { answer: r.content };
};

const internalNotes: GraphNode<typeof State> = async (state) => {
  // Tokens from this model are omitted from streamMode: "messages" because of nostream
  const r = await internalModel.invoke([
    { role: "user", content: `Private notes on ${state.topic}` },
  ]);
  return { notes: r.content };
};

const graph = new StateGraph(State)
  .addNode("writeAnswer", writeAnswer)
  .addNode("internal_notes", internalNotes)
  .addEdge(START, "writeAnswer")
  .addEdge("writeAnswer", "internal_notes")
  .compile();

const stream = await graph.stream(
  { topic: "AI" },
  { streamMode: "messages" },
);

for await (const [messageChunk] of stream) {
  // Only tokens from writeAnswer's model appear here; internalNotes'
  // tokens are omitted because of the nostream tag
  console.log(messageChunk.content);
}

#### Filter by node

To stream tokens only from specific nodes, use `streamMode: "messages"` and filter the outputs by the `langgraph_node` field in the streamed metadata:

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
// The "messages" stream mode returns a tuple of [messageChunk, metadata]
// where messageChunk is the token streamed by the LLM and metadata is a dictionary
// with information about the graph node where the LLM was called and other information
for await (const [msg, metadata] of await graph.stream(
  inputs,
  { streamMode: "messages" }
)) {
  // Filter the streamed tokens by the langgraph_node field in the metadata
  // to only include the tokens from the specified node
  if (msg.content && metadata.langgraph_node === "some_node_name") {
    // ...
  }
}
```

<Accordion title="Extended example: streaming LLM tokens from specific nodes">
  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { ChatOpenAI } from "@langchain/openai";
  import { StateGraph, StateSchema, GraphNode, START } from "@langchain/langgraph";
  import * as z from "zod";

  const model = new ChatOpenAI({ model: "gpt-5.4-mini" });

  const State = new StateSchema({
    topic: z.string(),
    joke: z.string(),
    poem: z.string(),
  });

  const writeJoke: GraphNode<typeof State> = async (state) => {
    const topic = state.topic;
    const jokeResponse = await model.invoke([
      { role: "user", content: `Write a joke about ${topic}` }
    ]);
    return { joke: jokeResponse.content };
  };

  const writePoem: GraphNode<typeof State> = async (state) => {
    const topic = state.topic;
    const poemResponse = await model.invoke([
      { role: "user", content: `Write a short poem about ${topic}` }
    ]);
    return { poem: poemResponse.content };
  };

  const graph = new StateGraph(State)
    .addNode("writeJoke", writeJoke)
    .addNode("writePoem", writePoem)
    // write both the joke and the poem concurrently
    .addEdge(START, "writeJoke")
    .addEdge(START, "writePoem")
    .compile();

  // The "messages" stream mode returns a tuple of [messageChunk, metadata]
  // where messageChunk is the token streamed by the LLM and metadata is a dictionary
  // with information about the graph node where the LLM was called and other information
  for await (const [msg, metadata] of await graph.stream(
    { topic: "cats" },
    { streamMode: "messages" }
  )) {
    // Filter the streamed tokens by the langgraph_node field in the metadata
    // to only include the tokens from the writePoem node
    if (msg.content && metadata.langgraph_node === "writePoem") {
      console.log(msg.content + "|");
    }
  }
  ```
</Accordion>

### Custom data

To send **custom user-defined data** from inside a LangGraph node or tool, follow these steps:

1. Use the `writer` parameter from the `LangGraphRunnableConfig` to emit custom data.
2. Set `streamMode: "custom"` when calling `.stream()` to receive the custom data. You can combine multiple modes (e.g., `["updates", "custom"]`), but `"custom"` must be among them for the custom data to be emitted.

<Tabs>
  <Tab title="node">
    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import { StateGraph, StateSchema, GraphNode, START, LangGraphRunnableConfig } from "@langchain/langgraph";
    import * as z from "zod";

    const State = new StateSchema({
      query: z.string(),
      answer: z.string(),
    });

    const node: GraphNode<typeof State> = async (state, config) => {
      // Use the writer to emit a custom key-value pair (e.g., progress update)
      config.writer?.({ custom_key: "Generating custom data inside node" });
      return { answer: "some data" };
    };

    const graph = new StateGraph(State)
      .addNode("node", node)
      .addEdge(START, "node")
      .compile();

    const inputs = { query: "example" };

    // Set streamMode: "custom" to receive the custom data in the stream
    for await (const chunk of await graph.stream(inputs, { streamMode: "custom" })) {
      console.log(chunk);
    }
    ```
  </Tab>

  <Tab title="tool">
    ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
    import { tool } from "@langchain/core/tools";
    import { LangGraphRunnableConfig } from "@langchain/langgraph";
    import * as z from "zod";

    const queryDatabase = tool(
      async (input, config: LangGraphRunnableConfig) => {
        // Use the writer to emit a custom key-value pair (e.g., progress update)
        config.writer({ data: "Retrieved 0/100 records", type: "progress" });
        // perform query
        // Emit another custom key-value pair
        config.writer({ data: "Retrieved 100/100 records", type: "progress" });
        return "some-answer";
      },
      {
        name: "query_database",
        description: "Query the database.",
        schema: z.object({
          query: z.string().describe("The query to execute."),
        }),
      }
    );

    const graph = // ... define a graph that uses this tool

    // Set streamMode: "custom" to receive the custom data in the stream
    for await (const chunk of await graph.stream(inputs, { streamMode: "custom" })) {
      console.log(chunk);
    }
    ```
  </Tab>
</Tabs>

### Tool progress

Use the `tools` stream mode to receive real-time lifecycle events for tool executions. This is useful for showing progress indicators, partial results, and error states in your UI while tools are running.

The `tools` stream mode emits four event types:

| Event           | When                          | Payload                        |
| --------------- | ----------------------------- | ------------------------------ |
| `on_tool_start` | Tool invocation begins        | `name`, `input`, `toolCallId`  |
| `on_tool_event` | Tool yields intermediate data | `name`, `data`, `toolCallId`   |
| `on_tool_end`   | Tool returns its final result | `name`, `output`, `toolCallId` |
| `on_tool_error` | Tool throws an error          | `name`, `error`, `toolCallId`  |

#### Define tools that stream progress

To emit `on_tool_event` events, define your tool function as an **async generator** (`async function*`). Each `yield` sends intermediate data to the stream, and the `return` value is used as the tool's final result.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { tool } from "@langchain/core/tools";
import { z } from "zod/v4";

const searchFlights = tool(
  async function* (input) {
    const airlines = ["United", "Delta", "American", "JetBlue"];
    const completed: string[] = [];

    for (let i = 0; i < airlines.length; i++) {
      await new Promise((r) => setTimeout(r, 500));
      completed.push(airlines[i]);

      // Each yield emits an on_tool_event to the stream
      yield {
        message: `Searching ${airlines[i]}...`,
        progress: (i + 1) / airlines.length,
        completed,
      };
    }

    // The return value becomes the tool result (ToolMessage.content)
    return JSON.stringify({
      flights: [
        { airline: "United", price: 450, duration: "5h 30m" },
        { airline: "Delta", price: 520, duration: "5h 15m" },
      ],
    });
  },
  {
    name: "search_flights",
    description: "Search for available flights to a destination.",
    schema: z.object({
      destination: z.string(),
      date: z.string(),
    }),
  }
);
```

<Note>
  Existing tools that return a `Promise` are fully compatible. They emit `on_tool_start` and `on_tool_end` events but no `on_tool_event` events.
</Note>
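
For example, a plain `Promise`-returning tool like the following (the `get_weather` name and body are illustrative) emits only start and end events:

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { tool } from "@langchain/core/tools";
import { z } from "zod/v4";

// A plain async tool: emits on_tool_start and on_tool_end,
// but never on_tool_event, since nothing is yielded mid-run
const getWeather = tool(
  async (input) => {
    return `It is sunny in ${input.city}.`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city.",
    schema: z.object({ city: z.string() }),
  }
);
```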

#### Consume tool events server-side

Pass `streamMode: ["tools"]` (or combine with other modes) to `graph.stream()`:

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
for await (const [mode, chunk] of await graph.stream(
  { messages: [{ role: "user", content: "Find flights to Tokyo" }] },
  { streamMode: ["updates", "tools"] }
)) {
  if (mode === "tools") {
    switch (chunk.event) {
      case "on_tool_start":
        console.log(`Tool started: ${chunk.name}`, chunk.input);
        break;
      case "on_tool_event":
        console.log(`Tool progress: ${chunk.name}`, chunk.data);
        break;
      case "on_tool_end":
        console.log(`Tool finished: ${chunk.name}`, chunk.output);
        break;
      case "on_tool_error":
        console.error(`Tool failed: ${chunk.name}`, chunk.error);
        break;
    }
  }
}
```

#### Use tool progress in React with `useStream`

The `useStream` hook from `@langchain/langgraph-sdk/react` exposes a `toolProgress` array when you include `"tools"` in your stream modes. Each entry is a `ToolProgress` object that tracks the current state of a running tool:

| Field        | Description                                                                     |
| ------------ | ------------------------------------------------------------------------------- |
| `name`       | The tool name                                                                   |
| `state`      | Current lifecycle state: `"starting"`, `"running"`, `"completed"`, or `"error"` |
| `toolCallId` | The tool call ID from the LLM                                                   |
| `input`      | The tool's input arguments                                                      |
| `data`       | The most recent yielded data from `on_tool_event`                               |
| `result`     | The final result, set on `on_tool_end`                                          |
| `error`      | The error, set on `on_tool_error`                                               |

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { useStream } from "@langchain/langgraph-sdk/react";

function Chat() {
  const stream = useStream({
    assistantId: "my-agent",
    streamMode: ["values", "tools"],
  });

  // Filter for actively running tools
  const activeTools = stream.toolProgress.filter(
    (t) => t.state === "starting" || t.state === "running"
  );

  return (
    <div>
      {stream.messages.map((msg) => (
        <MessageBubble key={msg.id} message={msg} />
      ))}

      {/* Show progress cards for running tools */}
      {activeTools.map((tool) => (
        <ToolProgressCard
          key={tool.toolCallId ?? tool.name}
          name={tool.name}
          state={tool.state}
          data={tool.data}
        />
      ))}
    </div>
  );
}
```

<Accordion title="Extended example: travel planning agent with tool progress">
  This example shows a complete agent with async-generator tools that stream search progress to a React UI.

  **Agent definition:**

  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { tool } from "@langchain/core/tools";
  import { ChatOpenAI } from "@langchain/openai";
  import { createAgent } from "@langchain/langgraph";
  import { MemorySaver } from "@langchain/langgraph";
  import { z } from "zod/v4";

  const searchFlights = tool(
    async function* (input) {
      const airlines = ["United", "Delta", "American", "JetBlue"];
      const completed: string[] = [];

      for (let i = 0; i < airlines.length; i++) {
        await new Promise((r) => setTimeout(r, 600));
        completed.push(`${airlines[i]}: checked`);
        yield {
          message: `Searching ${airlines[i]}...`,
          progress: (i + 1) / airlines.length,
          completed,
        };
      }

      return JSON.stringify({
        flights: [
          { airline: "United", price: 450, duration: "5h 30m" },
          { airline: "Delta", price: 520, duration: "5h 15m" },
        ],
      });
    },
    {
      name: "search_flights",
      description: "Search for available flights.",
      schema: z.object({
        destination: z.string(),
        departure_date: z.string(),
      }),
    }
  );

  const checkHotels = tool(
    async function* (input) {
      const hotels = ["Grand Hyatt", "Marriott", "Hilton"];
      const completed: string[] = [];

      for (let i = 0; i < hotels.length; i++) {
        await new Promise((r) => setTimeout(r, 400));
        completed.push(`${hotels[i]}: available`);
        yield {
          message: `Checking ${hotels[i]}...`,
          progress: (i + 1) / hotels.length,
          completed,
        };
      }

      return JSON.stringify({
        hotels: [
          { name: "Grand Hyatt", price: 250, rating: 4.5 },
          { name: "Marriott", price: 180, rating: 4.2 },
        ],
      });
    },
    {
      name: "check_hotels",
      description: "Check hotel availability.",
      schema: z.object({
        city: z.string(),
        check_in: z.string(),
        nights: z.number(),
      }),
    }
  );

  export const agent = createAgent({
    model: new ChatOpenAI({ model: "gpt-4o-mini" }),
    tools: [searchFlights, checkHotels],
    checkpointer: new MemorySaver(),
  });
  ```

  **React component with progress cards:**

  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { useStream } from "@langchain/langgraph-sdk/react";

  function TravelPlanner() {
    const stream = useStream<typeof agent>({
      assistantId: "travel-agent",
      streamMode: ["values", "tools"],
    });

    const activeTools = stream.toolProgress.filter(
      (t) => t.state === "starting" || t.state === "running"
    );

    return (
      <div>
        {stream.messages.map((msg) => (
          <div key={msg.id}>{msg.content}</div>
        ))}

        {activeTools.map((tool) => {
          const data = tool.data as {
            message?: string;
            progress?: number;
            completed?: string[];
          } | undefined;

          return (
            <div key={tool.toolCallId ?? tool.name}>
              <strong>{tool.name}</strong>
              {data?.message && <p>{data.message}</p>}
              {data?.progress != null && (
                <div style={{ width: "100%", background: "#eee" }}>
                  <div
                    style={{
                      width: `${data.progress * 100}%`,
                      background: "#4CAF50",
                      height: 8,
                      transition: "width 0.3s ease",
                    }}
                  />
                </div>
              )}
              {data?.completed?.map((step, i) => (
                <div key={i}>&#10003; {step}</div>
              ))}
            </div>
          );
        })}
      </div>
    );
  }
  ```
</Accordion>

#### `tools` vs `custom` stream mode

Both stream modes can surface tool progress, but they serve different purposes:

* **`tools`**—automatically emits structured lifecycle events (`on_tool_start`, `on_tool_event`, `on_tool_end`, `on_tool_error`) with no code changes needed in your tools beyond using `async function*`. The `useStream` hook provides the reactive `toolProgress` array out of the box.
* **`custom`**—gives you full control, via `config.writer()`, over what data is emitted and when. Use this when you need freeform data that doesn't map to the tool lifecycle, or when you want to stream from nodes (not just tools). A minimal contrast is sketched below.
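
As a rough illustration of the `custom` side, here is flight-search progress emitted by hand with `config.writer` (the `searchFlightsCustom` name and payload shape are illustrative, not a fixed contract):

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { tool } from "@langchain/core/tools";
import { LangGraphRunnableConfig } from "@langchain/langgraph";
import { z } from "zod/v4";

const searchFlightsCustom = tool(
  async (input, config: LangGraphRunnableConfig) => {
    const airlines = ["United", "Delta", "American", "JetBlue"];
    for (let i = 0; i < airlines.length; i++) {
      // You choose the payload shape and the emission points yourself
      config.writer?.({
        type: "progress",
        message: `Searching ${airlines[i]} for flights to ${input.destination}...`,
        progress: (i + 1) / airlines.length,
      });
    }
    return "results";
  },
  {
    name: "search_flights_custom",
    description: "Search for flights, reporting progress as custom stream events.",
    schema: z.object({ destination: z.string() }),
  }
);

// Consume with streamMode: "custom" (or combine, e.g. ["tools", "custom"])
```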

### Subgraph outputs

To include outputs from [subgraphs](/oss/javascript/langgraph/use-subgraphs) in the streamed outputs, set `subgraphs: true` in the options of the parent graph's `.stream()` call. This streams outputs from both the parent graph and any subgraphs.

The outputs will be streamed as tuples `[namespace, data]`, where `namespace` is an array of strings identifying the path to the node where the subgraph is invoked, e.g. `["parent_node:<task_id>", "child_node:<task_id>"]`.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
for await (const chunk of await graph.stream(
  { foo: "foo" },
  {
    // Set subgraphs: true to stream outputs from subgraphs
    subgraphs: true,
    streamMode: "updates",
  }
)) {
  console.log(chunk);
}
```

<Accordion title="Extended example: streaming from subgraphs">
  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { StateGraph, StateSchema, START } from "@langchain/langgraph";
  import { z } from "zod/v4";

  // Define subgraph
  const SubgraphState = new StateSchema({
    foo: z.string(), // note that this key is shared with the parent graph state
    bar: z.string(),
  });

  const subgraphBuilder = new StateGraph(SubgraphState)
    .addNode("subgraphNode1", (state) => {
      return { bar: "bar" };
    })
    .addNode("subgraphNode2", (state) => {
      return { foo: state.foo + state.bar };
    })
    .addEdge(START, "subgraphNode1")
    .addEdge("subgraphNode1", "subgraphNode2");
  const subgraph = subgraphBuilder.compile();

  // Define parent graph
  const ParentState = new StateSchema({
    foo: z.string(),
  });

  const builder = new StateGraph(ParentState)
    .addNode("node1", (state) => {
      return { foo: "hi! " + state.foo };
    })
    .addNode("node2", subgraph)
    .addEdge(START, "node1")
    .addEdge("node1", "node2");
  const graph = builder.compile();

  for await (const chunk of await graph.stream(
    { foo: "foo" },
    {
      streamMode: "updates",
      // Set subgraphs: true to stream outputs from subgraphs
      subgraphs: true,
    }
  )) {
    console.log(chunk);
  }
  ```

  ```
  [[], { node1: { foo: 'hi! foo' } }]
  [['node2:dfddc4ba-c3c5-6887-5012-a243b5b377c2'], { subgraphNode1: { bar: 'bar' } }]
  [['node2:dfddc4ba-c3c5-6887-5012-a243b5b377c2'], { subgraphNode2: { foo: 'hi! foobar' } }]
  [[], { node2: { foo: 'hi! foobar' } }]
  ```

  **Note** that we receive not just the node updates but also the namespaces, which tell us which graph (or subgraph) we are streaming from.
</Accordion>

<a id="debug" />

### Debug

Use the `debug` streaming mode to stream as much information as possible throughout the execution of the graph. The streamed outputs include the name of the node as well as the full state.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
for await (const chunk of await graph.stream(
  { topic: "ice cream" },
  { streamMode: "debug" }
)) {
  console.log(chunk);
}
```

### Multiple modes at once

You can pass an array as the `streamMode` parameter to stream multiple modes at once.

The streamed outputs will be tuples of `[mode, chunk]` where `mode` is the name of the stream mode and `chunk` is the data streamed by that mode.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
for await (const [mode, chunk] of await graph.stream(inputs, {
  streamMode: ["updates", "custom"],
})) {
  console.log(mode, chunk);
}
```

## Advanced

### Use with any LLM

You can use `streamMode: "custom"` to stream data from **any LLM API**—even if that API does **not** implement the LangChain chat model interface.

This lets you integrate raw LLM clients or external services that provide their own streaming interfaces, making LangGraph highly flexible for custom setups.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { StateGraph, GraphNode, StateSchema } from "@langchain/langgraph";
import * as z from "zod";

const State = new StateSchema({ topic: z.string(), result: z.string() });

const callArbitraryModel: GraphNode<typeof State> = async (state, config) => {
  // Example node that calls an arbitrary model and streams the output
  // Assume you have a streaming client that yields chunks
  // Generate LLM tokens using your custom streaming client
  for await (const chunk of yourCustomStreamingClient(state.topic)) {
    // Use the writer to send custom data to the stream
    config.writer?.({ custom_llm_chunk: chunk });
  }
  return { result: "completed" };
};

const graph = new StateGraph(State)
  .addNode("callArbitraryModel", callArbitraryModel)
  // Add other nodes and edges as needed
  .compile();

// Set streamMode: "custom" to receive the custom data in the stream
for await (const chunk of await graph.stream(
  { topic: "cats" },
  { streamMode: "custom" }
)) {
  // The chunk will contain the custom data streamed from the LLM
  console.log(chunk);
}
```

<Accordion title="Extended example: streaming arbitrary chat model">
  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  import { StateGraph, StateSchema, MessagesValue, GraphNode, START, LangGraphRunnableConfig } from "@langchain/langgraph";
  import { tool } from "@langchain/core/tools";
  import * as z from "zod";
  import OpenAI from "openai";

  const openaiClient = new OpenAI();
  const modelName = "gpt-5.4-mini";

  async function* streamTokens(modelName: string, messages: any[]) {
    const response = await openaiClient.chat.completions.create({
      messages,
      model: modelName,
      stream: true,
    });

    let role: string | null = null;
    for await (const chunk of response) {
      const delta = chunk.choices[0]?.delta;

      if (delta?.role) {
        role = delta.role;
      }

      if (delta?.content) {
        yield { role, content: delta.content };
      }
    }
  }

  // this is our tool
  const getItems = tool(
    async (input, config: LangGraphRunnableConfig) => {
      let response = "";
      for await (const msgChunk of streamTokens(
        modelName,
        [
          {
            role: "user",
            content: `Can you tell me what kind of items I might find in the following place: '${input.place}'. List at least 3 such items, separating them by commas, and include a brief description of each item.`,
          },
        ]
      )) {
        response += msgChunk.content;
        config.writer?.(msgChunk);
      }
      return response;
    },
    {
      name: "get_items",
      description: "Use this tool to list items one might find in a place you're asked about.",
      schema: z.object({
        place: z.string().describe("The place to look up items for."),
      }),
    }
  );

  const State = new StateSchema({
    messages: MessagesValue,
  });

  const callTool: GraphNode<typeof State> = async (state) => {
    const aiMessage = state.messages.at(-1);
    const toolCall = aiMessage.tool_calls?.at(-1);

    const functionName = toolCall?.function?.name;
    if (functionName !== "get_items") {
      throw new Error(`Tool ${functionName} not supported`);
    }

    const functionArguments = toolCall?.function?.arguments;
    const args = JSON.parse(functionArguments);

    const functionResponse = await getItems.invoke(args);
    const toolMessage = {
      tool_call_id: toolCall.id,
      role: "tool",
      name: functionName,
      content: functionResponse,
    };
    return { messages: [toolMessage] };
  };

  const graph = new StateGraph(State)
    // this is the tool-calling graph node
    .addNode("callTool", callTool)
    .addEdge(START, "callTool")
    .compile();
  ```

  Let's invoke the graph with an [`AIMessage`](https://reference.langchain.com/javascript/langchain-core/messages/AIMessage) that includes a tool call:

  ```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
  const inputs = {
    messages: [
      {
        content: null,
        role: "assistant",
        tool_calls: [
          {
            id: "1",
            function: {
              arguments: '{"place":"bedroom"}',
              name: "get_items",
            },
            type: "function",
          }
        ],
      }
    ]
  };

  for await (const chunk of await graph.stream(
    inputs,
    { streamMode: "custom" }
  )) {
    console.log(chunk.content + "|");
  }
  ```
</Accordion>

### Disable streaming for specific chat models

If your application mixes models that support streaming with those that do not, you may need to explicitly disable streaming for models that do not support it.

Set `streaming: false` when initializing the model.

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "o1-preview",
  // Set streaming: false to disable streaming for the chat model
  streaming: false,
});
```

<Note>
  Not all chat model integrations support the `streaming` parameter. If your model doesn't support it, use `disableStreaming: true` instead. This parameter is available on all chat models via the base class.
</Note>
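
A minimal sketch, assuming (per the note above) that the model accepts `disableStreaming` at construction time:

```typescript theme={"theme":{"light":"catppuccin-latte","dark":"catppuccin-mocha"}}
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-haiku-4-5-20251001",
  // Tokens from this model are not streamed; its output arrives in one piece
  disableStreaming: true,
});
```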

