
Event streaming is the recommended in-process streaming model for most LangGraph application code. It returns a run stream object that can be consumed in multiple ways at the same time.

Quickstart

const stream = await graph.streamEvents(
  { messages: [{ role: "user", content: "What is 42 * 17?" }] },
  { version: "v3" }
);

for await (const message of stream.messages) {
  for await (const token of message.text) {
    process.stdout.write(token);
  }
}

const finalState = await stream.output;
To stream against a graph deployed behind an Agent Server, see the LangSmith Streaming API.

How the pieces fit together

The streaming stack has two main layers:
  1. Streaming emits raw graph execution events from the Pregel engine.
  2. Event streaming normalizes those events, runs them through stream transformers, and exposes typed projections.
Pregel engine (runs graph steps)
  → emits raw Pregel events: updates, values, messages, custom, checkpoints, tasks, debug
  → sent to the event router, which routes each event through the transformer pipeline
  → cascades through the stream transformers: ValuesTransformer, MessagesTransformer, custom transformers
  → produces the event stream: projected events for application code
The event router is the bridge between the two layers. It receives normalized Pregel events and passes each event through the registered stream transformers. Built-in transformers create standard projections such as stream.messages, stream.values, stream.subgraphs, and stream.output. Custom transformers can add application-specific projections under stream.extensions.

What event streaming provides

The run stream exposes typed projections over one underlying event flow:
Projection | Use
stream | Iterate every protocol event.
stream.messages | Stream chat model messages and token deltas.
stream.values | Iterate state snapshots and await the final value.
stream.output | Await the final output.
stream.subgraphs | Discover and observe nested graph executions.
stream.interrupts | Inspect human-in-the-loop interrupt payloads.
stream.interrupted | Check whether the run paused for human input.
stream.extensions | Consume custom stream transformer projections.
Multiple consumers can read these projections concurrently. Reading stream.messages does not consume events needed by stream.values, stream.subgraphs, or stream.output.

Event streaming sits one level above streaming, which exposes raw graph execution events through stream_mode values such as updates, values, messages, custom, checkpoints, tasks, and debug. Use streaming when you need low-level access to those modes; use event streaming when application code benefits from typed projections.

Stream messages

Use stream.messages for chat model output:
const stream = await graph.streamEvents(input, { version: "v3" });

for await (const message of stream.messages) {
  const text = await message.text;
  const usage = await message.usage;

  console.log(text);
  console.log(usage);
}
message.text is both an async iterable and a promise-like value. Iterate it for token-by-token output, or await it for the complete text.
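This dual behavior can be sketched as a thenable async iterable. The helper below is an illustrative pattern, not the library's actual implementation:

```typescript
// Sketch of a value that is both an async iterable (token-by-token)
// and promise-like (awaitable for the joined text). Illustrative only;
// the real message.text implementation may differ.
function thenableText(chunks: string[]) {
  return {
    // Token-by-token consumption: for await (const t of text) ...
    async *[Symbol.asyncIterator]() {
      for (const chunk of chunks) yield chunk;
    },
    // Awaitable: `await text` resolves to the joined full string,
    // because await calls .then() on any thenable.
    then<TResult>(
      onFulfilled?: ((full: string) => TResult | PromiseLike<TResult>) | null,
      onRejected?: ((reason: unknown) => TResult | PromiseLike<TResult>) | null
    ): Promise<TResult> {
      return Promise.resolve(chunks.join("")).then(onFulfilled, onRejected);
    },
  };
}
```

Either consumption style then works on the same value: iterate it for deltas, or await it once for the complete text.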

Stream subgraphs

Use stream.subgraphs to observe nested graph work without parsing namespace strings:
const stream = await graph.streamEvents(input, { version: "v3" });

for await (const subgraph of stream.subgraphs) {
  console.log(subgraph.name, subgraph.path);

  for await (const message of subgraph.messages) {
    console.log(await message.text);
  }
}
For product-specific streams, see Deep Agents streaming for subagent streams and LangChain agent streaming for tool calls and middleware events.

Stream state

Use stream.values to stream full state snapshots after each step:
const stream = await graph.streamEvents(input, { version: "v3" });

for await (const snapshot of stream.values) {
  console.log(snapshot);
}

const finalState = await stream.output;

Stream multiple projections

Run concurrent consumers when you need to read multiple projections at once:
await Promise.all([
  (async () => {
    for await (const message of stream.messages) {
      console.log(await message.text);
    }
  })(),
  (async () => {
    for await (const subgraph of stream.subgraphs) {
      console.log(subgraph.path);
    }
  })(),
]);

Resume after an interrupt

When a graph pauses for human input, inspect stream.interrupted and stream.interrupts, then resume by calling streamEvents(..., { version: "v3" }) again with a Command as the input. Resuming requires a graph compiled with a checkpointer and a config carrying a thread ID; see persistence.
import { Command } from "@langchain/langgraph";

// Resuming needs a checkpointer-compiled graph plus a thread ID so the
// second call targets the same run; "thread-1" is an illustrative value.
const config = { version: "v3", configurable: { thread_id: "thread-1" } };

let stream = await graph.streamEvents(input, config);

for await (const message of stream.messages) {
  console.log(await message.text);
}

if (stream.interrupted) {
  console.log(stream.interrupts);
}

// Resume on the same thread with a Command payload.
stream = await graph.streamEvents(
  new Command({ resume: { decisions: [{ type: "approve" }] } }),
  config
);
const finalState = await stream.output;

Stream all protocol events

Iterate the run stream object itself when you want the raw protocol event stream:
const stream = await graph.streamEvents(
  { messages: [{ role: "user", content: "What is 42 * 17?" }] },
  { version: "v3" }
);

for await (const event of stream) {
  const namespace = event.params.namespace;
  console.log(namespace, event.method, event.params.data);
}
Each event is a ProtocolEvent envelope wrapping a channel-specific payload. The same shape is what a transformer’s process(event) receives.
interface ProtocolEvent {
  readonly seq: number;         // strictly increasing within a run; use for ordering
  readonly method: string;      // channel name: "messages", "values", "updates", "custom", "tools", "lifecycle", ...
  readonly params: {
    readonly namespace: string[];  // path of "<name>:<runtime_id>" segments from the root graph; [] is the root
    readonly timestamp: number;    // wall-clock milliseconds; can drift, don't rely on for ordering
    readonly node?: string;        // graph node that emitted this event, when applicable
    readonly data: unknown;        // channel-specific payload; shape depends on `method`
  };
}
The namespace is a path from the root graph to the scope that emitted the event. The root is the empty array []. Each child execution adds one "name:runtime_id" segment, so a nested tool call inside a subgraph looks like ["researcher:6f4d", "tools:91ac"]. The name before : is the stable graph or node name; the suffix is a per-invocation runtime ID. Filter raw events by namespace yourself when you only care about a specific subtree — stream.subgraphs already does this for nested graph executions.
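Because runtime IDs change per invocation, filtering usually compares only the stable name in each segment. A small helper for that (hypothetical, not a library export):

```typescript
// Check whether a namespace path falls under a subtree identified by
// stable graph/node names, ignoring per-invocation runtime IDs.
// Hypothetical helper, not part of the library.
function underSubtree(namespace: string[], stableNames: string[]): boolean {
  if (namespace.length < stableNames.length) return false;
  return stableNames.every(
    (name, i) => namespace[i].split(":")[0] === name
  );
}
```

With this, `underSubtree(event.params.namespace, ["researcher"])` keeps only events from the researcher subgraph and everything nested under it.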

Channels and event lifecycle

Raw events flow on channels. The channel name appears as the event’s method; each channel emits a specific event shape.
Channel | Purpose
values | Full graph state snapshots.
updates | Per-node state deltas.
messages | Content-block-centric chat model output.
tools | Tool call start, streamed output, finish, and error events.
lifecycle | Run, subgraph, and subagent status changes.
checkpoints | Lightweight checkpoint envelopes for branching and time travel.
input | Human-in-the-loop input requests and responses.
tasks | Pregel task creation and result events.
custom | User-defined payloads from graph code.
custom:<name> | Application-defined stream transformer output.
The typed projections (stream.messages, stream.values, etc.) are built from these channels.

Messages

The messages channel models output as content blocks. The data’s event field is one of:
  • message-start
  • content-block-start
  • content-block-delta
  • content-block-finish
  • message-finish
Content blocks have explicit boundaries: a block starts, emits zero or more deltas, and finishes before the next block in the same message starts. This makes token streaming, reasoning blocks, tool-call blocks, and multimodal content explicit without requiring provider-specific formats. message-finish may include token usage; unrecoverable model-call failures arrive as message error events.

To consume raw content-block events directly instead of using the stream.messages projection:
for await (const event of stream) {
  if (event.method !== "messages") continue;

  const data = event.params.data;
  if (data.event !== "content-block-delta") continue;

  const block = data.delta ?? {};
  if (block.type === "text-delta") {
    process.stdout.write(block.text ?? "");
  } else if (block.type === "reasoning-delta") {
    process.stdout.write(`[thinking]${block.reasoning ?? ""}`);
  }
}

Tools

The tools channel exposes tool execution. The data’s event field is one of:
  • tool-started
  • tool-output-delta
  • tool-finished
  • tool-error
Tool events are correlated by tool call ID, so a tool execution can be joined back to its originating tool-call content block on the messages channel.
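That correlation can be done by grouping raw tools-channel payloads on the ID. The field names below (`tool_call_id`, `tool_name`) are assumptions about the payload shape, not confirmed by the protocol spec:

```typescript
// Group raw tools-channel payloads by tool call ID so each tool run's
// started / output-delta / finished events can be replayed together.
// Field names are assumed for illustration.
interface ToolEventData {
  event: string;          // "tool-started" | "tool-output-delta" | ...
  tool_call_id: string;   // joins back to the tool-call content block
  tool_name?: string;
}

function groupToolRuns(
  events: ToolEventData[]
): Map<string, ToolEventData[]> {
  const runs = new Map<string, ToolEventData[]>();
  for (const e of events) {
    const run = runs.get(e.tool_call_id) ?? [];
    run.push(e);
    runs.set(e.tool_call_id, run);
  }
  return runs;
}
```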

Lifecycle

The lifecycle channel tracks root run, subgraph, and subagent status. The data’s event field is one of:
  • started
  • running
  • completed
  • failed
  • interrupted
Beyond event, lifecycle data may include an optional graph_name, error, and cause describing why a child scope started (parent tool call, fan-out send, edge transition).
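One common use of this channel is keeping the latest status per scope. A sketch, using the fields listed above (graph_name is optional, so the root scope falls back to a placeholder key):

```typescript
// Fold lifecycle payloads into a map of scope name -> latest status.
// The payload shape is assumed from the fields documented above.
type LifecycleStatus =
  | "started" | "running" | "completed" | "failed" | "interrupted";

interface LifecycleData {
  event: LifecycleStatus;
  graph_name?: string;
}

function latestStatus(
  events: LifecycleData[]
): Map<string, LifecycleStatus> {
  const status = new Map<string, LifecycleStatus>();
  for (const e of events) {
    // Later events overwrite earlier ones for the same scope.
    status.set(e.graph_name ?? "<root>", e.event);
  }
  return status;
}
```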

Build your own projection

Stream transformers are the projection layer in event streaming. They observe protocol events, keep their own state, and expose derived views of a run: tool activity, token totals, progress events, artifacts, or messages for another protocol. StreamChannel is the projection primitive transformers use to publish those views.

Built-in projections (stream.messages, stream.values, stream.subgraphs, stream.output) and product-specific projections (LangChain’s stream.tool_calls, Deep Agents’ stream.subagents) are themselves transformers using this same contract. User transformers stack on top via compile-time or call-time registration, and their projections appear under stream.extensions. Write one when the existing projections don’t match the shape an application needs.

How transformers work

Event streaming starts with the raw streaming output of the LangGraph Pregel engine. The runtime normalizes those chunks into protocol events, then a stream handler routes each event through a stack of stream transformers. The stream handler is the central dispatcher for one stream (it plays the event router role in the architecture above). For every protocol event, it:
  1. Calls each registered transformer’s process(event) hook in order.
  2. Wires named StreamChannel pushes back onto the protocol event stream.
  3. Stores the event in the run stream unless a transformer suppresses it.
  4. Calls finalize() or fail() on every transformer when the run ends.
Transformers are observational. They do not call back into the graph runtime. Instead, they consume events and push derived values into StreamChannel, promises, or other projection objects.
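The dispatch loop can be sketched roughly like this. It is a simplification covering steps 1 and 3 above, not the handler's actual source:

```typescript
// Minimal sketch of dispatch: call every transformer's process(event),
// then keep the event unless some transformer suppressed it by
// returning false. Illustrative only.
interface MiniTransformer {
  process(event: unknown): boolean;
}

function route(
  event: unknown,
  transformers: MiniTransformer[],
  kept: unknown[]
): void {
  let keep = true;
  for (const t of transformers) {
    // Every transformer sees every event, even a suppressed one.
    if (!t.process(event)) keep = false;
  }
  if (keep) kept.push(event);
}
```

Note that suppression only affects storage in the run stream; it never short-circuits the other transformers.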

Transformer shape

A transformer implements the StreamTransformer interface:
interface StreamTransformer<TProjection = unknown> {
  init(): TProjection;
  process(event: ProtocolEvent): boolean;
  finalize?(): void | PromiseLike<void>;
  fail?(err: unknown): void;
}
  • init() creates the projection object. User transformer projections appear under stream.extensions.
  • process() observes each protocol event. See Stream all protocol events for the ProtocolEvent shape. Return false only when you intentionally want to suppress the original event.
  • finalize() closes or resolves non-channel projections after a successful stream.
  • fail() propagates errors to non-channel projections.
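A minimal transformer against this interface: it counts protocol events and resolves the total when finalize() runs. The projection key eventCount is illustrative; after registration it would surface as stream.extensions.eventCount:

```typescript
// Simplified ProtocolEvent, matching the shape documented above.
interface ProtocolEvent {
  seq: number;
  method: string;
  params: { namespace: string[]; timestamp: number; data: unknown };
}

// Counts protocol events and exposes the total as a promise that is
// resolved in finalize(). Illustrative, not a built-in transformer.
function eventCountTransformer() {
  let count = 0;
  let resolveCount!: (n: number) => void;
  const total = new Promise<number>((resolve) => (resolveCount = resolve));

  return {
    init: () => ({ eventCount: total }),
    process(_event: ProtocolEvent): boolean {
      count += 1;
      return true; // never suppress the original event
    },
    finalize: () => resolveCount(count),
  };
}
```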

Declaring required stream modes

required_stream_modes controls which Pregel stream modes the underlying graph emits during the stream. The runtime takes the union of every registered transformer’s required_stream_modes and passes that union as the stream_mode argument to the graph’s .stream() call. Modes that no transformer requests are never emitted; declaring ["custom"] is what causes custom events to flow through the run at all.

Two consequences follow:
  • process() receives every event the graph emits and is responsible for filtering by event.method. The declaration turns on upstream emission; it does not narrow what process() sees.
  • Each transformer must declare every mode it acts on. An omitted mode is never emitted by the graph, so it never reaches process().

Valid values are the Pregel stream modes: "messages", "tools", "custom", "values", "updates", "checkpoints", "tasks", "debug".
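Assuming the declaration lives as a property on the transformer object (the exact attachment point is not shown in this doc), a transformer that acts on custom events would look roughly like this:

```typescript
// Sketch: a transformer declaring the modes it needs. Without "custom"
// in required_stream_modes, the graph would never emit custom events.
// The property placement is an assumption for illustration.
function customCollector() {
  const seen: unknown[] = [];
  return {
    required_stream_modes: ["custom"] as const,
    init: () => ({ customSeen: seen }),
    process(event: { method: string; params: { data: unknown } }): boolean {
      // process() still sees every emitted event; filter by method here.
      if (event.method === "custom") seen.push(event.params.data);
      return true;
    },
  };
}
```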

StreamChannel

StreamChannel is the projection primitive a transformer uses for streaming values. It always exposes an iterable stream on stream.extensions.<name>. The constructor argument decides whether each push() also flows into the run’s main event stream as a custom:<name> event—that is, whether the projection’s values show up when iterating raw protocol events.
Need | Use
Side-channel projection only | new StreamChannel<T>()
Also flow each push into the main event stream | new StreamChannel<T>(name)
Named channel payloads must be serializable, because each pushed value also becomes a custom:<name> protocol event in the main stream. Keep promises, async iterables, class instances, and other in-process handles in unnamed channels.

The stream handler owns channel lifecycle: once init() returns a channel, the handler closes or fails it for you when the run ends. Transformers only push values.

Example: named channel

Pass a string name to StreamChannel to expose a streaming projection through stream.extensions and forward each pushed value into the run’s main event stream as a custom:<name> protocol event:
import { StreamChannel } from "@langchain/langgraph";

const toolActivityTransformer = () => {
  const activity = new StreamChannel<{
    name: string;
    status: "started" | "finished" | "error";
  }>("toolActivity");

  return {
    init: () => ({ toolActivity: activity }),
    process(event) {
      if (event.method === "tools") {
        const data = event.params.data as { tool_name?: string; event?: string };
        if (data.tool_name && data.event) {
          // Map lifecycle tool events onto the status union; ignore
          // tool-output-delta chunks, which carry no status change.
          const status =
            data.event === "tool-started"
              ? "started"
              : data.event === "tool-finished"
                ? "finished"
                : data.event === "tool-error"
                  ? "error"
                  : null;
          if (status) {
            activity.push({ name: data.tool_name, status });
          }
        }
      }
      return true;
    },
  };
};

Example: unnamed channel

Without a name, the channel is a side-channel projection only — accessible on stream.extensions but not visible to consumers iterating raw events. This is the right choice for projections that hold in-process handles (promises, async iterables, class instances) that can’t be serialized onto the main event stream. The example below pairs an unnamed channel with get_stream_writer, which lets graph nodes emit custom-channel events that the transformer then drains into the projection:
import { StreamChannel } from "@langchain/langgraph";

const customTransformer = () => {
  const custom = new StreamChannel<unknown>();

  return {
    init: () => ({ custom }),
    process(event) {
      if (event.method === "custom") {
        custom.push(event.params.data);
      }
      return true;
    },
  };
};

Example: final-value projection

Use unnamed streams, promises, or other in-process objects when the projection should not flow into the main event stream:
const statsTransformer = () => {
  let totalTokens = 0;
  let resolveTotal!: (value: number) => void;
  const totalTokensPromise = new Promise<number>((resolve) => {
    resolveTotal = resolve;
  });

  return {
    init: () => ({ totalTokens: totalTokensPromise }),
    process(event) {
      if (event.method === "messages") {
        const data = event.params.data as { usage?: { output_tokens?: number } };
        totalTokens += data.usage?.output_tokens ?? 0;
      }
      return true;
    },
    finalize: () => resolveTotal(totalTokens),
  };
};

Register at call time or compile time

Pass transformers at call time for local experimentation:
const stream = await graph.streamEvents(input, {
  version: "v3",
  transformers: [statsTransformer, toolActivityTransformer],
});
Compile transformers into the graph when every run of that graph should produce the projection:
const graph = builder.compile({
  transformers: [statsTransformer, toolActivityTransformer],
});

Related documentation

LangGraph defines the streaming primitives; for streaming with LangChain or Deep Agents, review the relevant product docs. The wire-level event and command formats are defined in the Agent Protocol repository and are consumable as langchain-protocol on PyPI and @langchain/protocol on npm.
LangGraph defines the streaming primitives. For using streaming with LangChain or Deep Agents, review the relevant product docs: The wire-level event and command formats are defined in the Agent Protocol repository and consumable as langchain-protocol on PyPI and @langchain/protocol on npm.