

LangChain agents are built on LangGraph, so they support the same Event Streaming model with agent-focused projections for messages, tool calls, state, and custom updates. For most application and frontend use cases, use Event Streaming through streamEvents(..., { version: "v3" }). Event Streaming returns a run object with typed projections, so you can choose the view you need instead of parsing stream-mode tuples.
Check out the streaming cookbook for runnable examples and links to detailed reference documentation.
Interested in streaming Pregel modes such as updates, messages, or custom directly? See the Streaming page.
The example below defines a simple tool-calling agent, streams text deltas from each model call, and then awaits the final state:
import { createAgent, tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  async ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather",
    description: "Get weather for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const agent = createAgent({
  model: "gpt-5-nano",
  tools: [getWeather],
});

const run = await agent.streamEvents(
  { messages: [{ role: "user", content: "What is the weather in SF?" }] },
  { version: "v3" }
);

for await (const message of run.messages) {
  for await (const delta of message.text) {
    process.stdout.write(delta);
  }
}

const finalState = await run.output;

What you can stream

| Projection | Use |
| --- | --- |
| for await (const event of run) | Raw protocol events when you need exact arrival order. |
| run.messages | Model message streams, one per LLM call. |
| message.text | Text deltas and final text for a message. |
| message.reasoning | Reasoning deltas for models that expose reasoning content. |
| message.toolCalls | Tool-call argument chunks and finalized tool calls. |
| message.output | Final message object after the model call completes. |
| message.usage | Token usage metadata when the provider returns it. |
| run.values | Agent state snapshots. |
| run.output | Final agent state. |
| run.toolCalls | Tool execution lifecycle, inputs, output deltas, final output, and errors. |
| run.extensions | Custom transformer projections. |

run.messages yields message streams. Each message stream exposes .text, .reasoning, .toolCalls, .output, and .usage. Async projections can be iterated for live deltas or awaited for final values.
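To make the "iterate or await" behavior concrete, here is a minimal illustrative sketch of a value that can be consumed either way. This is not LangChain's internal implementation; the `TextStream` class and its `then` method are hypothetical names used only to demonstrate the pattern the projections follow.

```typescript
// Sketch only: a stream that is both an async iterable of deltas and
// awaitable for the final joined text, mirroring how a projection like
// message.text behaves. Not LangChain internals.
class TextStream implements AsyncIterable<string> {
  private parts: string[] = [];

  constructor(private gen: AsyncGenerator<string>) {}

  // Iterating yields each delta as it arrives and records it.
  async *[Symbol.asyncIterator](): AsyncGenerator<string> {
    for await (const delta of this.gen) {
      this.parts.push(delta);
      yield delta;
    }
  }

  // Awaiting drains any remaining deltas and resolves to the full text.
  then<R>(resolve: (text: string) => R): Promise<R> {
    return (async () => {
      for await (const _ of this) {
        // drain remaining deltas
      }
      return this.parts.join("");
    })().then(resolve);
  }
}

async function demo(): Promise<void> {
  const source = (async function* () {
    yield "Hel";
    yield "lo";
  })();
  const stream = new TextStream(source);

  // Consume live deltas...
  for await (const delta of stream) {
    process.stdout.write(delta);
  }
  // ...then await the final value; already-seen deltas are reused.
  const full = await stream;
  console.log("\nfull:", full);
}

demo();
```

The key design point is that awaiting after iterating does not replay work: the deltas already consumed are kept, and `await` simply finishes whatever is left before resolving.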

Stream agent messages

Use run.messages when you want model output from each LLM call.
const run = await agent.streamEvents(input, { version: "v3" });

for await (const message of run.messages) {
  process.stdout.write(`[${message.node}] `);
  for await (const delta of message.text) {
    process.stdout.write(delta);
  }

  const fullMessage = await message.output;
  console.log(fullMessage.content);

  const usage = await message.usage;
  if (usage) {
    console.log(usage);
  }
}

Stream tool calls

There are two useful tool-call projections:
  • message.toolCalls streams tool-call argument chunks while the model is producing the tool call.
  • run.toolCalls streams the lifecycle of tool execution after the tool call starts.
const run = await agent.streamEvents(input, { version: "v3" });

await Promise.all([
  (async () => {
    for await (const message of run.messages) {
      for await (const chunk of message.toolCalls) {
        console.log("tool call chunk", chunk);
      }
    }
  })(),
  (async () => {
    for await (const call of run.toolCalls) {
      console.log(call.name, call.input);
      console.log(await call.output, await call.error);
    }
  })(),
]);

Stream state and final output

Use run.values for state snapshots and run.output for the final agent state.
const run = await agent.streamEvents(input, { version: "v3" });

for await (const snapshot of run.values) {
  console.log(snapshot);
}

const finalState = await run.output;