The useStream React hook provides seamless integration with LangGraph streaming capabilities. It handles the complexities of streaming, state management, and branching logic, letting you focus on building great generative UI experiences.
Key features:
- Messages streaming — Handle a stream of message chunks to form a complete message
- Automatic state management — for messages, interrupts, loading states, and errors
- Conversation branching — Create alternate conversation paths from any point in the chat history
- UI-agnostic design — Bring your own components and styling
Installation
Install the LangGraph SDK to use the useStream hook in your React application:
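For example, with npm (`@langchain/core` provides the shared message types the hook works with):

```shell
npm install @langchain/langgraph-sdk @langchain/core
```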
Basic usage
The useStream hook connects to any LangGraph graph, whether it's running on your own endpoint or deployed using LangSmith deployments.
`useStream` parameters
- `assistantId`: The ID of the agent to connect to. When using LangSmith deployments, this must match the agent ID shown in your deployment dashboard. For custom API deployments or local development, this can be any string that your server uses to identify the agent.
- `apiUrl`: The URL of the LangGraph server. Defaults to `http://localhost:2024` for local development.
- `apiKey`: API key for authentication. Required when connecting to deployed agents on LangSmith.
- `threadId`: Connect to an existing thread instead of creating a new one. Useful for resuming conversations.
- `onThreadId`: Callback invoked when a new thread is created. Use this to persist the thread ID for later use.
- `reconnectOnMount`: Automatically resume an ongoing run when the component mounts. Set to `true` to use session storage, or provide a custom storage function.
- `onCreated`: Callback invoked when a new run is created. Useful for persisting run metadata for resumption.
- `onError`: Callback invoked when an error occurs during streaming.
- `onFinish`: Callback invoked when the stream completes successfully with the final state.
- `onCustomEvent`: Handle custom events emitted from your agent using the writer. See Custom streaming events.
- `onUpdateEvent`: Handle state update events after each graph step.
- `onMetadataEvent`: Handle metadata events with run and thread information.
- `messagesKey`: The key in the graph state that contains the messages array.
- Update batching: batch state updates for better rendering performance. Disable for immediate updates.
- `initialValues`: Initial state values to display while the first stream is loading. Useful for showing cached thread data immediately.
`useStream` return values
- `messages`: All messages in the current thread, including both human and AI messages.
- `values`: The current graph state values. The type is inferred from the agent or graph type parameter.
- `isLoading`: Whether a stream is currently in progress. Use this to show loading indicators.
- `error`: Any error that occurred during streaming; `null` when there is no error.
- `interrupt`: The current interrupt requiring user input, such as a human-in-the-loop approval request.
- A list of all tool calls across all messages, with their results and state (`pending`, `completed`, or `error`).
- `submit`: Submit new input to the agent. Pass `null` as the input when resuming from an interrupt with a command. Options include `checkpoint` for branching, `optimisticValues` for optimistic updates, and `threadId` for optimistic thread creation.
- `stop`: Stop the current stream immediately.
- `joinStream`: Resume an existing stream by run ID. Use with `onCreated` for manual stream resumption.
- `setBranch`: Switch to a different branch in the conversation history.
- `getToolCalls`: Get all tool calls for a specific AI message.
- `getMessagesMetadata`: Get metadata for a message, including streaming info like `langgraph_node` for identifying the source node, and `firstSeenState` for branching.
- `experimental_branchTree`: A tree representation of the thread, for advanced branching controls in non-message-based graphs.
Thread management
Keep track of conversations with built-in thread management. You can access the current thread ID and get notified when new threads are created. Persist the threadId to let users resume conversations after page refreshes:
Resume after page refresh
The useStream hook can automatically resume an ongoing run upon mounting by setting reconnectOnMount: true. This is useful for continuing a stream after a page refresh, ensuring no messages or events generated during the downtime are lost.
By default, the run to resume is tracked in window.sessionStorage, which can be swapped out by passing a custom storage function:
Alternatively, persist the run ID yourself via onCreated and call joinStream to resume:
Try the session persistence example
See a complete implementation of stream resumption with
reconnectOnMount and thread persistence in the session-persistence example.

Optimistic updates
You can optimistically update the client state before performing a network request, providing immediate feedback to the user via the optimisticValues option in submit.

Optimistic thread creation

Use the threadId option in submit to enable optimistic UI patterns where you need to know the thread ID before the thread is created:
Cached thread display
Use the initialValues option to display cached thread data immediately while the history is being loaded from the server:
Branching
Create alternate conversation paths by editing previous messages or regenerating AI responses. Use getMessagesMetadata() to access checkpoint information for branching:
You can also use the experimental_branchTree property to get a tree representation of the thread for non-message-based graphs.
Try the branching example
See a complete implementation of conversation branching with edit, regenerate, and branch switching in the
branching-chat example.

Type-safe streaming
The useStream hook supports full type inference when used with agents created via createAgent or graphs created with StateGraph. Pass typeof agent or typeof graph as the type parameter to automatically infer tool call types.
With createAgent
When using createAgent, tool call types are automatically inferred from the tools registered with your agent:
With StateGraph
For custom StateGraph applications, the state types are inferred from the graph’s annotation:
With Annotation types
If you're using LangGraph.js, you can reuse your graph's annotation types. Make sure to import only types, to avoid pulling the entire LangGraph.js runtime into your bundle.

Advanced type configuration
You can specify additional type parameters for interrupts, custom events, and configurable options.

Rendering tool calls

Use getToolCalls to extract and render tool calls from AI messages. Tool calls include the call details, the result (if completed), and the state:
Try the tool calling example
See a complete implementation of tool call rendering with weather, calculator, and note-taking tools in the
tool-calling-agent example.

Custom streaming events
Stream custom data from your agent using the writer in your tools or nodes. Handle these events in the UI with the onCustomEvent callback.
Try the custom streaming example
See a complete implementation of custom events with progress bars, status badges, and file operation cards in the
custom-streaming example.

Event handling
The useStream hook provides callback options that give you access to different types of streaming events. You don't need to explicitly configure stream modes; just pass callbacks for the event types you want to handle:
Available callbacks
| Callback | Description | Stream mode |
|---|---|---|
| `onUpdateEvent` | Called when a state update is received after each graph step | `updates` |
| `onCustomEvent` | Called when a custom event is received from your graph | `custom` |
| `onMetadataEvent` | Called with run and thread metadata | `metadata` |
| `onError` | Called when an error occurs | - |
| `onFinish` | Called when the stream completes | - |
Multi-agent streaming
When working with multi-agent systems or graphs with multiple nodes, use message metadata to identify which node generated each message. This is particularly useful when multiple LLMs run in parallel and you want to display their outputs with distinct visual styling.

Try the parallel research example
See a complete implementation of multi-agent streaming with three parallel researchers and distinct visual styling in the
parallel-research example.

Human-in-the-loop
Handle interrupts when the agent requires human approval for tool execution. Learn more in the How to handle interrupts guide.

Try the human-in-the-loop example
See a complete implementation of approval workflows with approve, reject, and edit actions in the
human-in-the-loop example.

Reasoning models
When using models with extended reasoning capabilities (like OpenAI's reasoning models or Anthropic's extended thinking), the thinking process is embedded in the message content. You'll need to extract and display it separately.
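For example, a small helper that splits content blocks into thinking and visible text. This is a sketch assuming Anthropic-style blocks with a `type` field; adjust the block shapes for your model provider:

```typescript
// Assumed content-block shapes; extend for tool_use blocks etc. as needed.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string };

// Separate a message's content into the reasoning trace and the answer text.
export function splitReasoning(content: string | ContentBlock[]): {
  thinking: string;
  text: string;
} {
  // Plain-string content has no embedded reasoning blocks.
  if (typeof content === "string") return { thinking: "", text: content };

  const thinking = content
    .filter((b): b is Extract<ContentBlock, { type: "thinking" }> => b.type === "thinking")
    .map((b) => b.thinking)
    .join("\n");

  const text = content
    .filter((b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text")
    .map((b) => b.text)
    .join("");

  return { thinking, text };
}
```

The UI can then render `thinking` in a collapsible panel and `text` as the normal message body.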
See a complete implementation of reasoning token display with OpenAI and Anthropic models in the
reasoning-agent example.

Custom state types
For custom LangGraph applications, embed your tool call types in your state's messages property.

Custom transport
For custom API endpoints or non-standard deployments, use the transport option with FetchStreamTransport to connect to any streaming API.
Related
- Streaming overview — Server-side streaming with LangChain agents
- useStream API Reference — Full API documentation
- Agent Chat UI — Pre-built chat interface for LangGraph agents
- Human-in-the-loop — Configuring interrupts for human review
- Multi-agent systems — Building agents with multiple LLMs