The useStream hook from @langchain/react is UI-agnostic. It returns plain reactive state that you wire up to any component library you already use. This page shows how two popular libraries, AI Elements and assistant-ui, integrate with useStream to give you a fully-featured chat UI with minimal custom code.

How it works

useStream handles everything related to the LangGraph connection: streaming messages, thread management, tool call state, and reconnection. The library of your choice handles the visual layer. The integration is always the same shape:
import { useStream } from "@langchain/react";

function Chat() {
  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
  });

  return <YourUILibraryHere messages={stream.messages} onSubmit={stream.submit} />;
}
This separation means you can switch, replace, or compose UI libraries without touching any of the agent wiring.

AI Elements

AI Elements is a composable, shadcn/ui-based component library purpose-built for AI chat interfaces. Components like Conversation, Message, Tool, Reasoning, and PromptInput are designed to be dropped directly into any React project.
Clone and run the full AI Elements example to see tool call rendering, reasoning display, streaming messages, and more in a working project.

Installation

Install AI Elements components via the CLI. Components are added as source files directly into your project (shadcn/ui registry style):
bunx ai-elements@latest add conversation message prompt-input tool reasoning suggestion

Wiring useStream

Render AI Elements components directly from stream.messages. Each LangChain BaseMessage maps to a component:
import { useStream } from "@langchain/react";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

import {
  Conversation,
  ConversationContent,
  ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import { Message, MessageContent, MessageResponse } from "@/components/ai-elements/message";
import { Tool, ToolHeader, ToolContent, ToolInput, ToolOutput } from "@/components/ai-elements/tool";
import { Reasoning, ReasoningTrigger, ReasoningContent } from "@/components/ai-elements/reasoning";
import {
  PromptInput,
  PromptInputBody,
  PromptInputTextarea,
  PromptInputFooter,
  PromptInputSubmit,
} from "@/components/ai-elements/prompt-input";

export function Chat() {
  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
  });

  return (
    <div className="flex flex-col h-dvh">
      <Conversation className="flex-1">
        <ConversationContent>
          {stream.messages.map((msg, i) => {
            if (HumanMessage.isInstance(msg)) {
              return (
                <Message key={i} from="user">
                  <MessageContent>{msg.content as string}</MessageContent>
                </Message>
              );
            }
            if (AIMessage.isInstance(msg)) {
              return (
                <div key={i}>
                  {/* Render reasoning block when model emits thinking tokens */}
                  <Reasoning>
                    <ReasoningTrigger />
                    <ReasoningContent>{getReasoningText(msg)}</ReasoningContent>
                  </Reasoning>
                  {/* Render inline tool calls */}
                  {getToolCalls(msg).map((tc) => (
                    <Tool key={tc.id} defaultOpen>
                      <ToolHeader type={`tool-${tc.name}`} state={tc.state} />
                      <ToolContent>
                        <ToolInput input={tc.args} />
                        {tc.output && <ToolOutput output={tc.output} errorText={undefined} />}
                      </ToolContent>
                    </Tool>
                  ))}
                  {/* Render the text response */}
                  <Message from="assistant">
                    <MessageContent>
                      <MessageResponse>{getTextContent(msg)}</MessageResponse>
                    </MessageContent>
                  </Message>
                </div>
              );
            }
            return null;
          })}
        </ConversationContent>
        <ConversationScrollButton />
      </Conversation>

      <PromptInput onSubmit={({ text }) => stream.submit({ messages: [{ type: "human", content: text }] })}>
        <PromptInputBody>
          <PromptInputTextarea placeholder="Ask me something..." />
        </PromptInputBody>
        <PromptInputFooter>
          <PromptInputSubmit status={stream.isLoading ? "streaming" : "ready"} />
        </PromptInputFooter>
      </PromptInput>
    </div>
  );
}
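The example above assumes three helper functions that are not shown. Here is a minimal sketch of plausible implementations for the first two, using structural stand-in types for illustration (the real shapes come from @langchain/core/messages, and content-block type names such as "reasoning" vary by provider):

```typescript
// Structural stand-ins for LangChain message shapes (illustrative only).
type ContentBlock = { type: string; text?: string };
type MessageLike = { content: string | ContentBlock[] };

// Concatenate the plain-text blocks of a message (content may be a bare string).
export function getTextContent(msg: MessageLike): string {
  if (typeof msg.content === "string") return msg.content;
  return msg.content
    .filter((b) => b.type === "text" && typeof b.text === "string")
    .map((b) => b.text as string)
    .join("");
}

// Concatenate reasoning/thinking blocks; empty when the model emitted none.
export function getReasoningText(msg: MessageLike): string {
  if (typeof msg.content === "string") return "";
  return msg.content
    .filter(
      (b) =>
        (b.type === "reasoning" || b.type === "thinking") &&
        typeof b.text === "string",
    )
    .map((b) => b.text as string)
    .join("");
}
```

A `getToolCalls` helper would read the message's `tool_calls` array (`id`, `name`, `args`) and, to fill in `state` and `output`, pair each call with its matching `ToolMessage` by `tool_call_id`; that pairing is omitted here.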
AI Elements works with any React setup — Vite, Next.js, or Remix. Components ship as source files in your project, so you can customise them freely without forking a library.

assistant-ui

assistant-ui is a headless React UI framework for AI chat. It provides a full runtime layer — thread management, message branching, attachment handling — that connects to useStream via the useExternalStoreRuntime adapter.
Clone and run the full assistant-ui example to see a Claude-style chat interface wired to a LangChain agent with useExternalStoreRuntime.

Installation

bun add @assistant-ui/react @assistant-ui/react-markdown

Wiring useStream

The useExternalStoreRuntime adapter bridges stream.messages into the assistant-ui runtime. Pass it to AssistantRuntimeProvider and then use any assistant-ui thread component:
import { useCallback, useMemo } from "react";
import {
  AssistantRuntimeProvider,
  Thread,
  useExternalStoreRuntime,
  type AppendMessage,
  type ThreadMessageLike,
} from "@assistant-ui/react";
import { useStream } from "@langchain/react";

export function Chat() {
  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
  });

  const onNew = useCallback(
    async (message: AppendMessage) => {
      const text = message.content
        .filter((c) => c.type === "text")
        .map((c) => c.text)
        .join("");
      await stream.submit({ messages: [{ type: "human", content: text }] });
    },
    [stream],
  );

  // Convert LangChain messages to assistant-ui's ThreadMessageLike format
  const messages = useMemo(() => toThreadMessages(stream.messages), [stream.messages]);

  const runtime = useExternalStoreRuntime<ThreadMessageLike>({
    messages,
    onNew,
    onCancel: () => stream.stop(),
    convertMessage: (m) => m,
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
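The `toThreadMessages` helper referenced above is not shown. Here is a plausible sketch, assuming simple text-only messages with a serialized `type` field (a real converter would also map tool calls and multi-part content; the type names below are illustrative):

```typescript
// Illustrative input shape: serialized message objects with a role tag.
type SourceMessage = { type: string; content: string };

// assistant-ui's ThreadMessageLike accepts { role, content } objects like these.
type ThreadMessageSketch = {
  role: "user" | "assistant";
  content: { type: "text"; text: string }[];
};

// Map human/ai messages to user/assistant roles; drop everything else.
export function toThreadMessages(
  messages: SourceMessage[],
): ThreadMessageSketch[] {
  return messages
    .filter((m) => m.type === "human" || m.type === "ai")
    .map((m) => ({
      role: m.type === "human" ? ("user" as const) : ("assistant" as const),
      content: [{ type: "text" as const, text: m.content }],
    }));
}
```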
assistant-ui ships a full thread UI out of the box via <Thread />, including a message list, composer, and scroll management. Customise individual parts — messages, tool UIs, attachments — by overriding the component slots.

Choosing a library

Both libraries connect to useStream the same way. The choice depends on how much control you need:
|                   | AI Elements                              | assistant-ui                             |
| ----------------- | ---------------------------------------- | ---------------------------------------- |
| Style             | Composable components (shadcn/ui)        | Headless slots + default theme           |
| Customisation     | Edit source files directly               | Override component slots                 |
| Tool call UI      | Tool / ToolHeader / ToolOutput           | Custom tool UI via slots                 |
| Reasoning display | Reasoning / ReasoningContent             | Custom via message slots                 |
| Thread management | useStream + manual state                 | Built-in via AssistantRuntimeProvider    |
| Best for          | Projects with an existing design system  | Full-featured chat with minimal setup    |
Both work with any @langchain/react-compatible agent backend — createAgent, createDeepAgent, or any custom LangGraph graph.