- a ready-made chat runtime instead of wiring `stream.messages` yourself
- a custom server endpoint that can add provider-specific behavior next to your deployed graph
- structured generative UI rendered from a constrained component registry
For CopilotKit-specific APIs, UI patterns, and runtime configuration, see the
CopilotKit docs.
How it works
At a high level, CopilotKit sits between your React app and the LangGraph deployment. The frontend sends conversation state to a custom `/api/copilotkit` route mounted alongside the graph API; that route forwards the request to LangGraph, and the response comes back with both assistant messages and any structured UI payloads your component registry can render.
- Deploy the graph as usual, using LangSmith or a LangGraph development server.
- Extend the deployment with an HTTP app that mounts a CopilotKit route next to the graph API.
- Wrap the frontend in `CopilotKit` and point it at that custom runtime URL.
- Register dynamic UI components and parse assistant responses into those components at render time.
Installation
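The package names below are assumptions based on CopilotKit's published packages; check the CopilotKit docs for the current install commands:

```shell
# Frontend: React provider and chat UI components (assumed package names)
npm install @copilotkit/react-core @copilotkit/react-ui

# Backend: the CopilotKit runtime used by the custom endpoint
npm install @copilotkit/runtime
```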
The backend endpoint additionally needs the CopilotKit runtime package.

Extend the LangGraph deployment with a custom endpoint
The key idea is that the LangGraph deployment does not only serve graphs: it can also load an HTTP app, which lets you mount extra routes next to the deployment itself. In `langgraph.json`, point `http.app` at your custom app entrypoint:
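For example, assuming the custom app lives at `src/app.ts` and exports `app` (the paths and graph id here are illustrative):

```json
{
  "node_version": "20",
  "graphs": {
    "agent": "./src/agent.ts:graph"
  },
  "http": {
    "app": "./src/app.ts:app"
  }
}
```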
app.ts
The route handler creates a `CopilotRuntime` and points it back at the deployed graph using `LangGraphAgent`:
copilotkit.ts
The agent then reads the forwarded `output_schema` and turns it into a structured `responseFormat` for the model:
agent.ts
This is why `useAgentContext({ description: "output_schema", ... })` is useful on the frontend: the CopilotKit runtime forwards the schema, and the agent turns it into the structured output contract the model must follow.
The result is a clean separation of concerns:
- LangGraph still owns graph execution and persistence
- CopilotKit owns the chat-facing runtime contract
- your custom endpoint glues them together inside one deployment
Structure the frontend app
On the frontend, wrap your app in `CopilotKit` and point it at the custom runtime URL:
- `runtimeUrl="/api/copilotkit"` sends the chat to your custom backend route rather than directly to the raw LangGraph API
- `useAgentContext(...)` sends the UI schema to the agent so the model knows what structured output format it should produce
Register the dynamic components
The component registry lives in `useChatKit()`. This is where you define the set of components the agent is allowed to emit, such as cards, rows, columns, charts, code blocks, and buttons.
Render assistant messages as dynamic UI
Once the assistant response arrives, the custom message renderer decides how to display it. In this example:
- assistant messages are parsed as structured JSON against the UI kit schema
- valid structured output is rendered as real React components
- user messages are rendered as ordinary chat bubbles
The division of labor:
- CopilotKit handles chat state and transport
- the custom renderer decides how assistant payloads become UI
- Hashbrown turns validated structured data into concrete React elements
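The parse-then-render step above can be sketched as a small classifier; the message shape is illustrative, and a real renderer would validate the payload against the component registry schema before building React elements:

```typescript
// Sketch: decide how to display a message by parsing it first
type Rendered =
  | { kind: "dynamic-ui"; tree: unknown }
  | { kind: "bubble"; text: string };

export function classifyMessage(message: {
  role: string;
  content: string;
}): Rendered {
  if (message.role === "assistant") {
    try {
      const payload = JSON.parse(message.content);
      // A real renderer would check payload against the UI kit schema here
      if (payload && typeof payload === "object") {
        return { kind: "dynamic-ui", tree: payload };
      }
    } catch {
      // not structured output; fall through to a plain bubble
    }
  }
  // user messages (and unparseable assistant text) stay ordinary chat bubbles
  return { kind: "bubble", text: message.content };
}
```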
Best practices
- Keep the custom endpoint thin: use it to adapt CopilotKit to your graph deployment, not to duplicate business logic already inside the graph
- Send the schema explicitly: `useAgentContext` should describe the UI contract every time the page mounts
- Register a constrained component set: expose only the components and props you actually want the model to use
- Treat rendering as a parsing step: parse assistant content against your schema before rendering it
- Keep user messages plain: only assistant messages need the structured renderer; user messages can stay normal chat bubbles

