Messages are the fundamental unit of context for models in LangChain. They represent the input and output of models, carrying both the content and metadata needed to represent the state of a conversation when interacting with an LLM.

Messages are objects that contain:
- **Role** - Identifies the message type (e.g. system, user)
- **Content** - The actual content of the message (text, images, audio, documents, etc.)
- **Metadata** - Optional fields such as response information, message IDs, and token usage
LangChain provides a standard message type that works across all model providers, ensuring consistent behavior regardless of the model being called.
The simplest way to use messages is to create message objects and pass them to a model when invoking.
```typescript
import { initChatModel, HumanMessage, SystemMessage } from "langchain";

const model = await initChatModel("gpt-5-nano");

const systemMsg = new SystemMessage("You are a helpful assistant.");
const humanMsg = new HumanMessage("Hello, how are you?");

const messages = [systemMsg, humanMsg];
const response = await model.invoke(messages); // Returns AIMessage
```
Alternatively, you can provide the full conversation history as a list of message objects, including prior model responses.
```typescript
import { SystemMessage, HumanMessage, AIMessage } from "langchain";

const messages = [
  new SystemMessage("You are a poetry expert"),
  new HumanMessage("Write a haiku about spring"),
  new AIMessage("Cherry blossoms bloom..."),
];
const response = await model.invoke(messages);
```
Use message prompts when:

- Managing multi-turn conversations
- Working with multimodal content (images, audio, files)
A SystemMessage represents an initial set of instructions that primes the model's behavior. You can use a system message to set the tone, define the model's role, and establish guidelines for responses.
Basic instructions
```typescript
import { SystemMessage, HumanMessage } from "langchain";

const systemMsg = new SystemMessage("You are a helpful coding assistant.");

const messages = [
  systemMsg,
  new HumanMessage("How do I create a REST API?"),
];
const response = await model.invoke(messages);
```
Detailed persona
```typescript
import { SystemMessage, HumanMessage } from "langchain";

const systemMsg = new SystemMessage(
  `You are a senior TypeScript developer with expertise in web frameworks.
Always provide code examples and explain your reasoning.
Be concise but thorough in your explanations.`
);

const messages = [
  systemMsg,
  new HumanMessage("How do I create a REST API?"),
];
const response = await model.invoke(messages);
```
An AIMessage represents the output of a model invocation. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.
AIMessage objects are returned when you invoke a model and contain all of the metadata associated with the response.

Providers weigh and contextualize message types differently, so it is sometimes helpful to manually create a new AIMessage object and insert it into the message history as if it came from the model.
```typescript
import { AIMessage, SystemMessage, HumanMessage } from "langchain";

const aiMsg = new AIMessage("I'd be happy to help you with that question!");

const messages = [
  new SystemMessage("You are a helpful assistant"),
  new HumanMessage("Can you help me?"),
  aiMsg, // Insert as if it came from the model
  new HumanMessage("Great! What's 2+2?"),
];
const response = await model.invoke(messages);
```
For models that support tool calling, AI messages can contain tool calls. Tool messages are used to pass the results of a single tool execution back to the model.

Tools can generate ToolMessage objects directly. Below, we show a simple example. Read more in the tools guide.
```typescript
import { AIMessage, HumanMessage, ToolMessage } from "langchain";

const aiMessage = new AIMessage({
  content: [],
  tool_calls: [
    {
      name: "get_weather",
      args: { location: "San Francisco" },
      id: "call_123",
    },
  ],
});

const toolMessage = new ToolMessage({
  content: "Sunny, 72°F",
  tool_call_id: "call_123",
});

const messages = [
  new HumanMessage("What's the weather in San Francisco?"),
  aiMessage, // Model's tool call
  toolMessage, // Tool execution result
];
const response = await model.invoke(messages); // Model processes the result
```
The artifact field stores supplementary data that won’t be sent to the model but can be accessed programmatically. This is useful for storing raw results, debugging information, or data for downstream processing without cluttering the model’s context.
Example: Using artifact for retrieval metadata
For example, a retrieval tool could fetch a passage from a document for the model to reference. While the message content contains the text the model will read, the artifact can carry document identifiers or other metadata that the application can use (e.g., to render a page). See the example below:
```typescript
import { ToolMessage } from "langchain";

// Artifact available downstream
const artifact = { document_id: "doc_123", page: 0 };

const toolMessage = new ToolMessage({
  content: "It was the best of times, it was the worst of times.",
  tool_call_id: "call_123",
  name: "search_books",
  artifact,
});
```
See the RAG tutorial for an end-to-end example of building retrieval agents with LangChain.
You can think of a message's content as the payload of data that gets sent to the model. Messages have a content attribute that is loosely typed, supporting strings and lists of untyped objects. This allows provider-native structures, such as multimodal content and other data, to be passed directly to LangChain chat models.

Separately, LangChain provides dedicated content types for text, reasoning, citations, multimodal data, server-side tool calls, and other message content. See content blocks below.

LangChain chat models accept message content in the content attribute, which may contain either:

- A string
- A list of content blocks in a provider-native format
LangChain provides a standard representation for message content that works across providers. Message objects implement a contentBlocks property that lazily parses the content attribute into a standard, type-safe representation. For example, messages generated by ChatAnthropic or ChatOpenAI include thinking or reasoning blocks in the format of the respective provider, but can be lazily parsed into a consistent ReasoningContentBlock representation:
See the integrations guides to get started with the inference provider of your choice.
Serializing standard content

If an application outside of LangChain needs access to the standard content block representation, you can opt in to storing content blocks in message content. To do this, set the LC_OUTPUT_VERSION environment variable to v1, or initialize any chat model with outputVersion: "v1":
```typescript
import { initChatModel } from "langchain";

const model = await initChatModel("gpt-5-nano", { outputVersion: "v1" });
```
Multimodality refers to the ability to work with data that comes in different forms, such as text, audio, images, and video. LangChain includes standard types for this data that can be used across providers.

Chat models can accept multimodal data as input and generate it as output. Below we show short examples of input messages featuring multimodal data.
Extra keys can be included at the top level of the content block or nested under "extras": { "key": value }. OpenAI and AWS Bedrock Converse, for example, require a filename for PDFs. See the provider page for your chosen model for specifics.
```typescript
import { HumanMessage } from "langchain";

// From URL
const urlMessage = new HumanMessage({
  content: [
    { type: "text", text: "Describe the content of this image." },
    {
      type: "image",
      source_type: "url",
      url: "https://example.com/path/to/image.jpg",
    },
  ],
});

// From base64 data
const base64Message = new HumanMessage({
  content: [
    { type: "text", text: "Describe the content of this image." },
    {
      type: "image",
      source_type: "base64",
      data: "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
    },
  ],
});

// From provider-managed file ID
const fileIdMessage = new HumanMessage({
  content: [
    { type: "text", text: "Describe the content of this image." },
    { type: "image", source_type: "id", id: "file-abc123" },
  ],
});
```
Not all models support all file types. Check the model provider’s reference for supported formats and size limits.
Content blocks, whether provided when creating a message or read from the contentBlocks field, are represented as a list of typed objects. Each item in the list must adhere to one of the following block types:
View the canonical type definitions in the API reference.
Content blocks were introduced as a new property on messages in LangChain v1 to standardize content formats across providers while maintaining backward compatibility with existing code.

Content blocks do not replace the content property; they are an additional property that exposes the content of a message in a standardized format.
Chat models accept a sequence of message objects as input and return an AIMessage as output. Interactions are often stateless, so a simple conversational loop involves invoking a model with a growing list of messages.

Refer to the guides below to learn more: