Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the current LangChain Python or LangChain JavaScript docs.

Overview

Messages are the fundamental unit of context for models in LangChain. They represent model inputs and outputs, carrying both the content and the metadata needed to capture the state of a conversation when interacting with an LLM. Messages are objects that contain:
  • Role - Identifies the message type (e.g. system, user)
  • Content - Represents the actual content of the message (like text, images, audio, documents, etc.)
  • Metadata - Optional fields such as response information, message IDs, and token usage
LangChain provides a standard message type that works across all model providers, ensuring consistent behavior regardless of the model being called.

Basic usage

The simplest way to use messages is to create message objects and pass them to a model when invoking.
import { initChatModel } from "langchain/chat_models";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const model = await initChatModel("openai:gpt-5-nano");

const systemMsg = new SystemMessage("You are a helpful assistant.");
const humanMsg = new HumanMessage("Hello, how are you?");

const messages = [systemMsg, humanMsg];
const response = await model.invoke(messages);  // Returns AIMessage

Text prompts

Text prompts are strings - ideal for straightforward generation tasks where you don’t need to retain conversation history.
const response = await model.invoke("Write a haiku about spring");
Use text prompts when:
  • You have a single, standalone request
  • You don’t need conversation history
  • You want minimal code complexity

Message prompts

Alternatively, you can pass the model a list of message objects.
import { SystemMessage, HumanMessage, AIMessage } from "@langchain/core/messages";

const messages = [
    new SystemMessage("You are a poetry expert"),
    new HumanMessage("Write a haiku about spring"),
    new AIMessage("Cherry blossoms bloom...")
]
const response = await model.invoke(messages);
Use message prompts when:
  • Managing multi-turn conversations
  • Working with multimodal content (images, audio, files)
  • Including system instructions

Dictionary format

You can also specify messages directly in OpenAI chat completions format.
const messages = [
    { role: "system", content: "You are a poetry expert" },
    { role: "user", content: "Write a haiku about spring" },
    { role: "assistant", content: "Cherry blossoms bloom..." }
]
const response = await model.invoke(messages);

Message types

System Message

A SystemMessage represents an initial set of instructions that primes the model's behavior. You can use a system message to set the tone, define the model's role, and establish guidelines for responses.
Basic instructions
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const systemMsg = new SystemMessage("You are a helpful coding assistant.");

const messages = [
    systemMsg,
    new HumanMessage("How do I create a REST API?")
]
const response = await model.invoke(messages);
Detailed persona
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const systemMsg = new SystemMessage(`
You are a senior TypeScript developer with expertise in web frameworks.
Always provide code examples and explain your reasoning.
Be concise but thorough in your explanations.
`);

const messages = [
    systemMsg,
    new HumanMessage("How do I create a REST API?")
]
const response = await model.invoke(messages);

Human Message

A HumanMessage represents user input and interactions. It can contain text, images, audio, files, and other multimodal content.

Text content

Message object
const humanMsg = new HumanMessage("What is machine learning?");
const response = await model.invoke([humanMsg]);
String shortcut
const response = await model.invoke("What is machine learning?");

Message metadata

const humanMsg = new HumanMessage({
    content: "Hello!",
    name: "alice",
    id: "msg_123"
});
Behavior of the name field varies by provider: some use it for user identification, while others ignore it. Check your model provider's reference to confirm.

AI Message

An AIMessage represents the output of a model invocation. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.
const response = await model.invoke("Explain AI");
console.log(response.constructor.name);  // AIMessage
AIMessage objects are returned from model invocations and contain all of the metadata associated with the response. However, model calls aren't the only place they can be created or modified. Because providers weight and contextualize message types differently, it is sometimes helpful to construct a new AIMessage yourself and insert it into the message history as if it came from the model.
import { AIMessage, SystemMessage, HumanMessage } from "@langchain/core/messages";

const aiMsg = new AIMessage("I'd be happy to help you with that question!");

const messages = [
    new SystemMessage("You are a helpful assistant"),
    new HumanMessage("Can you help me?"),
    aiMsg,  // Insert as if it came from the model
    new HumanMessage("Great! What's 2+2?")
]

const response = await model.invoke(messages);

AIMessage attributes

  • text (string) - The text content of the message.
  • content (string | ContentBlock[]) - The raw content of the message.
  • contentBlocks (ContentBlock.Standard[]) - The standardized content blocks of the message (see Content).
  • tool_calls (ToolCall[] | undefined) - The tool calls made by the model; empty if no tools are called.
  • id (string) - A unique identifier for the message, either generated automatically by LangChain or returned in the provider response.
  • usage_metadata (UsageMetadata | undefined) - Usage metadata for the message, which can contain token counts when available.
  • response_metadata (ResponseMetadata | undefined) - The response metadata of the message.
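For example, a quick sketch of reading these attributes off a response (which fields are populated depends on the provider):
const response = await model.invoke("What's the capital of France?");

console.log(response.text);               // Flattened text content
console.log(response.contentBlocks);      // Standardized content blocks
console.log(response.tool_calls);         // Tool calls, if any were made
console.log(response.usage_metadata);     // Token counts, when available
console.log(response.response_metadata);  // Provider-specific response details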

Tool calling responses

When models make tool calls, they’re included in the AI message:
// getWeather is a tool defined elsewhere in your application
const modelWithTools = model.bindTools([getWeather]);
const response = await modelWithTools.invoke("What's the weather in Paris?");

for (const toolCall of response.tool_calls) {
    console.log(`Tool: ${toolCall.name}`);
    console.log(`Args: ${JSON.stringify(toolCall.args)}`);
    console.log(`ID: ${toolCall.id}`);
}

Streaming and chunks

During streaming, you’ll receive AIMessageChunk objects that can be combined:
const chunks = [];
for await (const chunk of model.stream("Write a poem")) {
    chunks.push(chunk);
    console.log(chunk.text);
}
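To reconstruct the final message from the collected chunks, you can merge them with concat (a sketch; it assumes at least one chunk was received):
// Merge all chunks into a single combined message
const fullMessage = chunks.reduce((acc, chunk) => acc.concat(chunk));
console.log(fullMessage.text);  // The complete generated text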

Tool Message

For models that support tool calling, AI messages can contain tool calls. Tool messages are used to pass the results of a single tool execution back to the model.
import { AIMessage, ToolMessage, HumanMessage } from "@langchain/core/messages";

const aiMessage = new AIMessage({
    content: [],
    tool_calls: [{
        name: "get_weather",
        args: { location: "San Francisco" },
        id: "call_123"
    }]
});

const toolMessage = new ToolMessage({
    content: "Sunny, 72°F",
    tool_call_id: "call_123"
});

const messages = [
    new HumanMessage("What's the weather in San Francisco?"),
    aiMessage,  // Model's tool call
    toolMessage,  // Tool execution result
];

const response = await model.invoke(messages);  // Model processes the result

ToolMessage attributes

  • content (string, required) - The stringified output of the tool call.
  • tool_call_id (string, required) - The ID of the tool call this message responds to; it must match the ID of the tool call in the AI message.
  • name (string, required) - The name of the tool that was called.
  • artifact (object, optional) - Additional data that is not sent to the model but can be accessed programmatically.
The artifact field stores supplementary data that won’t be sent to the model but can be accessed programmatically. This is useful for storing raw results, debugging information, or data for downstream processing without cluttering the model’s context.
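For example, a sketch (the tool name and artifact fields are illustrative) that keeps raw search results out of the model's context while still making them available to your code:
const searchResult = new ToolMessage({
    // What the model sees
    content: "Found 2 matching documents.",
    tool_call_id: "call_456",
    name: "search_documents",
    // Kept out of the model's context, but accessible programmatically
    artifact: {
        rawResults: [{ id: 1, score: 0.92 }, { id: 2, score: 0.87 }],
        queryTimeMs: 142,
    },
});

console.log(searchResult.artifact.queryTimeMs);  // 142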

Content

You can think of a message's content as the actual payload of data that gets sent to the model. Within a message, you can provide content either as a string or as a list of content blocks.
import { HumanMessage } from "@langchain/core/messages";

// Content as a string
const stringMessage = new HumanMessage("Hello, how are you?");

// Content as a list of content blocks
const blockMessage = new HumanMessage({
    contentBlocks: [
        { type: "text", text: "Hello, how are you?" },
        { type: "image", url: "https://example.com/image.jpg" },
    ],
});
Each provider has its own opinionated format for representing message content. This makes it difficult to build applications that work across multiple AI providers, since you have to write custom code to handle each provider's format.
// OpenAI format
const openaiMessage = {
    role: "user",
    content: [
        { type: "text", text: "What's in this image?" },
        { type: "image_url", image_url: { url: "..." } }
    ]
};

// Anthropic format
const anthropicMessage = {
    role: "user",
    content: [
        { type: "text", text: "What's in this image?" },
        { type: "image", source: { type: "url", media_type: "image/jpeg", url: "..." } }
    ]
};
By default, AIMessage objects store the model's output in the .content field. If you want to access content in a way that doesn't change between providers, use the .contentBlocks field, which returns a list of content blocks that adhere to the standard content format.
Access content blocks
const message = await model.invoke("Why do parrots have different colors?");
const contentBlocks = message.contentBlocks;
Because each provider handles .content differently, you can also initialize a message with a list of content blocks. This will ensure that the message is always in the standard format, regardless of the provider.
const universalMessage = new HumanMessage({
    contentBlocks: [
        { type: "text", text: "What's in this image?" },
        { type: "image", url: "..." }
    ],
});

Examples

Multimodal message
const universalMessage = new HumanMessage({
    contentBlocks: [
        { type: "text", text: "Compare this image and document:" },
        { type: "image", url: "chart.jpg" },
        { type: "file", base64: pdfData, mimeType: "application/pdf" },
        { type: "text", text: "Which data source is more reliable?" }
    ],
});

const response = await model.invoke([universalMessage]);
PDF document analysis
import { readFileSync } from 'fs';
import { HumanMessage } from '@langchain/core/messages';

// Read and encode PDF
const pdfData = readFileSync("report.pdf");
const pdfBase64 = pdfData.toString('base64');

const message = new HumanMessage({
    contentBlocks: [
        { type: "text", text: "Summarize the key findings in this report" },
        { type: "file", base64: pdfBase64, mimeType: "application/pdf" }
    ]
});

const response = await model.invoke([message]);
Audio transcription
import { readFileSync } from 'fs';
import { HumanMessage } from '@langchain/core/messages';

// Read and encode audio file
const audioData = readFileSync("meeting.mp3");
const audioBase64 = audioData.toString('base64');

const message = new HumanMessage({
    contentBlocks: [
        { type: "text", text: "Transcribe this audio and identify the main topics discussed" },
        { type: "audio", base64: audioBase64, mimeType: "audio/mpeg" }
    ]
});

const response = await model.invoke([message]);
Video analysis
import { readFileSync } from 'fs';
import { HumanMessage } from '@langchain/core/messages';

// Read and encode video file
const videoData = readFileSync("demo.mp4");
const videoBase64 = videoData.toString('base64');

const message = new HumanMessage({
    contentBlocks: [
        { type: "text", text: "Describe what happens in this video" },
        { type: "video", base64: videoBase64, mimeType: "video/mp4" }
    ]
});

const response = await model.invoke([message]);
Not all models support all file types. Check the model provider’s reference for supported formats and size limits.

Content block reference

Content blocks are represented (both when creating a message and when accessing the contentBlocks field) as a list of typed objects. Each item in the list must adhere to one of the standard block types.
Each of these block types is individually addressable when importing the ContentBlock type.
import { ContentBlock } from "@langchain/core/messages";

// Text block
const textBlock: ContentBlock.Text = {
    type: "text",
    text: "Hello world",
}

// Image block
const imageBlock: ContentBlock.Multimodal.Image = {
    type: "image",
    url: "https://example.com/image.png",
    mimeType: "image/png",
}
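These types can also be used to narrow blocks when iterating over a message's content. A brief sketch using the text and image blocks above:
const message = await model.invoke("Why do parrots have different colors?");

for (const block of message.contentBlocks) {
    if (block.type === "text") {
        // Narrowed to ContentBlock.Text
        console.log(block.text);
    } else if (block.type === "image") {
        // Narrowed to ContentBlock.Multimodal.Image
        console.log(block.url);
    }
}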

Examples

Multi-turn conversations

Building conversational applications requires managing message history and context:
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

// Initialize conversation
const messages = [
    new SystemMessage("You are a helpful assistant specializing in Python programming")
]

// Simulate multi-turn conversation
while (true) {
    // getNextMessage() is a placeholder for however your app reads user input
    const userInput = await getNextMessage();
    if (userInput.toLowerCase() === "quit") {
        break;
    }

    // Add user message
    messages.push(new HumanMessage(userInput));
    
    // Get model response
    const response = await model.invoke(messages);
    
    // Add assistant response to history
    messages.push(response);
    
    console.log(`Assistant: ${response.content}`);
}

Message output versions: v0 vs v1

Content blocks were introduced as a new property on messages in LangChain v1 to standardize content formats across providers while maintaining backward compatibility with existing code. They are not a replacement for the content property, but an additional view on it: .contentBlocks is a computed property that produces standard blocks from .content. Because it is computed, content blocks are not included by default when a message is stringified or serialized. If you need serialized messages to carry standard content blocks (e.g. when passing messages to a client), set output_version to instruct the model to emit them directly:
import { ChatOpenAI } from "@langchain/openai";

// Default v0 (backward compatible)
const model = new ChatOpenAI({ model: "gpt-4o" });

// Explicit v1
const modelV1 = new ChatOpenAI({ model: "gpt-4o", output_version: "v1" });
When instructed, the model will output a message with .content as a list of standard content blocks (the same as if you were accessing .contentBlocks).
const model = await initChatModel("openai:gpt-5-nano", { output_version: "v1" });
const response = await model.invoke("Analyze the sales data and show your reasoning");

response.content  // List of ContentBlock objects
response.contentBlocks  // Same as above, but typed
Models can interpret both v0 and v1 content formats, which means that you can use the same model instance to invoke messages that might have different content formats.
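For example, the same model instance can accept both a provider-format message and a standard content block message (a sketch reusing the formats shown earlier):
// v0-style, provider-specific content
const v0Response = await model.invoke([
    { role: "user", content: [{ type: "text", text: "Hello!" }] },
]);

// v1-style, standard content blocks
const v1Response = await model.invoke([
    new HumanMessage({ contentBlocks: [{ type: "text", text: "Hello!" }] }),
]);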
| Aspect | v0 | v1 (Standard) |
| --- | --- | --- |
| Default behavior | Yes | Accessible through .contentBlocks |
| Serialization | content is raw model output | content is a list of ContentBlock objects |
| Content format | Provider-specific | Standardized ContentBlock types |
| Type safety | Limited | Fully typed |
| Multimodal support | Basic | Comprehensive |