Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
- Role - Identifies the message type (e.g. `system`, `user`)
- Content - Represents the actual content of the message (like text, images, audio, documents, etc.)
- Metadata - Optional fields such as response information, message IDs, and token usage
Basic usage
The simplest way to use messages is to create message objects and pass them to a model when invoking.
Text prompts
Text prompts are strings, ideal for straightforward generation tasks where you don't need to retain conversation history. Use a text prompt when (see the sketch after this list):
- You have a single, standalone request
- You don’t need conversation history
- You want minimal code complexity
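A minimal sketch, assuming an OpenAI-backed chat model initialized with `init_chat_model` (the model name is illustrative; any provider works):

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini")  # model name is illustrative

# A plain string is the simplest prompt
response = model.invoke("Write a haiku about spring.")
print(response.text)
```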
Message prompts
Alternatively, you can pass a list of message objects to the model. Use message prompts when (see the sketch after this list):
- Managing multi-turn conversations
- Working with multimodal content (images, audio, files)
- Including system instructions
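For example, a sketch reusing `model` from above (assuming the v1 `langchain.messages` import path; `langchain_core.messages` exposes the same classes):

```python
from langchain.messages import AIMessage, HumanMessage, SystemMessage

conversation = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hello, how are you?"),
    AIMessage("I'm doing well, thanks!"),
    HumanMessage("Can you tell me a joke?"),
]
response = model.invoke(conversation)
```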
Dictionary format
You can also specify messages directly in OpenAI chat completions format, i.e. dictionaries with `role` and `content` keys.
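A sketch; the dictionaries are coerced into message objects on invocation:

```python
response = model.invoke([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
```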
Message types
- System message - Tells the model how to behave and provides context for interactions
- Human message - Represents user input and interactions with the model
- AI message - Responses generated by the model, including text content, tool calls, and metadata
- Tool message - Represents the outputs of tool calls
System Message
A `SystemMessage` represents an initial set of instructions that primes the model's behavior. You can use a system message to set the tone, define the model's role, and establish guidelines for responses.
Basic instructions
Detailed persona
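A sketch of both styles, reusing `model` from above (the instructions themselves are illustrative):

```python
from langchain.messages import HumanMessage, SystemMessage

# Basic instructions
messages = [
    SystemMessage("You are a helpful coding assistant."),
    HumanMessage("How do I reverse a list in Python?"),
]

# Detailed persona
messages = [
    SystemMessage(
        "You are a senior Python developer. Answer concisely and "
        "prefer standard-library solutions."
    ),
    HumanMessage("How do I reverse a list in Python?"),
]
response = model.invoke(messages)
```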
Human Message
A `HumanMessage` represents user input and interactions. It can contain text, images, audio, files, and other multimodal content.
Text content
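A minimal sketch of a text-only human message, reusing `model` from above:

```python
from langchain.messages import HumanMessage

response = model.invoke([HumanMessage("What is machine learning?")])
```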
Message metadata
Add metadata
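A sketch of attaching optional metadata (the values are illustrative):

```python
from langchain.messages import HumanMessage

message = HumanMessage(
    content="Hello!",
    name="alice",   # optional user identifier; provider support varies (see note below)
    id="msg-123",   # optional unique message ID (illustrative value)
)
```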
The `name` field's behavior varies by provider: some use it for user identification, others ignore it. To check, refer to the model provider's reference.
AI Message
An `AIMessage` represents the output of a model invocation. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.
`AIMessage` objects are returned when you call the model and contain all of the associated metadata in the response. However, that isn't the only place they can be created or modified. Providers weight and contextualize message types differently, so it is sometimes helpful to create a new `AIMessage` object and insert it into the message history as if it came from the model, as in the sketch below.
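A sketch of manually constructing an `AIMessage` and inserting it into the history:

```python
from langchain.messages import AIMessage, HumanMessage

history = [
    HumanMessage("What's the weather like?"),
    # Constructed manually, inserted as if the model had produced it
    AIMessage("I don't have access to live weather data."),
    HumanMessage("Then make an educated guess for Paris in May."),
]
response = model.invoke(history)
```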
Attributes
- `text` - The text content of the message.
- `content` - The raw content of the message.
- `content_blocks` - The standardized content blocks of the message.
- `tool_calls` - The tool calls made by the model. Empty if no tools are called.
- `id` - A unique identifier for the message (either automatically generated by LangChain or returned in the provider response).
- `usage_metadata` - The usage metadata of the message, which can contain token counts when available.
- `response_metadata` - The response metadata of the message.
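For example, a sketch of accessing these attributes on a response (reusing `model` from above):

```python
response = model.invoke("What is the capital of France?")

print(response.text)               # text content
print(response.tool_calls)         # [] when no tools were called
print(response.id)                 # unique message identifier
print(response.usage_metadata)     # token counts, when the provider reports them
print(response.response_metadata)  # provider-specific response details
```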
Tool calling responses
When models make tool calls, they're included in the AI message:
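A sketch, assuming the `langchain.tools` re-export of the `@tool` decorator (`langchain_core.tools` also works); the tool itself is hypothetical:

```python
from langchain.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It's sunny in {city}."

model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("What's the weather in Paris?")

# Each tool call is a dict with the tool's name, arguments, and an ID
for tool_call in response.tool_calls:
    print(tool_call["name"], tool_call["args"], tool_call["id"])
```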
Streaming and chunks
During streaming, you'll receive `AIMessageChunk` objects that can be combined into a full message:
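For example, a sketch accumulating chunks with `+` (chunks support addition):

```python
full = None
for chunk in model.stream("Write a limerick about the sea."):
    full = chunk if full is None else full + chunk  # combine chunks as they arrive
print(full.text)
```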
Tool Message
For models that support tool calling, AI messages can contain tool calls. Tool messages are used to pass the results of a single tool execution back to the model.
Attributes
- `content` - The stringified output of the tool call.
- `tool_call_id` - The ID of the tool call that this message is responding to (this must match the ID of the tool call in the AI message).
- `name` - The name of the tool that was called.
- `artifact` - Additional data not sent to the model but that can be accessed programmatically.
The `artifact` field stores supplementary data that won't be sent to the model but can be accessed programmatically. This is useful for storing raw results, debugging information, or data for downstream processing without cluttering the model's context.
Example: Using artifact for raw data
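A sketch (the tool name, ID, and artifact contents are illustrative):

```python
from langchain.messages import ToolMessage

tool_message = ToolMessage(
    content="Found 2 matching documents.",  # sent to the model
    tool_call_id="call_abc123",             # must match the AI message's tool call ID
    name="search_documents",                # hypothetical tool name
    artifact={                              # kept out of the model's context
        "raw_results": [{"id": 1, "score": 0.92}, {"id": 7, "score": 0.81}],
        "query_time_ms": 42,
    },
)
```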
Content
You can think of a message's content as the payload of data that gets sent to the model. Messages have a `content` attribute that is loosely typed, supporting strings and lists of untyped objects (e.g., dictionaries). This allows provider-native structures, such as multimodal content and other data, to be used directly in LangChain chat models.
Separately, LangChain provides dedicated content types for text, reasoning, citations, multimodal data, server-side tool calls, and other message content. See content blocks below.
LangChain chat models accept message content in the `.content` attribute, which can contain:
- A string
- A list of content blocks in a provider-native format
- A list of LangChain’s standard content blocks
Specifying `content_blocks` when initializing a message will still populate the message's `content`, but provides a type-safe interface for doing so:
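A sketch, assuming the v1 message constructors accept `content_blocks` (the URL is a placeholder):

```python
from langchain.messages import HumanMessage

message = HumanMessage(content_blocks=[
    {"type": "text", "text": "Describe this image."},
    {"type": "image", "url": "https://example.com/photo.jpg"},
])
print(message.content)  # still populated, as noted above
```

Standard content blocks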
LangChain maintains a standard set of types for message content that works across providers (see the reference section below). Messages also implement a `content_blocks` property that lazily parses the `content` attribute into this standard, type-safe representation. For example, messages generated from `ChatAnthropic` or `ChatOpenAI` will include `thinking` or `reasoning` blocks in the respective provider's format, but these can be lazily parsed into a consistent `ReasoningContentBlock` representation:
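A sketch, assuming a reasoning-capable model initialized as above and the standard block keys:

```python
response = model.invoke("Why is the sky blue?")

for block in response.content_blocks:
    if block["type"] == "reasoning":
        print("Reasoning:", block.get("reasoning"))
    elif block["type"] == "text":
        print("Answer:", block["text"])
```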
Serializing standard content
If an application outside of LangChain needs access to the standard content block representation, you can opt in to storing content blocks in message content. To do this, set the `LC_OUTPUT_VERSION` environment variable to `v1`, or initialize any chat model with `output_version="v1"`:
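A sketch (the model name is illustrative):

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", output_version="v1")
response = model.invoke("Hello!")
# response.content now stores the standard content blocks directly
```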
Multimodal
Multimodality refers to the ability to work with data that comes in different forms, such as text, audio, images, and video. LangChain includes standard types for these data that can be used across providers. Chat models can accept multimodal data as input and generate it as output. Below we show short examples of input messages featuring multimodal data:
Not all models support all file types. Check the model provider's reference for supported formats and size limits.
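A sketch of two common cases, assuming the standard block keys shown in the reference below (the URL and file path are placeholders):

```python
import base64

from langchain.messages import HumanMessage

# Image referenced by URL
image_message = HumanMessage(content_blocks=[
    {"type": "text", "text": "Describe this image."},
    {"type": "image", "url": "https://example.com/photo.jpg"},
])

# PDF file passed inline as base64 data
with open("report.pdf", "rb") as f:  # hypothetical local file
    pdf_b64 = base64.b64encode(f.read()).decode()

file_message = HumanMessage(content_blocks=[
    {"type": "text", "text": "Summarize this document."},
    {"type": "file", "base64": pdf_b64, "mime_type": "application/pdf"},
])
response = model.invoke([file_message])
```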
Content block reference
Content blocks are represented (either when creating a message or when accessing the `content_blocks` property) as a list of typed dictionaries. Each item in the list must adhere to one of the following block types:
Core
TextContentBlock
Multimodal
ImageContentBlock
Purpose: Image data
- `type` - Always `"image"`
- `url` - URL pointing to the image location.
- `base64` - Base64-encoded image data.
- `file_id` - Reference ID to an externally stored image (e.g., in a provider's file system or in a bucket).
AudioContentBlock
Purpose: Audio data
- `type` - Always `"audio"`
- `url` - URL pointing to the audio location.
- `base64` - Base64-encoded audio data.
- `file_id` - Reference ID to an externally stored audio file (e.g., in a provider's file system or in a bucket).
VideoContentBlock
Purpose: Video data
- `type` - Always `"video"`
- `url` - URL pointing to the video location.
- `base64` - Base64-encoded video data.
- `file_id` - Reference ID to an externally stored video file (e.g., in a provider's file system or in a bucket).
FileContentBlock
Purpose: Generic files (PDF, etc.)
- `type` - Always `"file"`
- `url` - URL pointing to the file location.
- `base64` - Base64-encoded file data.
- `file_id` - Reference ID to an externally stored file (e.g., in a provider's file system or in a bucket).
Tool Calling
ToolCall
ToolCallChunk
Server-Side Tool Execution
WebSearchCall
WebSearchResult
CodeInterpreterCall
Provider-Specific Blocks
NonStandardContentBlock
Content blocks were introduced as a new property on messages in LangChain v1 to standardize content formats across providers while maintaining backward compatibility with existing code. Content blocks are not a replacement for the `content` property, but rather a new property that can be used to access the content of a message in a standardized format.