Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
Messages are the fundamental unit of context for models in LangChain. They represent the input and output of models, carrying both the content and metadata needed to represent the state of a conversation when interacting with an LLM. Messages are objects that contain:
  • Role - Identifies the message type (e.g. system, user)
  • Content - Represents the actual content of the message (like text, images, audio, documents, etc.)
  • Metadata - Optional fields such as response information, message IDs, and token usage
LangChain provides a standard message type that works across all model providers, ensuring consistent behavior regardless of the model being called.

Basic usage

The simplest way to use messages is to create message objects and pass them to a model when invoking.
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage

model = init_chat_model("openai:gpt-5-nano")

system_msg = SystemMessage("You are a helpful assistant.")
human_msg = HumanMessage("Hello, how are you?")

# Use with chat models
messages = [system_msg, human_msg]
response = model.invoke(messages)  # Returns AIMessage

Text prompts

Text prompts are strings - ideal for straightforward generation tasks where you don’t need to retain conversation history.
response = model.invoke("Write a haiku about spring")
Use text prompts when:
  • You have a single, standalone request
  • You don’t need conversation history
  • You want minimal code complexity

Message prompts

Alternatively, you can provide the model with conversation history by passing it a list of message objects.
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage("You are a poetry expert"),
    HumanMessage("Write a haiku about spring"),
    AIMessage("Cherry blossoms bloom...")
]
response = model.invoke(messages)
Use message prompts when:
  • Managing multi-turn conversations
  • Working with multimodal content (images, audio, files)
  • Including system instructions

Dictionary format

You can also specify messages directly in the OpenAI chat completions format.
messages = [
    {"role": "system", "content": "You are a poetry expert"},
    {"role": "user", "content": "Write a haiku about spring"},
    {"role": "assistant", "content": "Cherry blossoms bloom..."}
]
response = model.invoke(messages)

Message types

  • System message - Tells the model how to behave and provides context for interactions
  • Human message - Represents user input and interactions with the model
  • AI message - Responses generated by the model, including text content, tool calls, and metadata
  • Tool message - Represents the outputs of tool calls

System Message

A SystemMessage represents an initial set of instructions that primes the model’s behavior. You can use a system message to set the tone, define the model’s role, and establish guidelines for responses.
Basic instructions
system_msg = SystemMessage("You are a helpful coding assistant.")

messages = [
    system_msg,
    HumanMessage("How do I create a REST API?")
]
response = model.invoke(messages)
Detailed persona
from langchain_core.messages import SystemMessage, HumanMessage

system_msg = SystemMessage("""
You are a senior Python developer with expertise in web frameworks.
Always provide code examples and explain your reasoning.
Be concise but thorough in your explanations.
""")

messages = [
    system_msg,
    HumanMessage("How do I create a REST API?")
]
response = model.invoke(messages)

Human Message

A HumanMessage represents user input and interactions. It can contain text, images, audio, files, and other multimodal content.

Text content

human_msg = HumanMessage("What is machine learning?")
response = model.invoke([human_msg])

Message metadata

Add metadata
human_msg = HumanMessage(
    content="Hello!",
    name="alice",  # Optional: identify different users
    id="msg_123",  # Optional: unique identifier for tracing
)
The behavior of the name field varies by provider: some use it for user identification, while others ignore it. Check the model provider’s reference to confirm.

AI Message

An AIMessage represents the output of a model invocation. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.
response = model.invoke("Explain AI")
print(type(response))  # <class 'langchain_core.messages.ai.AIMessage'>
An AIMessage is returned when you invoke a model and contains all of the metadata associated with the response. That is not the only place an AIMessage can come from, though: providers weight and contextualize message types differently, so it is sometimes helpful to create an AIMessage manually and insert it into the message history as if it came from the model.
from langchain_core.messages import AIMessage, SystemMessage, HumanMessage

# Create an AI message manually (e.g., for conversation history)
ai_msg = AIMessage("I'd be happy to help you with that question!")

# Add to conversation history
messages = [
    SystemMessage("You are a helpful assistant"),
    HumanMessage("Can you help me?"),
    ai_msg,  # Insert as if it came from the model
    HumanMessage("Great! What's 2+2?")
]

response = model.invoke(messages)
An AIMessage exposes the following key attributes:
  • text (string) - The text content of the message.
  • content (string | dict[]) - The raw content of the message.
  • content_blocks (ContentBlock[]) - The standardized content blocks of the message.
  • tool_calls (dict[] | None) - The tool calls made by the model. Empty if no tools are called.
  • id (string) - A unique identifier for the message, either automatically generated by LangChain or returned in the provider response.
  • usage_metadata (dict | None) - The usage metadata of the message, which can contain token counts when available.
  • response_metadata (ResponseMetadata | None) - The response metadata of the message.
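For example, here is a minimal sketch of reading these attributes off a model response like the ones above (exact values depend on the provider):
print(response.text)               # Plain text of the reply
print(response.tool_calls)         # Tool calls made by the model ([] if none)
print(response.usage_metadata)     # Token counts, when the provider reports them
print(response.response_metadata)  # Provider-specific response details
print(response.id)                 # Unique message identifier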

Tool calling responses

When the model makes tool calls, they are included in the AI message’s tool_calls attribute:
from pydantic import BaseModel, Field

# A tool schema assumed for this example
class GetWeather(BaseModel):
    """Get the current weather for a given location."""
    location: str = Field(description="The city to get the weather for")

model_with_tools = model.bind_tools([GetWeather])
response = model_with_tools.invoke("What's the weather in Paris?")

for tool_call in response.tool_calls:
    print(f"Tool: {tool_call['name']}")
    print(f"Args: {tool_call['args']}")
    print(f"ID: {tool_call['id']}")

Streaming and chunks

During streaming, you’ll receive AIMessageChunk objects that can be combined into a full message:
full_message = None
for chunk in model.stream("Hi"):
    print(chunk.text, end="")  # Print each piece of text as it arrives
    # Chunks can be added together to accumulate the full message
    full_message = chunk if full_message is None else full_message + chunk
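Once the stream finishes, the merged chunks behave like a complete message. As a minimal follow-up to the loop above:
# The accumulated chunk exposes the combined text of the stream
print(full_message.text)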

Tool Message

For models that support tool calling, AI messages can contain tool calls. Tool messages are used to pass the results of a single tool execution back to the model.
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# After a model makes a tool call
ai_message = AIMessage(
    content=[],
    tool_calls=[{
        "name": "get_weather",
        "args": {"location": "San Francisco"},
        "id": "call_123"
    }]
)

# Execute tool and create result message
weather_result = "Sunny, 72°F"
tool_message = ToolMessage(
    content=weather_result,
    tool_call_id="call_123"  # Must match the call ID
)

# Continue conversation
messages = [
    HumanMessage("What's the weather in San Francisco?"),
    ai_message,  # Model's tool call
    tool_message,  # Tool execution result
]
response = model.invoke(messages)  # Model processes the result
A ToolMessage has the following key attributes:
  • content (string, required) - The stringified output of the tool call.
  • tool_call_id (string, required) - The ID of the tool call that this message is responding to (must match the ID of the tool call in the AI message).
  • name (string, required) - The name of the tool that was called.
  • artifact (dict) - Additional data not sent to the model but accessible programmatically.
The artifact field stores supplementary data that won’t be sent to the model but can be accessed programmatically. This is useful for storing raw results, debugging information, or data for downstream processing without cluttering the model’s context.
from langchain_core.messages import ToolMessage
import json

# Tool execution returns structured data
raw_data = {"temperature": 72, "condition": "Sunny"}
tool_message = ToolMessage(
    content="The weather is sunny with a temperature of 72°F.",
    tool_call_id="call_123",
    name="get_weather",
    artifact={"raw_data": raw_data}  # Store structured data
)

# Later, access the raw data programmatically
weather_info = tool_message.artifact.get("raw_data")
print(f"Raw weather data: {json.dumps(weather_info)}")

Content

You can think of a message’s content as the payload of data that gets sent to the model. Messages have a content attribute that is loosely typed, supporting strings and lists of untyped objects (e.g., dictionaries). This allows provider-native structures, such as multimodal content and other data, to be used directly in LangChain chat models.
Separately, LangChain provides dedicated content types for text, reasoning, citations, multimodal data, server-side tool calls, and other message content. See content blocks below.
LangChain chat models accept message content in the .content attribute, which can contain:
  1. A string
  2. A list of content blocks in a provider-native format
  3. A list of LangChain’s standard content blocks
See below for an example using multimodal inputs:
from langchain_core.messages import HumanMessage

# String content
human_message = HumanMessage("Hello, how are you?")

# Provider-native format (e.g., OpenAI)
human_message = HumanMessage(content=[
    {"type": "text", "text": "Hello, how are you?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
])

# List of standard content blocks
human_message = HumanMessage(content_blocks=[
    {"type": "text", "text": "Hello, how are you?"},
    {"type": "image", "url": "https://example.com/image.jpg"},
])
Specifying content_blocks when initializing a message will still populate message content, but provides a type-safe interface for doing so.
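As a quick illustration (a minimal sketch; the exact shape of the populated content may vary by provider and message type):
from langchain_core.messages import HumanMessage

msg = HumanMessage(content_blocks=[
    {"type": "text", "text": "Hello, how are you?"},
])

print(msg.content_blocks)  # The standardized, typed representation
print(msg.content)         # The content attribute is populated as well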

Standard content blocks

LangChain maintains a standard set of types for message content that works across providers (see the reference section below). Messages also implement a content_blocks property that will lazily parse the content attribute into this standard, type-safe representation. For example, messages generated from ChatAnthropic or ChatOpenAI will include thinking or reasoning blocks in the format of the respective provider, but these can be lazily parsed into a consistent ReasoningContentBlock representation:
from langchain_core.messages import AIMessage

message = AIMessage(
    content=[
        {"type": "thinking", "thinking": "...", "signature": "WaUjzkyp..."},
        {"type": "text", "text": "..."},
    ],
    response_metadata={"model_provider": "anthropic"}
)
message.content_blocks
[{'type': 'reasoning',
  'reasoning': '...',
  'extras': {'signature': 'WaUjzkyp...'}},
 {'type': 'text', 'text': '...'}]
See the integrations guides to get started with the inference provider of your choice.
Serializing standard content
If an application outside of LangChain needs access to the standard content block representation, you can opt in to storing content blocks in message content. To do this, set the LC_OUTPUT_VERSION environment variable to v1, or initialize any chat model with output_version="v1":
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5-nano", output_version="v1")
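Alternatively, the environment variable mentioned above can be set before initializing the model (a minimal sketch):
import os

from langchain.chat_models import init_chat_model

# Opt in globally; equivalent to passing output_version="v1" to each model
os.environ["LC_OUTPUT_VERSION"] = "v1"

model = init_chat_model("openai:gpt-5-nano")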

Multimodal

Multimodality refers to the ability to work with data that comes in different forms, such as text, audio, images, and video. LangChain includes standard types for this data that work across providers. Chat models can accept multimodal data as input and generate it as output. Below are short examples of input messages featuring multimodal data:
# From URL
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this image."},
        {"type": "image", "url": "https://example.com/path/to/image.jpg"},
    ]
}

# From base64 data
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this image."},
        {
            "type": "image",
            "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
            "mime_type": "image/jpeg",
        },
    ]
}

# From provider-managed File ID
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this image."},
        {"type": "image", "file_id": "file-abc123"},
    ]
}
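Any of these messages can then be passed to the model like any other message (a minimal sketch, assuming the model supports image inputs):
response = model.invoke([message])
print(response.text)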
Not all models support all file types. Check the model provider’s reference for supported formats and size limits.

Content block reference

Content blocks are represented (either when creating a message or accessing the content_blocks property) as a list of typed dictionaries. Each item in the list must adhere to one of the following block types:
Text block
Purpose: Standard text output
  • type (string, required) - Always "text"
  • text (string, required) - The text content
  • annotations (object[]) - List of annotations for the text
  • extras (object) - Additional provider-specific data
Example:
{
    "type": "text",
    "text": "Hello world",
    "annotations": []
}

Reasoning block
Purpose: Model reasoning steps
  • type (string, required) - Always "reasoning"
  • reasoning (string) - The reasoning content
  • extras (object) - Additional provider-specific data
Example:
{
    "type": "reasoning",
    "reasoning": "The user is asking about...",
    "extras": {"signature": "abc123"},
}
Image block
Purpose: Image data
  • type (string, required) - Always "image"
  • url (string) - URL pointing to the image location.
  • base64 (string) - Base64-encoded image data.
  • id (string) - Reference ID to an externally stored image (e.g., in a provider’s file system or in a bucket).
  • mime_type (string) - Image MIME type (e.g., image/jpeg, image/png)

Audio block
Purpose: Audio data
  • type (string, required) - Always "audio"
  • url (string) - URL pointing to the audio location.
  • data (string) - Base64-encoded audio data.
  • id (string) - Reference ID to an externally stored audio file (e.g., in a provider’s file system or in a bucket).
  • mime_type (string) - Audio MIME type (e.g., audio/mpeg, audio/wav)

Video block
Purpose: Video data
  • type (string, required) - Always "video"
  • url (string) - URL pointing to the video location.
  • data (string) - Base64-encoded video data.
  • id (string) - Reference ID to an externally stored video file (e.g., in a provider’s file system or in a bucket).
  • mime_type (string) - Video MIME type (e.g., video/mp4, video/webm)

File block
Purpose: Generic files (PDF, etc.)
  • type (string, required) - Always "file"
  • url (string) - URL pointing to the file location.
  • data (string) - Base64-encoded file data.
  • id (string) - Reference ID to an externally stored file (e.g., in a provider’s file system or in a bucket).
  • mime_type (string) - File MIME type (e.g., application/pdf)
Plain-text document block
Purpose: Document text (.txt, .md)
  • type (string, required) - Always "text-plain"
  • text (string) - The text content
  • mime_type (string) - MIME type of the text (e.g., text/plain, text/markdown)
Tool call block
Purpose: Function calls
  • type (string, required) - Always "tool_call"
  • name (string, required) - Name of the tool to call
  • args (object, required) - Arguments to pass to the tool
  • id (string, required) - Unique identifier for this tool call
Example:
{
    "type": "tool_call",
    "name": "search",
    "args": {"query": "weather"},
    "id": "call_123"
}

Tool call chunk block
Purpose: Streaming tool fragments
  • type (string, required) - Always "tool_call_chunk"
  • name (string) - Name of the tool being called
  • args (string) - Partial tool arguments (may be incomplete JSON)
  • id (string) - Tool call identifier
  • index (number) - Position of this chunk in the stream

Invalid tool call block
Purpose: Malformed calls, intended to catch JSON parsing errors
  • type (string, required) - Always "invalid_tool_call"
  • name (string) - Name of the tool that failed to be called
  • args (string) - Raw arguments that failed to parse
  • error (string) - Description of what went wrong
Web search call block
Purpose: Built-in web search
  • type (string, required) - Always "web_search_call"
  • query (string) - The search query to execute

Web search result block
Purpose: Search results
  • type (string, required) - Always "web_search_result"
  • urls (string[]) - URLs of the search results
Returns: Top search results with associated URLs.

Code interpreter call block
Purpose: Code execution
  • type (string, required) - Always "code_interpreter_call"
  • language (string) - Programming language to execute (e.g. python, javascript, sql)
  • code (string) - Code to execute

Code interpreter result block
Purpose: Execution results
  • type (string, required) - Always "code_interpreter_result"
  • output (CodeInterpreterOutput[]) - Output from the code execution
Non-standard block
Purpose: Provider-specific escape hatch
  • type (string, required) - Always "non_standard"
  • value (object, required) - Provider-specific data structure
Usage: For experimental or provider-unique features
Additional provider-specific content types may be found within the reference documentation of each model provider.
Content blocks were introduced as a new property on messages in LangChain v1 to standardize content formats across providers while maintaining backward compatibility with existing code. They are not a replacement for the content property; content_blocks is an additional property that exposes a message’s content in a standardized format.
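Because each block is a typed dictionary, a common pattern is to branch on its type key. A minimal sketch, assuming response is an AIMessage returned by a model:
for block in response.content_blocks:
    if block["type"] == "text":
        print("Text:", block["text"])
    elif block["type"] == "reasoning":
        print("Reasoning:", block.get("reasoning"))
    elif block["type"] == "tool_call":
        print("Tool call:", block["name"], block["args"])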

Examples

Multi-turn conversations

Building conversational applications requires managing message history and context:
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize conversation
messages = [
    SystemMessage("You are a helpful assistant specializing in Python programming")
]

# Simulate multi-turn conversation
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break

    # Add user message
    messages.append(HumanMessage(user_input))

    # Get model response
    response = model.invoke(messages)

    # Add assistant response to history
    messages.append(response)

    print(f"Assistant: {response.content}")