This will help you get started with Qwen chat models. For detailed documentation of all ChatQwen features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | Downloads | Version |
| --- | --- | --- | --- | --- | --- |
| ChatQwen | langchain-qwq | ❌ | beta | PyPI - Downloads | PyPI - Version |

Model features

Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs

Setup

To access Qwen models you’ll need to create an Alibaba Cloud account, get an API key, and install the langchain-qwq integration package.

Credentials

Head to Alibaba’s API Key page to sign up for Alibaba Cloud and generate an API key. Once you’ve done this, set the DASHSCOPE_API_KEY environment variable:
import getpass
import os

if not os.getenv("DASHSCOPE_API_KEY"):
    os.environ["DASHSCOPE_API_KEY"] = getpass.getpass("Enter your Dashscope API key: ")

Installation

The LangChain Qwen integration lives in the langchain-qwq package:
%pip install -qU langchain-qwq

Instantiation

Now we can instantiate our model object and generate chat completions:
from langchain_qwq import ChatQwen

llm = ChatQwen(
    model="qwen-flash",
    max_tokens=3_000,
    timeout=None,
    max_retries=2,
    # other params...
)

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French."
        "Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--40f2e75b-7d28-4a71-8f5f-561509ac2010-0', usage_metadata={'input_tokens': 32, 'output_tokens': 8, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})

Chaining

We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates"
            "{input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
AIMessage(content='Ich liebe Programmierung.', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--ed052510-82f4-4dc5-a7f1-761a792c0dd4-0', usage_metadata={'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33, 'input_token_details': {}, 'output_token_details': {}})
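
Because ChatQwen implements native async (see the model features above), the same chain can also be awaited. A minimal sketch, assuming you are inside an async context such as a notebook cell:

# Asynchronously invoke the chain with the same inputs.
result = await chain.ainvoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
print(result.content)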

Tool Calling

ChatQwen supports a tool-calling API that lets you describe tools and their arguments and have the model return a JSON object with a tool to invoke and the inputs to that tool.

Use with bind_tools

from langchain_core.tools import tool
from langchain_qwq import ChatQwen


@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int


llm = ChatQwen(model="qwen-flash")

llm_with_tools = llm.bind_tools([multiply])

msg = llm_with_tools.invoke("What's 5 times forty two")

print(msg)
content='' additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_f0c2cc49307f480db78a45', 'function': {'arguments': '{"first_int": 5, "second_int": 42}', 'name': 'multiply'}, 'type': 'function'}]} response_metadata={'finish_reason': 'tool_calls', 'model_name': 'qwen-flash'} id='run--27c5aafb-9710-42f5-ab78-5a2ad1d9050e-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_f0c2cc49307f480db78a45', 'type': 'tool_call'}] usage_metadata={'input_tokens': 166, 'output_tokens': 27, 'total_tokens': 193, 'input_token_details': {}, 'output_token_details': {}}
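
A returned tool call can then be executed and its result passed back to the model as a ToolMessage so it can produce a final answer. A minimal sketch of this loop using standard LangChain messages (not specific to ChatQwen):

from langchain_core.messages import HumanMessage, ToolMessage

# Keep the original question and the model's tool-call message in the history.
history = [HumanMessage("What's 5 times forty two"), msg]

# Run each requested tool and append its result as a ToolMessage.
for tool_call in msg.tool_calls:
    result = multiply.invoke(tool_call["args"])
    history.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))

# Ask the model to finish the answer with the tool output in context.
final_msg = llm_with_tools.invoke(history)
print(final_msg.content)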

Vision Support

Image

from langchain_core.messages import HumanMessage

model = ChatQwen(model="qwen-vl-max-latest")

messages = [
    HumanMessage(content=[
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/image/image.png"
            },
        },
        {"type": "text", "text": "What do you see in this image?"},
    ])
]

response = model.invoke(messages)
print(response)
content='In this image, I see a heartwarming scene on a sandy beach during what appears to be either sunrise or sunset, given the warm, soft golden light. A woman is sitting on the sand, smiling warmly as she gently holds the paw of a large, light-colored dog—likely a Labrador Retriever. The dog is sitting upright, facing her, and seems to be engaging in a friendly handshake or high-five gesture with her. \n\nThe woman is wearing a plaid shirt and dark pants, and her long hair flows loosely. She looks happy and relaxed, enjoying the moment with her pet. The dog is wearing a colorful harness with patterns, and a leash lies loosely on the sand nearby. \n\nIn the background, gentle waves roll onto the shore, and the horizon blends softly into the sky, creating a serene and peaceful atmosphere. The overall mood of the image is joyful, affectionate, and tranquil, capturing a special bond between the woman and her dog in a beautiful natural setting.' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-vl-max-latest'} id='run--22fecc27-9455-4426-bbd4-aa5a5bde400c-0' usage_metadata={'input_tokens': 1265, 'output_tokens': 202, 'total_tokens': 1467, 'input_token_details': {}, 'output_token_details': {}}

Video

from langchain_core.messages import HumanMessage

model = ChatQwen(model="qwen-vl-max-latest")

messages = [
    HumanMessage(content=[
        {
            "type": "video_url",
            "video_url": {
                "url": "https://example.com/video/1.mp4"
            },
        },
        {"type": "text", "text": "Can you tell me about this video?"},
    ])
]

response = model.invoke(messages)
print(response)
content='This video features a young woman with short, neatly styled brown hair and bangs. She is wearing a soft pink knitted cardigan over a white top, accessorized with a delicate necklace. The background is softly blurred, suggesting an outdoor urban setting with modern buildings, possibly during daytime with natural lighting.\n\nThroughout the video, the woman maintains a warm and friendly demeanor, smiling gently and occasionally laughing. Her expressions are animated and cheerful, conveying a sense of happiness and approachability. The lighting highlights her features beautifully, giving the scene a pleasant and inviting atmosphere.\n\nThe watermark in the top-right corner indicates that this video was created using AI synthesis technology ("通义·AI合成"), suggesting that the content may be generated or enhanced by artificial intelligence. This could mean the visuals are highly polished and stylized, typical of AI-generated media.\n\nOverall, the video appears to be a short, positive, and uplifting clip, possibly intended for use in promotional content, social media, or as part of a digital character presentation.' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-vl-max-latest'} id='run--c567a6e2-14e0-47e1-acb6-cfe2953fb1ad-0' usage_metadata={'input_tokens': 3607, 'output_tokens': 204, 'total_tokens': 3811, 'input_token_details': {}, 'output_token_details': {}}

API reference

For detailed documentation of all ChatQwen features and configurations, head to the API reference.