This will help you get started with DeepSeek’s hosted chat models. For detailed documentation of all ChatDeepSeek features and configurations head to the API reference.
DeepSeek’s models are open source and can be run locally (e.g. in Ollama) or on other inference providers (e.g. Fireworks, Together) as well.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatDeepSeek | langchain-deepseek | ❌ | beta | ✅ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
DeepSeek-R1, specified via model="deepseek-reasoner", does not support tool calling or structured output. Those features are supported by DeepSeek-V3 (specified via model="deepseek-chat").

Setup

To access DeepSeek models you’ll need to create a DeepSeek account, get an API key, and install the langchain-deepseek integration package.

Credentials

Head to DeepSeek’s API Key page to sign up for DeepSeek and generate an API key. Once you’ve done this, set the DEEPSEEK_API_KEY environment variable:
import getpass
import os

if not os.getenv("DEEPSEEK_API_KEY"):
    os.environ["DEEPSEEK_API_KEY"] = getpass.getpass("Enter your DeepSeek API key: ")
To enable automated tracing of your model calls, set your LangSmith API key:
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain DeepSeek integration lives in the langchain-deepseek package:
%pip install -qU langchain-deepseek

Instantiation

Now we can instantiate our model object and generate chat completions:
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek-chat",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)

Invocation

We can invoke the model with a list of messages:
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg.content
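ChatDeepSeek also supports token-level streaming. Reusing the llm and messages defined above, a minimal sketch that prints tokens as they arrive:

```python
# Stream the response chunk by chunk instead of waiting for the full message.
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
```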

Chaining

We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)

API reference

For detailed documentation of all ChatDeepSeek features and configurations head to the API Reference.