ChatOllama
This will help you get started with ChatOllama chat models. For detailed documentation of all ChatOllama features and configurations, head to the API reference.
Overview
Integration details
Ollama allows you to use a wide range of models with different capabilities. Some of the fields in the details table below only apply to a subset of models that Ollama offers. For a complete list of supported models and model variants, see the Ollama model library and search by tag.

Class | Package | Local | Serializable | PY support
---|---|---|---|---
ChatOllama | @langchain/ollama | ✅ | beta | ✅
Model features
See the links in the table headers below for guides on how to use specific features.

Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs
---|---|---|---|---|---|---|---|---
✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌
Setup
Follow these instructions to set up and run a local Ollama instance. Then, download the `@langchain/ollama` package.
Credentials
If you want to get automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
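For example, in a Node environment (a sketch assuming the standard LangSmith environment variable names):

```typescript
// Uncomment to enable LangSmith tracing (assumes the standard LangSmith env vars)
// process.env.LANGSMITH_TRACING = "true";
// process.env.LANGSMITH_API_KEY = "your-api-key";
```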
Installation
The LangChain ChatOllama integration lives in the `@langchain/ollama` package:
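For example, with npm (yarn and pnpm work equivalently):

```bash
npm install @langchain/ollama @langchain/core
```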
Instantiation
Now we can instantiate our model object and generate chat completions:
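A minimal sketch, assuming a local Ollama server on the default port with a model such as llama3 already pulled:

```typescript
import { ChatOllama } from "@langchain/ollama";

const llm = new ChatOllama({
  model: "llama3", // any chat model you have pulled locally
  temperature: 0,
  // baseUrl defaults to http://localhost:11434
});
```

Invocation
With the model instantiated, we can invoke it with a list of messages:

```typescript
const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
console.log(aiMsg.content);
```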
Chaining
We can chain our model with a prompt template like so:
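A sketch of chaining with a prompt template, reusing the llm instance from above:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

// Pipe the prompt into the model to form a simple chain.
const chain = prompt.pipe(llm);
const chainResult = await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
console.log(chainResult.content);
```

Tools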
Ollama now offers support for native tool calling for a subset of their available models. The example below demonstrates how you can invoke a tool from an Ollama model.
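A sketch using an illustrative weather tool; the tool name, schema, and the llama3-groq-tool-use model choice are assumptions, and any tool-calling-capable Ollama model will do:

```typescript
import { tool } from "@langchain/core/tools";
import { ChatOllama } from "@langchain/ollama";
import { z } from "zod";

// Illustrative tool: returns a canned weather report for a location.
const weatherTool = tool(
  async ({ location }) => `It is currently sunny in ${location}.`,
  {
    name: "get_current_weather",
    description: "Get the current weather for a given location.",
    schema: z.object({
      location: z.string().describe("The city to get the weather for"),
    }),
  }
);

const toolModel = new ChatOllama({
  model: "llama3-groq-tool-use", // any Ollama model with tool-calling support
  temperature: 0,
}).bindTools([weatherTool]);

const toolCallResult = await toolModel.invoke(
  "What's the weather like today in San Francisco?"
);
console.log(toolCallResult.tool_calls);
```

.withStructuredOutput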
For models that support tool calling, you can also call `.withStructuredOutput()` to get a structured output from the tool.
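A sketch, again assuming an illustrative weather schema and a tool-calling-capable model:

```typescript
import { ChatOllama } from "@langchain/ollama";
import { z } from "zod";

// Illustrative schema describing the structured output we want back.
const weatherSchema = z.object({
  location: z.string().describe("The city to get the weather for"),
});

const structuredModel = new ChatOllama({
  model: "llama3-groq-tool-use", // any tool-calling-capable model
  temperature: 0,
}).withStructuredOutput(weatherSchema, { name: "get_current_weather" });

const structuredResult = await structuredModel.invoke(
  "What's the weather like today in San Francisco?"
);
console.log(structuredResult); // e.g. { location: "San Francisco" }
```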
JSON mode
Ollama also supports a JSON mode for all chat models that coerces model outputs to only return JSON. Here’s an example of how this can be useful for extraction:
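A sketch of JSON mode used for a simple extraction task (the prompt wording and output keys are illustrative):

```typescript
import { ChatOllama } from "@langchain/ollama";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const jsonModel = new ChatOllama({
  model: "llama3",
  format: "json", // coerce the model to emit valid JSON
  temperature: 0,
});

const extractionPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Extract the character's name and origin from the input. Respond only with JSON using the keys 'name' and 'origin'.",
  ],
  ["human", "{input}"],
]);

const extractionChain = extractionPrompt.pipe(jsonModel);
const extracted = await extractionChain.invoke({
  input: "Garfield is a cartoon cat from the United States.",
});
console.log(extracted.content); // a JSON string such as {"name": "Garfield", "origin": "United States"}
```

Multimodal models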
Ollama supports open source multimodal models like LLaVA in versions 0.1.15 and up. You can pass images as part of a message’s `content` field to multimodal-capable models like this:
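A sketch passing a base64-encoded local image to a multimodal model; the file path and the llava model choice are assumptions:

```typescript
import * as fs from "node:fs/promises";
import { ChatOllama } from "@langchain/ollama";
import { HumanMessage } from "@langchain/core/messages";

// Read and base64-encode a local image (the path is illustrative).
const imageData = await fs.readFile("./hotdog.jpg");

const multimodalModel = new ChatOllama({
  model: "llava", // a multimodal-capable model
  temperature: 0,
});

const imageResponse = await multimodalModel.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What is in this image?" },
      {
        type: "image_url",
        image_url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    ],
  }),
]);
console.log(imageResponse.content);
```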