Ollama allows you to run open-source large language models,
such as gpt-oss, locally.
Ollama
bundles model weights, configuration, and data into a single package, defined by a Modelfile.
It optimizes setup and configuration details, including GPU usage.
For a complete list of supported models and model variants, see the Ollama model library.
See this guide for more details on how to use Ollama with LangChain.
Run `ollama pull <name-of-model>` to download a model from the Ollama model library.
Use `ollama list` to view the models you have downloaded.
Install the `langchain-ollama` partner package and run a model.
Tool calling is supported via the `BaseChatModel.bind_tools()` method, as described here.
Make sure to select an Ollama model that supports tool calling.