Baseten is a provider of all the infrastructure you need to deploy and serve ML models performantly, reliably, and scalably.
As a model inference platform, Baseten is a Provider in the LangChain ecosystem. The Baseten integration currently implements Chat Models and Embeddings components.
Baseten lets you both access open source models like Kimi K2 or GPT OSS through Model APIs by specifying a model slug, and run proprietary or fine-tuned models on dedicated GPUs through dedicated deployments by specifying a model_url.
Installation and Setup
You’ll need two things to use Baseten models with LangChain:
- A Baseten account
- An API key

Set your API key as an environment variable called BASETEN_API_KEY.
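
A minimal setup sketch, assuming you are working in Python and already have the Baseten LangChain integration installed; it reads the key interactively so it is never hard-coded:

```python
import getpass
import os

# Export your Baseten API key as the BASETEN_API_KEY environment variable.
# Prompting with getpass keeps the key out of your source code.
if "BASETEN_API_KEY" not in os.environ:
    os.environ["BASETEN_API_KEY"] = getpass.getpass("Baseten API key: ")
```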
Chat Models (Model APIs and Dedicated Deployments)
See a usage example.
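
The linked usage example is authoritative; as a hedged sketch, assuming the integration exposes a ChatBaseten class that accepts either a model slug (Model APIs) or a model_url (dedicated deployment), usage might look like:

```python
# Sketch only: ChatBaseten and its parameters are assumptions based on the
# description above; consult the linked usage example for the exact API.
from langchain_baseten import ChatBaseten

# Model APIs: reference an open source model by its slug (slug is illustrative).
chat = ChatBaseten(model="moonshotai/Kimi-K2-Instruct")

# Dedicated deployment: point at your own deployment instead (placeholder URL).
# chat = ChatBaseten(model_url="https://model-xxxxxx.api.baseten.co/environments/production/sync/v1")

print(chat.invoke("Say hello in one sentence.").content)
```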
Embeddings (Dedicated Deployments Only)
See a usage example.
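
Similarly, a hedged sketch for embeddings, assuming a BasetenEmbeddings class that targets a dedicated deployment via model_url (the URL below is a placeholder):

```python
# Sketch only: BasetenEmbeddings and model_url are assumptions; embeddings
# require a dedicated deployment, so a deployment URL is used here.
from langchain_baseten import BasetenEmbeddings

embeddings = BasetenEmbeddings(
    model_url="https://model-xxxxxx.api.baseten.co/environments/production/sync",  # placeholder
)

vector = embeddings.embed_query("Baseten runs models on dedicated GPUs.")
print(len(vector))
```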