## Overview

### Integration details
| Class | Package | Serializable | JS support |
|---|---|---|---|
| ChatGroq | langchain-groq | beta | ✅ |
### Model features
| Tool calling | Structured output | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
## Setup
To access Groq models you'll need to create a Groq account, get an API key, and install the `langchain-groq` integration package.
### Credentials
Head to the Groq console to sign up for Groq and generate an API key. Once you've done this, set the `GROQ_API_KEY` environment variable:
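For example, in a shell session (replace the placeholder with your actual key):

```shell
export GROQ_API_KEY="your-api-key"
```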
### Installation

The LangChain Groq integration lives in the `langchain-groq` package:
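Install it with pip:

```shell
pip install -qU langchain-groq
```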
## Instantiation
Now we can instantiate our model object and generate chat completions.

**Reasoning format:** If you choose to set a
`reasoning_format`, you must ensure that the model you are using supports it. You can find a list of supported models in the Groq documentation.

## Invocation
## Vision
Groq supports vision capabilities with select models, allowing you to send images along with text prompts.

### Vision-capable models
- `meta-llama/llama-4-scout-17b-16e-instruct`
- `meta-llama/llama-4-maverick-17b-128e-instruct`