MistralAI
This page will help you get started with MistralAI completion models (LLMs). For detailed documentation of all MistralAI features and configuration options, please refer to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | PY support | Downloads | Version |
---|---|---|---|---|---|---|
MistralAI | @langchain/mistralai | ❌ | ✅ | ❌ | | |
Setup
To access MistralAI models you'll need to create a MistralAI account, get an API key, and install the @langchain/mistralai integration package.
Credentials
Head to console.mistral.ai to sign up to MistralAI and generate an API key. Once you've done this, set the MISTRAL_API_KEY environment variable:
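For example, in a bash-compatible shell (the key value below is a placeholder):

```shell
export MISTRAL_API_KEY="your-api-key"
```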
Installation
The LangChain MistralAI integration lives in the @langchain/mistralai package:
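With npm, the install command would look like this (adding @langchain/core for the shared base abstractions):

```shell
npm install @langchain/mistralai @langchain/core
```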
Instantiation
Now we can instantiate our model object and generate completions:

Invocation
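The two steps above can be sketched together as follows. The model name and parameters are illustrative, and the client reads MISTRAL_API_KEY from the environment, so running this requires valid credentials:

```typescript
import { MistralAI } from "@langchain/mistralai";

// Instantiation: model name and temperature are example values.
const llm = new MistralAI({
  model: "codestral-latest",
  temperature: 0,
});

// Invocation: pass a prompt string and await the completion text.
const completion = await llm.invoke(
  "You can print 'hello world' to the console in JavaScript with"
);
console.log(completion);
```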
Chaining
We can chain our completion model with a prompt template like so:

Because Mistral exposes a completions-style API, you can also insert a suffix to the prompt. Suffixes can be passed via the call options when invoking a model like so:
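Both patterns can be sketched as below, again assuming the codestral-latest model and a valid MISTRAL_API_KEY in the environment:

```typescript
import { MistralAI } from "@langchain/mistralai";
import { PromptTemplate } from "@langchain/core/prompts";

const llm = new MistralAI({ model: "codestral-latest", temperature: 0 });

// Chaining: pipe a prompt template into the completion model.
const prompt = PromptTemplate.fromTemplate("How to say {input} in {language}:\n");
const chain = prompt.pipe(llm);
console.log(await chain.invoke({ input: "hello", language: "French" }));

// Suffix: passed as a call option; the model is asked to complete a code
// fence and stop at the closing three backticks.
const res = await llm.invoke(
  "Write javascript code to print hello world to the console. Code should be wrapped in ```:\n```javascript\n",
  { suffix: "```" }
);
console.log(res);
```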
```typescript
console.log('hello world')
```
Without a suffix, the model completed the prompt with the requested code snippet, but also included extra unwanted text. By adding a suffix, we can constrain the model to only complete the prompt up to the suffix (in this case, three backticks). This allows us to easily parse the completion and extract only the desired response without the suffix using a custom output parser.
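The parsing step can be sketched as plain string logic. The helper name here is hypothetical, not a LangChain API; it simply truncates the completion at the first occurrence of the suffix:

```typescript
// Hypothetical helper: return the completion text up to the first
// occurrence of the suffix; if the suffix never appears, return the
// completion unchanged.
function extractBeforeSuffix(completion: string, suffix: string): string {
  const idx = completion.indexOf(suffix);
  return idx === -1 ? completion : completion.slice(0, idx);
}

// A suffix-constrained completion ends at the closing backticks:
const raw = "console.log('hello world')\n```";
console.log(extractBeforeSuffix(raw, "```").trim());
// → console.log('hello world')
```

The same logic could be wrapped in a LangChain output parser and chained after the model, so downstream steps only ever see the cleaned completion.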