IBM watsonx.ai
For detailed features and configuration options, please refer to the IBM watsonx.ai documentation.
Overview
Integration details
Class | Package | Local | Serializable | PY support | Downloads | Version |
---|---|---|---|---|---|---|
WatsonxLLM | @langchain/community | ❌ | ✅ | ✅ | | |
Setup
To access IBM watsonx.ai models you'll need to create an IBM watsonx.ai account, get an API key or another type of credentials, and install the @langchain/community integration package.
Credentials
Head to IBM Cloud to sign up for IBM watsonx.ai and generate an API key, or provide another form of authentication as presented below:
- IAM authentication
- Bearer token authentication
- IBM watsonx.ai software authentication
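Depending on the authentication method you choose, you typically export the matching environment variables before running your code. The variable names below follow the IBM watsonx.ai Node.js SDK conventions; verify them against your setup:

```shell
# IAM authentication
export WATSONX_AI_AUTH_TYPE=iam
export WATSONX_AI_APIKEY="<YOUR_API_KEY>"

# Bearer token authentication
# export WATSONX_AI_AUTH_TYPE=bearertoken
# export WATSONX_AI_BEARER_TOKEN="<YOUR_BEARER_TOKEN>"

# IBM watsonx.ai software authentication
# export WATSONX_AI_AUTH_TYPE=cp4d
# export WATSONX_AI_USERNAME="<YOUR_USERNAME>"
# export WATSONX_AI_PASSWORD="<YOUR_PASSWORD>"
# export WATSONX_AI_URL="<URL>"
```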
Installation
The LangChain IBM watsonx.ai integration lives in the @langchain/community package:
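A typical installation with npm (yarn or pnpm work analogously); @langchain/core is assumed as a peer dependency:

```shell
npm install @langchain/community @langchain/core
```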
Instantiation
Now we can instantiate our model object and generate completions:
- You must provide spaceId, projectId or idOrName (deployment id), unless you use the lightweight engine, which works without specifying either (refer to the watsonx.ai docs).
- Depending on the region of your provisioned service instance, use the correct serviceUrl.
- You need to specify the model you want to use for inferencing through model_id.
Invocation and generation
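A hedged sketch of single-prompt invocation and batched generation, with model setup as in Instantiation (identifiers are placeholders):

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const llm = new WatsonxLLM({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-13b-instruct-v2",
  maxNewTokens: 100,
});

// invoke() completes a single prompt and resolves to a string.
const completion = await llm.invoke("Print the result of 2 + 2.");
console.log(completion);

// generate() accepts several prompts and resolves to an LLMResult.
const result = await llm.generate(["About Paris", "About Rome"]);
console.log(result.generations[0][0].text);
```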
Chaining
We can chain our completion model with a prompt template like so:
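A sketch of such a chain, built by piping a prompt template into the model (template text and configuration values are assumptions):

```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const llm = new WatsonxLLM({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-13b-instruct-v2",
});

// Pipe the prompt template into the model to form a runnable chain.
const prompt = PromptTemplate.fromTemplate(
  "How to say {input} in {output_language}:\n"
);
const chain = prompt.pipe(llm);

const response = await chain.invoke({
  input: "I love programming",
  output_language: "German",
});
console.log(response);
```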
Props overwriting
Props passed at initialization persist for the whole life cycle of the object; however, you may overwrite them for a single method call by passing a second argument, as below.
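A sketch of overwriting an initialization prop for one call; that the call options accept the same generation parameters (here maxNewTokens) is an assumption consistent with the text above:

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const llm = new WatsonxLLM({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-13b-instruct-v2",
  maxNewTokens: 100,
});

// The second argument overrides initialization props for this call only;
// subsequent calls fall back to maxNewTokens: 100 set above.
const short = await llm.invoke("Print hello world.", { maxNewTokens: 5 });
console.log(short);
```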
Tokenization
This package has its own custom getNumTokens implementation, which returns the exact number of tokens that would be used.
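Counting tokens for a prompt might then look like this (model setup as in Instantiation; values are placeholders):

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const llm = new WatsonxLLM({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-13b-instruct-v2",
});

// getNumTokens resolves to the exact token count the model would consume.
const n = await llm.getNumTokens("Print hello world.");
console.log(n);
```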
API reference
For detailed documentation of all IBM watsonx.ai features and configurations, head to the API reference: API docs