ChatWatsonx is a wrapper for IBM watsonx.ai foundation models. The aim of these examples is to show how to communicate with watsonx.ai models using the LangChain LLMs API.
Class | Package | Local | Serializable | JS support
---|---|---|---|---
ChatWatsonx | langchain-ibm | ❌ | ❌ | ✅
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |
To access IBM watsonx.ai models, you'll need to install the langchain-ibm integration package:
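The package can be installed from PyPI, for example with pip:

```shell
pip install -qU langchain-ibm
```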
You might need to adjust model parameters for different models or tasks. For details, refer to Available TextChatParameters.
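For illustration, generation parameters can be collected in a plain dictionary. The field names below (such as temperature and max_tokens) follow TextChatParameters; the values are assumptions chosen only for this sketch:

```python
# Example generation parameters; names follow TextChatParameters,
# values are illustrative only.
parameters = {
    "temperature": 0.5,  # lower values give more deterministic output
    "max_tokens": 100,   # cap on the number of generated tokens
    "top_p": 0.9,        # nucleus-sampling threshold
}
```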
Initialize the ChatWatsonx class with the previously set parameters.
Note: To provide context for the API call, you must pass the project_id or space_id. To get your project or space ID, open your project or space, go to the Manage tab, and click General. For more information see: Project documentation or Deployment space documentation.

In this example, we'll use the project_id and Dallas URL.
You need to specify the model_id that will be used for inferencing. You can find the list of all the available models in Supported chat models.
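A minimal instantiation sketch, assuming the Dallas endpoint and placeholder credentials; the model_id shown is only an example, so substitute any supported chat model:

```python
from langchain_ibm import ChatWatsonx

chat = ChatWatsonx(
    model_id="ibm/granite-34b-code-instruct",  # example model; pick any supported chat model
    url="https://us-south.ml.cloud.ibm.com",   # Dallas endpoint
    project_id="PASTE YOUR PROJECT_ID HERE",
    params={"temperature": 0.5, "max_tokens": 100},  # illustrative parameters
)
```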
Alternatively, instead of model_id, you can pass the deployment_id of a previously deployed model with reference to a Prompt Template.
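A sketch of the deployment_id variant, again with placeholder credentials; the parameter names mirror the instantiation above:

```python
from langchain_ibm import ChatWatsonx

chat = ChatWatsonx(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",  # ID of the deployed prompt template
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
)
```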
For certain requirements, there is the option to pass IBM's APIClient object into the ChatWatsonx class.
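A sketch of this pattern, assuming the ibm_watsonx_ai SDK's APIClient and Credentials classes and a watsonx_client parameter on ChatWatsonx (treat the exact parameter name as an assumption and check the API reference):

```python
from ibm_watsonx_ai import APIClient, Credentials
from langchain_ibm import ChatWatsonx

# Build the SDK client once and reuse it across model wrappers
api_client = APIClient(
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="PASTE YOUR API KEY HERE",
    )
)

chat = ChatWatsonx(
    model_id="ibm/granite-34b-code-instruct",
    watsonx_client=api_client,  # pass the existing APIClient instead of raw credentials
    project_id="PASTE YOUR PROJECT_ID HERE",
)
```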
Create ChatPromptTemplate objects which will be responsible for creating a random question.
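The step above can be sketched with LCEL's pipe syntax, assuming a chat model instance named chat has already been created; the prompt text and topic are illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate

# Prompt that asks the model to produce a random question about a topic
prompt = ChatPromptTemplate.from_template(
    "Generate a random question about {topic}: Question: "
)

# Chain the prompt into the chat model and run it
chain = prompt | chat
response = chain.invoke({"topic": "dogs"})
```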
Tool calls made by the model are exposed via the tool_calls attribute. This contains tool calls in a standardized ToolCall format that is model-provider agnostic.
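A short tool-calling sketch, assuming a chat instance from earlier; the GetWeather schema is a hypothetical tool defined only for illustration:

```python
from pydantic import BaseModel, Field

# Hypothetical tool schema for illustration
class GetWeather(BaseModel):
    """Get the current weather in a given location."""

    location: str = Field(..., description="The city, e.g. San Francisco, CA")

# Bind the tool to the model and invoke it
chat_with_tools = chat.bind_tools([GetWeather])
ai_msg = chat_with_tools.invoke("What's the weather like in Boston?")

# Tool calls arrive in the provider-agnostic ToolCall format
for call in ai_msg.tool_calls:
    print(call["name"], call["args"])
```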
For detailed documentation of all ChatWatsonx features and configurations, head to the API reference.