This will help you get started with VertexAI completion models (LLMs), such as `gemini-1.5-pro` and `gemini-1.5-flash`, using LangChain. For detailed documentation on VertexAI features and configuration options, please refer to the API reference.
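As a quick preview, instantiating and calling the model looks roughly like this. This is a minimal sketch: it assumes the integration package is already installed and that Google Cloud credentials are configured as described in the Setup section below, and the model name and prompt are just example values.

```typescript
// Minimal sketch: assumes @langchain/google-vertexai is installed and
// Google Cloud credentials are configured (see the Setup section).
import { VertexAI } from "@langchain/google-vertexai";

const llm = new VertexAI({
  model: "gemini-1.5-flash", // example model name
  temperature: 0,
});

// Invoke the model with a plain string prompt.
const response = await llm.invoke("Why is the sky blue?");
console.log(response);
```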
## Overview

### Integration details
| Class | Package | Local | Serializable | PY support |
| --- | --- | --- | --- | --- |
| VertexAI | `@langchain/google-vertexai` | ❌ | ✅ | ✅ |
## Setup
LangChain.js supports two different authentication methods depending on whether you're running in a Node.js environment or a web environment. To access VertexAI models you'll need to create a Google Cloud Platform (GCP) account, get an API key, and install the `@langchain/google-vertexai` integration package.
### Credentials
#### Node.js
You should make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:

- You are logged into an account (using `gcloud auth application-default login`) permitted to that project.
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file, or
- You set the `GOOGLE_API_KEY` environment variable to the API key for the project.
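The options above can be sketched as shell commands. The file path and API key shown here are placeholders, not real values; the second option (running on a machine with an attached service account) needs no local command.

```shell
# Option 1: authenticate with your user account via the gcloud CLI
gcloud auth application-default login

# Option 3: point at a downloaded service account key file
# (the path is a placeholder)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"

# Option 4: use an API key for the project (placeholder value)
export GOOGLE_API_KEY="your-api-key"
```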
#### Web
To call Vertex AI models in web environments (like Edge functions), you'll need to install the `@langchain/google-vertexai-web` package.

Then, you'll need to add your service account credentials directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:
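For example, you can set the variable to the stringified contents of your service account's JSON key file. The value below is a truncated placeholder, not a working credential.

```shell
# Placeholder: substitute the full stringified JSON of your key file
export GOOGLE_VERTEX_AI_WEB_CREDENTIALS='{"type":"service_account","project_id":"YOUR_PROJECT", ...}'
```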
### Installation
The LangChain VertexAI integration lives in the `@langchain/google-vertexai` package:
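For example, with npm (yarn and pnpm work analogously):

```shell
npm install @langchain/google-vertexai
```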