Integrating with Google Vertex AI in Orkes Conductor
To use AI and LLM tasks in Orkes Conductor, you must first integrate your Orkes Conductor cluster with the required AI/LLM model providers.
Google Vertex AI offers a range of models that can be incorporated into the Orkes Conductor cluster. The right model depends on your use case, the functionality you require, and the specific natural language processing tasks you intend to tackle.
This guide provides the steps for integrating the Google Vertex AI provider with Orkes Conductor.
Steps to integrate with Google Vertex AI
Before beginning the integration in Orkes Conductor, you must obtain two configuration credentials from the Google Cloud (GCP) console: the project ID and a Service Account JSON key.
To get the project ID:
- Log in to the Google Cloud console and create a project.
- If you have multiple projects, click the drop-down menu on the top left of the console to select your desired project.
- The Project ID will be displayed on the dashboard below the project name.
Refer to the official documentation on creating and managing projects in GCP for more details.
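If you prefer working programmatically, you can also confirm the active project ID with the google-auth Python library. A minimal sketch, assuming Application Default Credentials are already configured on your machine (for example, via `gcloud auth application-default login`):

```python
# Sketch: confirm the active GCP project ID using Application Default
# Credentials. Requires `pip install google-auth` and prior
# `gcloud auth application-default login`.
import google.auth

# google.auth.default() returns the resolved credentials and, when
# available, the project ID associated with them.
credentials, project_id = google.auth.default()
print(f"Active project ID: {project_id}")
```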
To get the Service Account JSON:
- From the left menu, navigate to the IAM & Admin section.
- Select Service Accounts from the left menu.
- Click on an existing service account you want to use or create a new one.
- Under the Keys sub-tab, click Add Key.
- Choose Create new key.
- Set the key type to JSON and click Create to generate and download the JSON key.
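Before uploading the key in the next section, you can sanity-check it locally. A minimal sketch using the google-auth library; the filename `service-account.json` is illustrative:

```python
# Sketch: validate a downloaded service account key file locally.
# Requires `pip install google-auth`. The filename below is illustrative.
import json

from google.oauth2 import service_account

KEY_FILE = "service-account.json"

# Confirm the file is well-formed JSON and carries the expected fields.
with open(KEY_FILE) as f:
    key = json.load(f)
print(f"Key type: {key['type']}, project: {key['project_id']}")

# Confirm the key can actually be turned into credentials.
creds = service_account.Credentials.from_service_account_file(KEY_FILE)
print(f"Service account: {creds.service_account_email}")
```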
Integrating with Google Vertex AI as a model provider
Let’s integrate Google Vertex AI with Orkes Conductor.
- Navigate to Integrations from the left menu on your Orkes Conductor cluster.
- Click the +New integration button in the top-right corner.
- Under the AI/LLM section, choose Google Vertex AI.
- Click +Add and provide the following parameters:
Parameter | Description |
---|---|
Integration name | A name for the integration. |
Project ID | The project ID in GCP. |
Location | The Google Cloud region of your GCP account. |
Publisher | The publisher name in GCP. |
Service Account JSON | Upload the Service Account JSON key file, which contains the credentials for authenticating the Orkes Conductor cluster with GCP services. Refer to the previous section on how to generate this key. |
Description | A description of your integration. |
- You can toggle on the Active button to activate the integration instantly.
- Click Save.
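If you manage cluster configuration from scripts, you can confirm the integration was saved over REST. A minimal sketch; the cluster URL and application key pair are placeholders, and the integrations listing path is an assumption, so verify both against your cluster's API documentation:

```python
# Sketch: verify the new integration via the Orkes Conductor REST API.
# The cluster URL and key pair below are placeholders; the integrations
# listing path is an assumption -- check your cluster's API docs.
import requests

CONDUCTOR_URL = "https://your-cluster.orkesconductor.io"  # placeholder
KEY_ID = "your-key-id"          # from Access Control > Applications
KEY_SECRET = "your-key-secret"  # placeholder

# Exchange the application key pair for a short-lived JWT.
token = requests.post(
    f"{CONDUCTOR_URL}/api/token",
    json={"keyId": KEY_ID, "keySecret": KEY_SECRET},
).json()["token"]

# List configured integrations and look for the Vertex AI entry.
resp = requests.get(
    f"{CONDUCTOR_URL}/api/integrations",  # path is an assumption
    headers={"X-Authorization": token},
)
resp.raise_for_status()
print([item.get("name") for item in resp.json()])
```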
Adding Google Vertex AI models to the integration
Your Orkes Conductor cluster is now integrated with the Google Vertex AI provider. The next step is to add the specific Vertex AI models you plan to use.
Google Vertex AI offers different models, such as Bison and Gecko, each suited to a different use case, such as text completion or generating embeddings.
Depending on your use case, configure the appropriate models within your Google Vertex AI integration.
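For instance, a text-completion workflow and an embedding workflow would reference different models. A minimal sketch of this mapping; the model IDs shown are illustrative Vertex AI model names, so confirm the current names in the Vertex AI documentation:

```python
# Sketch: map each use case to the Vertex AI model you would add to the
# integration. Model IDs are illustrative; confirm current names in the
# Vertex AI documentation.
MODELS_BY_USE_CASE = {
    "text_completion": "text-bison",       # PaLM 2 for Text (Bison)
    "embeddings": "textembedding-gecko",   # Embeddings for Text (Gecko)
}
```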
To add a new model to the Google Vertex AI integration:
- In the Integrations page, click the '+' button next to the integration you created.
- Click +New model.
- Provide the model name and an optional description for the model. Refer to the official Google Vertex AI documentation for the complete list of available models.
- Toggle on the Active button to enable the model immediately.
- Click Save.
The model is now saved and available for use in LLM tasks within Orkes Conductor.
RBAC - Governance on who can use Integrations
The integration and its models are now ready. The next step is to control who can access them.
Permissions can be granted to applications or groups within the Orkes Conductor cluster.
To provide explicit permission to Groups:
- Navigate to Access Control > Groups from the left menu on your Orkes Conductor cluster.
- Create a new group or choose an existing group.
- Under the Permissions section, click +Add Permission.
- Under the Integrations tab, select the required integrations and the permission levels to grant.
- Click Add Permissions. All members of the group can now access these integrations and their models in their workflows.
Similarly, you can also provide permissions to applications.
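Permissions can also be granted programmatically. The sketch below follows the general shape of Orkes' authorization API, but the endpoint path, the INTEGRATION target type, and the payload fields are all assumptions here; confirm them against your cluster's API reference before relying on this:

```python
# Sketch: grant a group EXECUTE access to an integration via REST.
# The /api/auth/authorization path, the "INTEGRATION" target type, and
# the payload shape are assumptions -- verify against your cluster's
# API documentation.
import requests

CONDUCTOR_URL = "https://your-cluster.orkesconductor.io"  # placeholder
TOKEN = "jwt-from-api-token"  # obtained as in the earlier token sketch

resp = requests.post(
    f"{CONDUCTOR_URL}/api/auth/authorization",  # path is an assumption
    headers={"X-Authorization": TOKEN},
    json={
        "subject": {"type": "GROUP", "id": "ml-engineers"},       # hypothetical group
        "target": {"type": "INTEGRATION", "id": "my_vertex_ai"},  # assumed target type
        "access": ["EXECUTE"],
    },
)
resp.raise_for_status()
```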
Once the integration is ready, start creating workflows with LLM tasks.
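As a starting point, here is a hedged sketch of registering a minimal workflow whose single LLM text-complete task references the integration and model configured above. The task type and parameter names follow Orkes' LLM task conventions, and all names below are illustrative; verify the exact fields against your cluster's documentation:

```python
# Sketch: register a minimal workflow containing an LLM text-complete
# task that uses the integration and model configured above. The task
# type and inputParameters follow Orkes' LLM task conventions, but the
# exact field names -- and all names below -- should be verified against
# your cluster's documentation.
import requests

CONDUCTOR_URL = "https://your-cluster.orkesconductor.io"  # placeholder
TOKEN = "jwt-from-api-token"  # obtained as in the earlier token sketch

workflow_def = {
    "name": "vertex_ai_text_complete_demo",
    "version": 1,
    "schemaVersion": 2,
    "tasks": [
        {
            "name": "generate_text",
            "taskReferenceName": "generate_text_ref",
            "type": "LLM_TEXT_COMPLETE",
            "inputParameters": {
                "llmProvider": "my_vertex_ai",  # your integration name
                "model": "text-bison",          # a model added to it
                "promptName": "my_prompt",      # an AI prompt in the cluster
                "temperature": 0.1,
            },
        }
    ],
    "outputParameters": {"result": "${generate_text_ref.output.result}"},
}

# POST /api/metadata/workflow registers the workflow definition.
resp = requests.post(
    f"{CONDUCTOR_URL}/api/metadata/workflow",
    headers={"X-Authorization": TOKEN},
    json=workflow_def,
)
resp.raise_for_status()
```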