Integrating with Google Gemini AI in Orkes Conductor
To effectively utilize AI and LLM tasks in Orkes Conductor, it's essential to integrate your Conductor cluster with the necessary AI and LLM models.
Google Gemini AI offers a range of models that can be incorporated into the Orkes Conductor cluster. The choice of model depends on your unique use case, the functionalities you require, and the specific natural language processing tasks you intend to tackle.
This guide will provide the steps for integrating the Google Gemini AI provider with Orkes Conductor.
Steps to integrate with Google Gemini AI
Before beginning the integration process in Orkes Conductor, you must obtain two configuration credentials from the GCP console: the project ID and a Service Account JSON key.
To get the project ID:
- Log in to the Google Cloud Console and create a project.
- If you have multiple projects, click the drop-down menu on the top left of the console to select your desired project.
- The Project ID will be displayed on the dashboard below the project name.
Refer to the official documentation on creating and managing projects in GCP for more details.
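If you want to confirm the project ID programmatically, the following is a minimal sketch using the google-auth Python library; it assumes Application Default Credentials have already been set up (for example, via gcloud auth application-default login):

```python
# Minimal sketch, assuming google-auth is installed (pip install google-auth)
# and Application Default Credentials are configured.
import google.auth

# google.auth.default() returns the active credentials and the project ID
# inferred from the environment (it may be None if no project is configured).
credentials, project_id = google.auth.default()
print(f"Active GCP project: {project_id}")
```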
To get the Service Account JSON:
- From the left menu, navigate to the IAM & Admin section.
- Select Service Accounts from the left menu.
- Click on an existing service account you want to use or create a new one.
- Open the Keys tab and click Add Key.
- Select Create new key.
- Choose JSON as the key type and click Create. The JSON key file downloads to your machine.
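Before uploading the key to Orkes Conductor, you can sanity-check it locally. The sketch below assumes the google-auth library is installed and uses service-account.json as a placeholder for your downloaded file; project_id and client_email are standard fields of a GCP service account key:

```python
# Minimal sketch to verify a downloaded service account key file.
# "service-account.json" is a placeholder for your key file's name.
import json

from google.oauth2 import service_account

KEY_FILE = "service-account.json"

# The key file embeds the project ID and service account email, which is a
# quick way to confirm you exported credentials for the right project.
with open(KEY_FILE) as f:
    key = json.load(f)
print(f"Project ID: {key['project_id']}")
print(f"Service account: {key['client_email']}")

# Loading the file through google-auth validates its structure.
credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE,
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
print(f"Credentials loaded for: {credentials.service_account_email}")
```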
Integrating with Google Gemini AI as a model provider
Let’s integrate Google Gemini AI with Orkes Conductor.
- Navigate to Integrations from the left menu on your Orkes Conductor cluster.
- Click the + New integration button in the top-right corner.
- Under the AI/LLM section, choose Google Gemini AI.
- Click +Add and provide the following parameters:
| Parameter | Description |
| --- | --- |
| Integration name | A name for the integration. |
| Project ID | The GCP project ID. Refer to the previous section on how to get this. |
| Location | The GCP region where your resources are hosted (e.g., us-central1). |
| Service Account JSON | Upload the Service Account JSON key file containing the credentials for authenticating the Orkes Conductor cluster with GCP services. Refer to the previous section on how to generate this file. |
| Description | A description of your integration. |
- Toggle on the Active button to activate the integration instantly.
- Click Save.
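If you manage your cluster as code, the same integration can in principle be created over the Orkes Conductor REST API. The sketch below is hypothetical: the endpoint path, the provider type identifier, and the payload shape are assumptions inferred from the UI fields above, so verify them against your cluster's API reference before relying on them:

```python
# Hypothetical sketch of creating the integration via the Orkes REST API.
# The endpoint path, the "gcpgemini" provider type, and the payload shape are
# assumptions; CONDUCTOR_SERVER and AUTH_TOKEN are placeholders.
import requests

CONDUCTOR_SERVER = "https://your-cluster.orkesconductor.io/api"  # placeholder
AUTH_TOKEN = "your-jwt-token"  # placeholder

with open("service-account.json") as f:
    service_account_json = f.read()

integration = {
    # Fields mirror the UI parameters described in the table above.
    "type": "gcpgemini",  # assumed provider identifier
    "category": "AI_MODEL",
    "description": "Google Gemini AI integration",
    "enabled": True,
    "configuration": {
        "projectId": "my-gcp-project",  # your GCP project ID
        "location": "us-central1",      # your GCP region
        "serviceAccountCredentials": service_account_json,
    },
}

response = requests.put(
    f"{CONDUCTOR_SERVER}/integrations/provider/gemini-integration",
    headers={"X-Authorization": AUTH_TOKEN},
    json=integration,
)
response.raise_for_status()
```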
Adding Google Gemini AI models to the integration
You have now integrated your Orkes Conductor cluster with the Google Gemini AI provider. The next step is to integrate with the specific models. Gemini has different models: Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 1.0 Pro, and more. Each model is intended for different use cases, such as text completion and generating embeddings.
Depending on your use case, you must add the required models to your Google Gemini AI integration.
To add a new model to the Google Gemini AI integration:
- Navigate to the integrations page and click the '+' button next to the integration you created.
- Click +New model.
- Provide the model name and an optional description. Refer to the official Google Gemini AI documentation for the complete list of available models.
- Toggle on the Active button to enable the model immediately.
- Click Save.
This ensures the integration model is saved for future use in LLM tasks within Orkes Conductor.
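As an illustration of how the saved model is referenced later, here is a minimal sketch of a workflow definition that calls Conductor's LLM Text Complete system task. The integration name gemini-integration, the model gemini-1.5-pro, and the prompt template translation-prompt are illustrative placeholders:

```python
# Minimal sketch of a workflow definition using the LLM_TEXT_COMPLETE system
# task. "gemini-integration", "gemini-1.5-pro", and "translation-prompt" are
# illustrative placeholders for your own integration, model, and prompt names.
workflow_def = {
    "name": "gemini_text_complete_demo",
    "version": 1,
    "tasks": [
        {
            "name": "generate_text",
            "taskReferenceName": "generate_text_ref",
            "type": "LLM_TEXT_COMPLETE",
            "inputParameters": {
                "llmProvider": "gemini-integration",  # integration name
                "model": "gemini-1.5-pro",            # model added above
                "promptName": "translation-prompt",   # a saved prompt template
                "promptVariables": {"text": "${workflow.input.text}"},
                "temperature": 0.1,
            },
        }
    ],
    "outputParameters": {"result": "${generate_text_ref.output.result}"},
}
```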
RBAC - Governing who can use integrations
The integration with the required models is now ready. Next, determine who can access these models.
Permissions can be granted to applications or groups within the Orkes Conductor cluster.
To provide explicit permission to Groups:
- Navigate to Access Control > Groups from the left menu on your Orkes Conductor cluster.
- Create a new group or choose an existing group.
- Under the Permissions section, click +Add Permission.
- Under the Integrations tab, select the required integrations with the required permissions.
- Click Add Permissions. This ensures that all group members can access these integration models in their workflows.
Similarly, you can also provide permissions to applications.
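For completeness, here is a hypothetical sketch of granting the same access programmatically through the Orkes authorization API; the endpoint path, the INTEGRATION target type, and the group name ai-team are assumptions, so confirm them against your cluster's API documentation:

```python
# Hypothetical sketch of granting a group access to the integration via the
# Orkes authorization API. The endpoint path, the "INTEGRATION" target type,
# and the "ai-team" group name are assumptions; placeholders as before.
import requests

CONDUCTOR_SERVER = "https://your-cluster.orkesconductor.io/api"  # placeholder
AUTH_TOKEN = "your-jwt-token"  # placeholder

grant = {
    "subject": {"type": "GROUP", "id": "ai-team"},
    "target": {"type": "INTEGRATION", "id": "gemini-integration"},
    "access": ["READ", "EXECUTE"],
}

response = requests.post(
    f"{CONDUCTOR_SERVER}/auth/authorization",
    headers={"X-Authorization": AUTH_TOKEN},
    json=grant,
)
response.raise_for_status()
```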
Once the integration is ready, start creating workflows with LLM tasks.