
Ollama Integration with Orkes Conductor

note

You can use this integration in the following scenarios:

  • Conductor is running locally, and Ollama is running locally.
  • Conductor is running in the cloud, and your AI models are hosted on a server accessible to Ollama.

To use system AI tasks in Orkes Conductor, you must integrate your Conductor cluster with the necessary AI/LLM providers. This guide explains how to integrate Ollama with Orkes Conductor. Here’s an overview:

  1. Set up the Ollama app locally.
  2. Configure a new Ollama integration in Orkes Conductor.
  3. Add models to the integration.
  4. Set access limits for the AI models to govern which applications or groups can use them.

Step 1: Set up the Ollama app locally

To integrate Ollama with Orkes Conductor, first download and run Ollama locally on your device.

To run Ollama locally:

  1. Download and install the Ollama app.
  2. Open the app on your device.
  3. Choose the model you want to run from the list of supported Ollama models. You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
  4. Pull the model locally with the following command.
ollama pull <MODEL-NAME> 

Example:

ollama pull mistral

This downloads the model to your device.
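
You can verify the download by listing the models available on your device:

ollama list

The model you pulled (for example, mistral) should appear in the output along with its size.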

  5. Once downloaded, run the model using the command:
ollama run <MODEL-NAME>
  6. Enter prompts in the terminal to verify that the model runs locally, as shown in the example below.
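
For example, running the mistral model opens an interactive session where you can type a prompt directly:

ollama run mistral
>>> Why is the sky blue?

Type /bye to exit the session.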

The default local API endpoint for Ollama is http://localhost:11434. Open this URL in a browser to confirm that it displays “Ollama is running.”
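
You can also check the endpoint from the command line. The first command below confirms the server is up; the second sends a test prompt to Ollama’s generate API, using the mistral model as an example:

curl http://localhost:11434

curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Why is the sky blue?", "stream": false}'

With "stream": false, the second command returns a single JSON object whose response field contains the model’s reply.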

note

If your Conductor cluster is running in the cloud, you must host Ollama on a server accessible to that cloud environment. The server must allow network access from the Conductor cluster to the Ollama API endpoint.
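
By default, the Ollama server listens only on 127.0.0.1. To make it reachable from a remote Conductor cluster, bind it to all interfaces by setting the OLLAMA_HOST environment variable before starting the server:

OLLAMA_HOST=0.0.0.0 ollama serve

On a systemd-managed Linux install, set the same variable in the ollama service unit instead. Restrict inbound access to port 11434 (for example, with firewall rules) so that only your Conductor cluster can reach the API.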

Step 2: Add an integration for Ollama

After running Ollama locally, add an Ollama integration to your Conductor cluster.

To create an Ollama integration:

  1. Go to Integrations from the left navigation menu on your Conductor cluster.
  2. Select + New integration.
  3. In the AI/LLM section, choose Ollama.
  4. Select + Add and enter the following parameters:
Parameters:

  • Integration name: A name for the integration.
  • API Endpoint: The API server where Ollama is running, typically http://localhost:11434.
  • Description: A description of the integration.


  5. (Optional) Toggle the Active button off if you don’t want to activate the integration instantly.
  6. Select Save.
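
If Ollama is running on a remote server rather than locally, the API Endpoint takes the form http://<server-host>:11434. You can confirm the endpoint is reachable from a machine on the Conductor cluster’s network before using it:

curl http://<server-host>:11434

The server should respond with “Ollama is running.”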

Step 3: Add Ollama models

Once you’ve integrated Ollama, the next step is to configure specific models. Add the model that you have set up on your local device.

To add a model to the Ollama integration:

  1. Go to Integrations from the left navigation menu and select the + button next to the integration you created.


  2. Select + New model.
  3. Enter the Model name and a Description. The model name must match the model you pulled on your local device (for example, mistral).


  4. (Optional) Toggle the Active button off if you don’t want to activate the model instantly.
  5. Select Save.

This saves the model for future use in AI tasks within Orkes Conductor.

Step 4: Set access limits for the integration

Once the integration is configured, set access controls to manage which applications or groups can use the models.

To provide access to an application or group:

  1. Go to Access Control > Applications or Groups from the left navigation menu on your Conductor cluster.
  2. Create a new group/application or select an existing one.
  3. In the Permissions section, select + Add Permission.
  4. In the Integration tab, select the required AI models and toggle the necessary permissions.
  5. Select Add Permissions.


The group or application can now access the AI model according to the configured permissions.

With the integration in place, you can now create workflows using AI/LLM tasks.
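
As an illustration, the sketch below registers a minimal workflow containing a single LLM Text Complete system task that references the integration and model configured above. The integration name (ollama-integration), model (mistral), and prompt template (my-prompt) are placeholders for your own values; the task type and parameter names follow the Orkes AI task documentation, but verify them against your cluster version. The sketch also assumes a prompt template named my-prompt already exists on the cluster.

curl -X POST 'https://<your-cluster>/api/metadata/workflow' \
  -H 'Content-Type: application/json' \
  -H 'X-Authorization: <access-token>' \
  -d '{
    "name": "ollama_text_complete_demo",
    "version": 1,
    "schemaVersion": 2,
    "tasks": [
      {
        "name": "generate_text",
        "taskReferenceName": "generate_text_ref",
        "type": "LLM_TEXT_COMPLETE",
        "inputParameters": {
          "llmProvider": "ollama-integration",
          "model": "mistral",
          "promptName": "my-prompt",
          "temperature": 0.1
        }
      }
    ]
  }'

You can also build the same workflow visually from Definitions > Workflows in the Conductor UI.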
