AI Orchestration

Orkes Conductor provides features to build and orchestrate AI-powered applications using LLMs and vector databases. From simple LLM orchestration tasks to complex agentic AI orchestration where decisions are made dynamically based on model output, you can design, govern, and run AI workflows at scale.

Key features include:

  • AI Tasks: Use predefined system tasks to generate text, create embeddings, and retrieve results from vector databases.
  • AI/LLM and Vector Database Integrations: Connect to multiple AI models and vector databases in a secure, governed way.
  • AI Prompt Studio: Create, refine, test, and govern prompt templates for AI models.

You can use these features to build:

  • LLM orchestration pipelines
  • Agentic AI workflows
  • RAG (retrieval augmented generation) systems
  • LLM-powered chatbots

AI and LLM tasks

Orkes Conductor provides a variety of AI tasks that can execute common logic without the need to write code. Depending on the task type, these tasks may require an AI/LLM integration, a vector database integration, or an AI prompt.

| AI Task | Description | Prerequisites |
| --- | --- | --- |
| LLM Text Complete | Generate text from an LLM based on a defined prompt. | Integrate an AI model • Create an AI prompt |
| LLM Generate Embeddings | Generate text embeddings. | Integrate an AI model |
| LLM Store Embeddings | Store text embeddings in a vector database. | Integrate an AI model • Integrate a vector database |
| LLM Get Embeddings | Retrieve data from a vector database. | Integrate a vector database |
| LLM Index Document | Chunk, generate, and store text embeddings in a vector database. | Integrate an AI model • Integrate a vector database |
| LLM Get Document | Retrieve text or JSON content from a URL. | NA |
| LLM Index Text | Generate and store text embeddings in a vector database. | Integrate an AI model • Integrate a vector database |
| LLM Search Index | Retrieve data from a vector database based on a search query. | Integrate an AI model • Integrate a vector database |
| LLM Chat Complete | Generate text from an LLM based on a user query and additional system/assistant instructions. | Integrate an AI model • Create an AI prompt (optional) |
| Chunk Text | Divide text into smaller segments (chunks) based on the document type. | NA |
| List Files | Retrieve files from a specific storage location. | Integrate cloud providers for private files |
| Parse Document | Retrieve, parse, and chunk documents from various storage locations. | Integrate cloud providers for private files |
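As a rough sketch of how these tasks appear in a workflow definition, an LLM Text Complete task might look like the fragment below. The task name, the `llmProvider` and `model` values, and the prompt name are hypothetical placeholders; the exact values depend on the integrations and prompts configured in your cluster.

```json
{
  "name": "generate_summary",
  "taskReferenceName": "generate_summary_ref",
  "type": "LLM_TEXT_COMPLETE",
  "inputParameters": {
    "llmProvider": "my_openai_integration",
    "model": "gpt-4o",
    "promptName": "summarize_article",
    "promptVariables": {
      "article": "${workflow.input.articleText}"
    },
    "temperature": 0.1
  }
}
```

Here `promptVariables` maps the prompt template's variables to workflow inputs or upstream task outputs, which is how the same prompt can be reused across workflows.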

AI/LLM and vector database integrations

Orkes Conductor integrates with multiple AI/LLM providers and vector databases.

Each integration is configured at the cluster level with provider credentials and access to models or indexes. Once configured, the integration and its models can be referenced in AI tasks within workflows, but only by applications or groups that have been explicitly granted access through role-based access control (RBAC).

AI Prompt Studio

Orkes Conductor includes a dedicated Prompt Studio for creating, testing, and refining prompt templates. Prompts created here are reusable across any workflow that contains an LLM Text Complete or LLM Chat Complete task.

Prompts support passing dynamic variables using ${variable_name} syntax. At runtime, these variables are resolved from workflow inputs or the outputs of upstream tasks, allowing a single prompt template to serve different contexts without modification.
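Conceptually, this substitution step behaves like Python's `string.Template`, which happens to use the same `${...}` syntax. The sketch below is only an illustration of the resolution behavior; the template text and variable values are made up, and Conductor performs this resolution itself at runtime.

```python
from string import Template

# Hypothetical prompt template using the ${variable_name} syntax.
prompt_template = "Summarize the following article in a ${tone} tone:\n\n${article}"

# At runtime, Conductor resolves each variable from workflow inputs or
# upstream task outputs. string.Template.substitute() models that step.
resolved = Template(prompt_template).substitute(
    tone="neutral",
    article="Orkes Conductor orchestrates AI-powered workflows...",
)
print(resolved)
```

Because the variables are bound at runtime, the same template can serve many workflows: one execution might pass `tone="neutral"` while another passes `tone="casual"`, with no change to the template itself.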
