# AI Orchestration
Orkes Conductor provides features to build and orchestrate AI-powered applications using LLMs and vector databases. From simple LLM orchestration tasks to complex agentic AI orchestration where decisions are made dynamically based on model output, you can design, govern, and run AI workflows at scale.
Key features include:
- AI Tasks: Use predefined system tasks to generate text, create embeddings, and retrieve results from vector databases.
- AI/LLM and Vector Database Integrations: Connect to multiple AI models and vector databases in a secure, governed way.
- AI Prompt Studio: Create, refine, test, and govern prompt templates for AI models.
You can use these features to build:
- LLM orchestration pipelines
- Agentic AI workflows
- RAG (retrieval augmented generation) systems
- LLM-powered chatbots
## AI and LLM tasks
Orkes Conductor provides a variety of AI tasks that can execute common logic without the need to write code. Depending on the task type, these tasks may require an AI/LLM integration, a vector database integration, or an AI prompt.
| AI Task | Description | Prerequisites |
|---|---|---|
| LLM Text Complete | Generate text from an LLM based on a defined prompt. | AI/LLM integration, AI prompt |
| LLM Generate Embeddings | Generate text embeddings. | AI/LLM integration |
| LLM Store Embeddings | Store text embeddings in a vector database. | AI/LLM integration, vector database integration |
| LLM Get Embeddings | Retrieve data from a vector database. | Vector database integration |
| LLM Index Document | Chunk, generate, and store text embeddings in a vector database. | AI/LLM integration, vector database integration |
| LLM Get Document | Retrieve text or JSON content from a URL. | NA |
| LLM Index Text | Generate and store text embeddings in a vector database. | AI/LLM integration, vector database integration |
| LLM Search Index | Retrieve data from a vector database based on a search query. | AI/LLM integration, vector database integration |
| LLM Chat Complete | Generate text from an LLM based on a user query and additional system/assistant instructions. | AI/LLM integration, AI prompt |
| Chunk Text | Divide text into smaller segments (chunks) based on the document type. | NA |
| List Files | Retrieve files from a specific storage location. | Storage integration |
| Parse Document | Retrieve, parse, and chunk documents from various storage locations. | Storage integration |
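For illustration, here is how one of these tasks might appear inside a workflow definition. This is a hedged sketch: the task type `LLM_TEXT_COMPLETE` follows Conductor's naming convention, but the exact input parameter names (`llmProvider`, `model`, `promptName`, `promptVariables`, `temperature`) should be verified against your cluster's task reference before use.

```json
{
  "name": "summarize_article",
  "taskReferenceName": "summarize_article_ref",
  "type": "LLM_TEXT_COMPLETE",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4o",
    "promptName": "article_summarizer",
    "promptVariables": {
      "article_text": "${workflow.input.articleText}"
    },
    "temperature": 0.1
  }
}
```

The `promptVariables` map binds workflow state to the placeholders declared in the prompt template, so the same task definition works for any input document.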
## AI/LLM and vector database integrations
Orkes Conductor integrates with the following AI/LLM providers:
- Ollama
- Azure + OpenAI
- OpenAI
- Perplexity
- Grok
- Cohere
- Mistral
- Anthropic Claude
- Google Vertex AI
- Google Gemini AI
- Hugging Face
- AWS Bedrock Anthropic
- AWS Bedrock Cohere
- AWS Bedrock Titan
For vector databases, supported providers include:
- Pinecone
- Weaviate
- Postgres Vector Database (pgvector)
- MongoDB
Each integration is configured at the cluster level with provider credentials and access to models or indexes. Once configured, the integration and its models are available to reference in AI tasks within workflows, but only for applications or groups that have been explicitly granted access through RBAC.
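To make the retrieval side concrete: a vector database ranks stored embeddings by similarity to a query embedding and returns the closest matches. The configured provider does this at scale, but the core idea can be sketched in plain Python with cosine similarity (an illustrative sketch, not Conductor's implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search_index(query_embedding, stored, top_k=2):
    """Rank stored (doc_id, embedding) pairs by similarity to the query."""
    ranked = sorted(
        stored,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Toy 3-dimensional embeddings; real models produce hundreds of dimensions.
stored = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.0, 1.0, 0.0]),
    ("doc-c", [0.9, 0.1, 0.0]),
]
print(search_index([1.0, 0.0, 0.0], stored))  # ['doc-a', 'doc-c']
```

This is the behavior the LLM Search Index task delegates to the integrated vector database: embed the query, then return the top-k nearest stored chunks.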
## AI Prompt Studio
Orkes Conductor includes a dedicated Prompt Studio for creating, testing, and refining prompt templates. Prompts created here are reusable across any workflow that contains an LLM Text Complete or LLM Chat Complete task.
Prompts support passing dynamic variables using ${variable_name} syntax. At runtime, these variables are resolved from workflow inputs or the outputs of upstream tasks, allowing a single prompt template to serve different contexts without modification.
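The substitution step can be sketched as follows. This is a hypothetical resolver for illustration only (Conductor resolves variables server-side from workflow state); it shows the intended semantics of `${variable_name}` placeholders:

```python
import re

def resolve_prompt(template: str, variables: dict) -> str:
    """Replace ${name} placeholders with values from workflow inputs
    or upstream task outputs. Unknown placeholders are left intact.
    Illustrative sketch; not Conductor's actual resolver."""
    def replace(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\$\{(\w+)\}", replace, template)

prompt = "Summarize the following text in ${word_limit} words: ${article_text}"
print(resolve_prompt(prompt, {"word_limit": 50, "article_text": "..."}))
# Summarize the following text in 50 words: ...
```

Because the template and its variables are decoupled, one prompt in Prompt Studio can serve many workflows, each supplying different inputs at runtime.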
## Learn more
📄️ Using AI Models or LLMs
Learn how to integrate AI models and use LLM system tasks in workflows, including configuring models, prompt templates, and access control.
📄️ Using Vector Databases
Learn how to integrate vector databases and use them with AI tasks to store and retrieve embeddings in workflows.
📄️ Using AI Prompts
Learn how to create and manage prompt templates with variables and reuse them in AI tasks across workflows.