AGENTIC ENGINEERING

Build a Tiny, Useful AI Agent in Orkes Conductor

Maria Shimkovska
Content Engineer
Last updated: September 10, 2025
6 min read


A hands-on way to see how a simple AI agent actually works in Orkes Conductor.


In Orkes Conductor you can build a simple AI Agent using four tasks: Do While, LLM Chat Complete, HTTP, and Set Variable.
TL;DR
  • An AI Agent = Looping over an LLM + Memory + Tool.
  • Return strict JSON from the LLM and branch on it.
  • Use the SET_VARIABLE task for short-lived memory; no DB needed.

Whenever I’m trying to wrap my head around something technical, I like to strip it down to the simplest version that actually works. Then build up from there.

In this article, I’ll show you a tiny AI agent built with Orkes Conductor. It can make sense of a user’s query, call a tool when it needs extra info, remember what happened along the way, and pull everything together into a final answer.

Keep in mind, this is just a starting point. Use it, find its weaknesses, tweak it, experiment with it, and plug it into your own workflows. The path to incredible agents starts with the smallest working example.

What makes up a basic AI agent?

There’s a lot of noise about what an “agent” is. Here’s the minimal stack most folks agree on (right now):

  1. LLM/SLM: the brain. No AI agent without AI.
  2. Memory: remembers prior steps so it can reason.
  3. Tools: reaches out to external systems when the model alone isn’t enough.

These run in a loop so the agent can try → check → improve. If something doesn’t fully work, it goes back and finds another way.

AI agent building blocks in Orkes Conductor

We’ll use native tasks to assemble the loop:

  1. LLM: LLM_CHAT_COMPLETE drives the agent’s “think/decide” step.
  2. Memory: SET_VARIABLE keeps state in memory for simplicity.
  3. Tools: HTTP calls an external API when the agent decides it needs help.
  4. Loop: DO_WHILE repeats reasoning until the goal is met or a cap is hit.
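
Putting those four pieces together, the workflow skeleton might look something like this in Conductor's JSON definition format. This is a sketch, not a drop-in definition: the task names, reference names, and the iteration cap of 3 are illustrative, and the HTTP and SET_VARIABLE tasks (which live inside the SWITCH's TOOL branch) are omitted for brevity.

```json
{
  "name": "simple_ai_agent",
  "version": 1,
  "inputParameters": ["query"],
  "tasks": [
    {
      "name": "agent_loop",
      "taskReferenceName": "agent_loop_ref",
      "type": "DO_WHILE",
      "loopCondition": "if ($.agent_loop_ref['iteration'] < 3) { true; } else { false; }",
      "loopOver": [
        {
          "name": "llm_decide",
          "taskReferenceName": "llm_decide_ref",
          "type": "LLM_CHAT_COMPLETE"
        },
        {
          "name": "route",
          "taskReferenceName": "route_ref",
          "type": "SWITCH"
        }
      ]
    }
  ]
}
```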

Basic AI Agent workflow in Orkes Conductor Developer Edition

How the SimpleAIAgent Works

The agent is a single mini-loop that:

  1. Reads your query.
  2. Tries to answer directly.
  3. If it's confused about a term, it looks it up with DuckDuckGo's Instant Answer API.
  4. Stores the result in memory.
  5. On the next pass, it uses that tool result to produce a concise final answer.
  6. Stops once it has an answer, or after a few tries.

Design goals:

  • Safe: Hard loop cap and an explicit, short list of allowed tools.
  • Explainable: Memory tracks last action/tool/result.
  • Reusable: Swap in new tools (SQL, calculator, weather, etc.) without changing the structure.

Step 1 — Create the AI Prompt

This is where a big part of the magic happens. So take time to think through your prompts. Remember the old saying, "garbage in, garbage out".

Create a new AI Prompt in Conductor. Set its name, model, and description. For the prompt body, you can use the following:

Good to note:

For the following example, I ask the agent: "What is Orkes Conductor?"

agent_instructions
Input
You are a tiny AI agent inside Orkes Conductor.

Tools you can use:

- "instant_answer": DuckDuckGo Instant Answer API. It may return:
  - Direct fields: Answer, AbstractText, Definition
  - Or disambiguation: Type="D" with RelatedTopics (array of items and/or grouped {Name, Topics})

You must not invent or assume any other tools.

Context

- Query: ${query}
- Iteration: ${iteration}
- Last action: ${last_action}
- Last tool: ${last_tool_name}
- Last tool result (JSON or text): ${last_tool_result}

Conversation (latest last):
${messages}

Decision rules:

1. First, try to directly answer the user’s query.

 - If you can confidently answer without external help, return:
   {"action":"FINAL","answer":"..."}.

2. If you are uncertain, confused about a word/phrase, or the query is unfamiliar/compound:

 - Try the tool once:
   {"action":"TOOL","tool":"instant_answer","input":"<best concise lookup term or phrase>"}.

3. If iteration >= 2 OR last_action == "TOOL":
 - Do NOT call tools again. Use the available context/tool result and return:
   {"action":"FINAL","answer":"..."}.

Rules:

- Prefer Answer/AbstractText/Definition fields. If Type == "D" (disambiguation):
  - Pick the RelatedTopics item that best matches the clarified term and domain.
  - If no clear match, write a concise answer that says the term is ambiguous and ask the user to clarify.
- If last_tool_result already contains the needed info, choose FINAL and use it.
- Never chain multiple tools; at most one TOOL call per query.
- Keep answers concise.

Output one valid JSON object (no extra text):
{
  "action": "TOOL" | "FINAL",
  "tool": "instant_answer" | "",
  "input": "string", // only when action == TOOL
  "answer": "string" // only when action == FINAL
}

Output
{
  "result": {
    "action": "FINAL",
    "answer": "Orkes Conductor is a platform designed for managing and orchestrating microservices and workflows, making it easier to deploy and scale applications in cloud environments."
  },
  "finishReason": "STOP",
  "tokenUsed": 517,
  "promptTokens": 479,
  "completionTokens": 38
}

You can use prompt variables like ${query} so the same instructions work for any input.
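
For instance, the LLM_CHAT_COMPLETE task can map those prompt variables from the workflow input and workflow variables. The provider and model names below are placeholders; substitute whatever is configured in your cluster:

```json
{
  "name": "llm_decide",
  "taskReferenceName": "llm_decide_ref",
  "type": "LLM_CHAT_COMPLETE",
  "inputParameters": {
    "llmProvider": "your_llm_provider",
    "model": "your_model",
    "promptName": "agent_instructions",
    "promptVariables": {
      "query": "${workflow.input.query}",
      "iteration": "${agent_loop_ref.output.iteration}",
      "last_action": "${workflow.variables.last_action}",
      "last_tool_name": "${workflow.variables.last_tool_name}",
      "last_tool_result": "${workflow.variables.last_tool_result}",
      "messages": "${workflow.variables.messages}"
    }
  }
}
```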

Step 2 — Add the tool (HTTP task)

We’ll use DuckDuckGo’s Instant Answer API to resolve simple definitions/terms. The request:

GET https://api.duckduckgo.com/?q=${term}&format=json&no_redirect=1&no_html=1
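
In the workflow, that request becomes an HTTP task. The sketch below assumes the LLM task's parsed JSON decision is exposed as `llm_decide_ref.output.result`, so the lookup term comes from its `input` field:

```json
{
  "name": "instant_answer",
  "taskReferenceName": "instant_answer_ref",
  "type": "HTTP",
  "inputParameters": {
    "http_request": {
      "uri": "https://api.duckduckgo.com/?q=${llm_decide_ref.output.result.input}&format=json&no_redirect=1&no_html=1",
      "method": "GET"
    }
  }
}
```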

Step 3 — Wire the loop (LLM → SWITCH → Tool → Memory → Loop)

Once the prompt and tool are ready, the next step is to wire them together into a reasoning loop.

The workflow begins with the LLM_CHAT_COMPLETE task, which interprets the user’s query and decides whether to provide a final answer or request a tool.

A SWITCH task then routes the flow:

  • If the action is FINAL, the workflow stops with the answer.
  • If it is TOOL, the agent calls the HTTP task to fetch data from DuckDuckGo, stores the result in memory with SET_VARIABLE, and then loops back into the reasoning cycle.

A DO_WHILE operator manages this loop, allowing the agent to repeat the process until it produces a confident final answer or hits the safety cap on iterations.
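
As a sketch (reference names are illustrative, and the TOOL branch's HTTP and SET_VARIABLE tasks are left empty for brevity), the SWITCH can route on the `action` field of the LLM's JSON decision:

```json
{
  "name": "route",
  "taskReferenceName": "route_ref",
  "type": "SWITCH",
  "evaluatorType": "value-param",
  "expression": "switchCaseValue",
  "inputParameters": {
    "switchCaseValue": "${llm_decide_ref.output.result.action}"
  },
  "decisionCases": {
    "TOOL": []
  },
  "defaultCase": []
}
```

The DO_WHILE's loopCondition then provides the safety cap: stop after a fixed number of iterations, or as soon as the action is FINAL.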

Two tiny examples

Try it with these easy queries:

  • “What is 2 + 2?” → LLM returns FINAL without calling a tool.
  • “What is an AI agent?” → LLM likely routes to TOOL to get some help defining the term, before routing back to FINAL.

Basic AI Agent workflow in Orkes Conductor Developer Edition

From “tiny” to “any” agent: extend the pattern

The same pattern (LLM → SWITCH → Tool → Memory) lets you assemble almost any agent:

  • Research / Retrieval agent: add a wiki tool, a search tool, and a vector-DB lookup. Route by tool_name.
  • Ops/RPA agent: swap HTTP for APIs you already use (Zendesk, Jira, Slack, Stripe). Wrap side-effects in their own branches.
  • Analytics agent: add a SQL tool (HTTP to your gateway), then have the LLM generate queries and summarize results.
  • Support triage agent: classify (FINAL), or fetch FAQs/KB (TOOL), then answer; escalate with a HUMAN task if confidence is low.
  • Weather/geo agent: another tool (e.g., weather_api) and the same DO_WHILE loop.
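
For example, a weather tool is just another HTTP task dropped into the same TOOL branch. The endpoint below is hypothetical; the point is that only the tool call changes, not the loop:

```json
{
  "name": "weather_api",
  "taskReferenceName": "weather_api_ref",
  "type": "HTTP",
  "inputParameters": {
    "http_request": {
      "uri": "https://api.example.com/v1/weather?city=${llm_decide_ref.output.result.input}",
      "method": "GET"
    }
  }
}
```

The prompt's tool list and the SWITCH's decision cases are the only other places that need to know the new tool exists.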

Wrap-up

You now have a minimal, working agent in Orkes Conductor: an LLM that can think, decide to use a tool, remember what happened, and converge on a final answer safely. Start tiny, then iterate your way to powerful, specialized agents.