
Build a Tiny, Useful AI Agent in Orkes Conductor

Maria Shimkovska
Content Engineer
September 10, 2025
6 min read

A hands-on way to see how a simple AI agent actually works in Orkes Conductor.


In Orkes Conductor you can build a simple AI agent using four tasks: Do While, LLM Chat Complete, HTTP, and Set Variable.
TL;DR
  • An AI Agent = Looping over an LLM + Memory + Tool.
  • Return strict JSON from the LLM and branch on it.
  • Use the SET_VARIABLE task for short-lived memory; no DB needed.

Whenever I’m trying to wrap my head around something technical, I like to strip it down to the simplest version that actually works. Then build up from there.

In this article, I’ll show you a tiny AI agent built with Orkes Conductor. It can make sense of a user’s query, call a tool when it needs extra info, remember what happened along the way, and pull everything together into a final answer.

Keep in mind, this is just a starting point. Use it, find its weaknesses, tweak it, experiment with it, and plug it into your own workflows. The path to incredible agents starts with the smallest working example.

What makes up a basic AI agent?

There’s a lot of noise about what an “agent” is. Here’s the minimal stack most folks agree on (right now):

  1. LLM/SLM: the brain. No AI agent without AI.
  2. Memory: remembers prior steps so it can reason.
  3. Tools: reaches out to external systems when the model alone isn’t enough.

These run in a loop so the agent can try → check → improve. If something doesn’t fully work, it goes back and finds another way.

AI agent building blocks in Orkes Conductor

We’ll use native tasks to assemble the loop:

  1. LLM: LLM_CHAT_COMPLETE drives the agent’s “think/decide” step.
  2. Memory: SET_VARIABLE keeps state in memory for simplicity.
  3. Tools: HTTP calls an external API when the agent decides it needs help.
  4. Loop: DO_WHILE repeats reasoning until the goal is met or a cap is hit.
Basic AI Agent workflow in Orkes Conductor Developer Edition
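
Before building each piece, it helps to see the overall shape. Here is a trimmed sketch of the workflow JSON; the task and reference names (simple_ai_agent, agent_loop, agent_think, route_action) are illustrative, and the details get filled in over the next steps:

{
  "name": "simple_ai_agent",
  "version": 1,
  "inputParameters": ["query"],
  "tasks": [
    {
      "name": "agent_loop",
      "taskReferenceName": "agent_loop",
      "type": "DO_WHILE",
      "loopCondition": "... (see Step 3)",
      "loopOver": [
        { "name": "agent_think", "taskReferenceName": "agent_think", "type": "LLM_CHAT_COMPLETE" },
        { "name": "route_action", "taskReferenceName": "route_action", "type": "SWITCH" }
      ]
    }
  ]
}

The HTTP and SET_VARIABLE tasks sit inside the SWITCH task's TOOL branch, so all four building blocks run inside the loop.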

How the SimpleAIAgent Works

A single mini-loop that:

  1. Reads your query.
  2. Tries to answer directly.
  3. If it's confused about a term, it looks it up with DuckDuckGo's Instant Answer API.
  4. Stores the result in memory.
  5. On the next pass, it uses that tool result to produce a concise final answer.
  6. Stops once it has an answer, or after a few tries.

Design goals:

  • Safe: Hard loop cap and an explicit, short list of allowed tools.
  • Explainable: Memory tracks last action/tool/result.
  • Reusable: Swap in new tools (SQL, calculator, weather, etc.) without changing the structure.

Step 1 — Create the AI Prompt

This is where a big part of the magic happens. So take time to think through your prompts. Remember the old saying, "garbage in, garbage out".

Create a new AI Prompt in Conductor. Give it a name, model, and description. For the prompt, you can use the following:

Good to Note:

In the following example, I ask the agent: What is Orkes Conductor?

agent_instructions
You are a tiny AI agent inside Orkes Conductor.

Tools you can use:
- "instant_answer": DuckDuckGo Instant Answer API. It may return:
  - Direct fields: Answer, AbstractText, Definition
  - Or disambiguation: Type="D" with RelatedTopics (array of items and/or grouped {Name, Topics})

You must not invent or assume any other tools.

Context
- Query: ${query}
- Iteration: ${iteration}
- Last action: ${last_action}
- Last tool: ${last_tool_name}
- Last tool result (JSON or text): ${last_tool_result}

Conversation (latest last):
${messages}

Decision rules:
1) First, try to directly answer the user’s query.
   - If you can confidently answer without external help, return:
     {"action":"FINAL","answer":"..."}.

2) If you are uncertain, confused about a word/phrase, or the query is unfamiliar/compound:
   - Try the tool once:
     {"action":"TOOL","tool":"instant_answer","input":"<best concise lookup term or phrase>"}.

3) If iteration >= 2 OR last_action == "TOOL":
   - Do NOT call tools again. Use the available context/tool result and return:
     {"action":"FINAL","answer":"..."}.

Rules:
- Prefer Answer/AbstractText/Definition fields. If Type == "D" (disambiguation):
  • Pick the RelatedTopics item that best matches the clarified term and domain.
  • If no clear match, write a concise answer that says the term is ambiguous and ask the user to clarify.
- If last_tool_result already contains the needed info, choose FINAL and use it.
- Never chain multiple tools; at most one TOOL call per query.
- Keep answers concise.

Output one valid JSON object (no extra text):
{
  "action": "TOOL" | "FINAL",
  "tool": "instant_answer" | "",
  "input": "string",           // only when action == TOOL
  "answer": "string"           // only when action == FINAL
}

For that query, the LLM task returns a response like this:

{
  "result": {
    "action": "FINAL",
    "answer": "Orkes Conductor is a platform designed for managing and orchestrating microservices and workflows, making it easier to deploy and scale applications in cloud environments."
  },
  "finishReason": "STOP",
  "tokenUsed": 517,
  "promptTokens": 479,
  "completionTokens": 38
}

You can use prompt variables like ${query} so the same instructions work for any input.
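
For reference, here is roughly what the LLM_CHAT_COMPLETE task definition looks like with those variables bound. Exact parameter names can vary by Conductor version (check the task reference docs), and the llmProvider and model values are placeholders for whatever integration you have configured:

{
  "name": "agent_think",
  "taskReferenceName": "agent_think",
  "type": "LLM_CHAT_COMPLETE",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4o-mini",
    "instructions": "agent_instructions",
    "promptVariables": {
      "query": "${workflow.input.query}",
      "iteration": "${agent_loop.output.iteration}",
      "last_action": "${workflow.variables.last_action}",
      "last_tool_name": "${workflow.variables.last_tool_name}",
      "last_tool_result": "${workflow.variables.last_tool_result}",
      "messages": "${workflow.variables.messages}"
    }
  }
}

Here, instructions points at the saved agent_instructions prompt by name, and the memory variables come from the SET_VARIABLE task described in Step 3.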

Step 2 — Add the tool (HTTP task)

We’ll use DuckDuckGo’s Instant Answer API to resolve simple definitions/terms. The request:

GET https://api.duckduckgo.com/?q=${term}&format=json&no_redirect=1&no_html=1
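
In the workflow, the tool is just an HTTP task. A minimal sketch, assuming the LLM task's reference name is agent_think and its JSON output carries the lookup term in result.input:

{
  "name": "instant_answer",
  "taskReferenceName": "instant_answer_ref",
  "type": "HTTP",
  "inputParameters": {
    "http_request": {
      "uri": "https://api.duckduckgo.com/?q=${agent_think.output.result.input}&format=json&no_redirect=1&no_html=1",
      "method": "GET"
    }
  }
}

The API's JSON lands in ${instant_answer_ref.output.response.body}, which is what we'll stash in memory next.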

Step 3 — Wire the loop (LLM → SWITCH → Tool → Memory → Loop)

Once the prompt and tool are ready, the next step is to wire them together into a reasoning loop.

The workflow begins with the LLM_CHAT_COMPLETE task, which interprets the user’s query and decides whether to provide a final answer or request a tool.

A SWITCH task then routes the flow:

  • If the action is FINAL, the workflow stops and returns the answer.
  • If it is TOOL, the agent calls the HTTP task to fetch data from DuckDuckGo, stores the result in memory with SET_VARIABLE, and then loops back into the reasoning cycle.

A DO_WHILE operator manages this loop, allowing the agent to repeat the process until it produces a confident final answer or hits the safety cap on iterations.
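
Sketched in JSON, the routing and memory pieces might look like this (reference names are illustrative, and the loop condition assumes a cap of three iterations):

{
  "name": "route_action",
  "taskReferenceName": "route_action",
  "type": "SWITCH",
  "evaluatorType": "value-param",
  "expression": "switchCaseValue",
  "inputParameters": {
    "switchCaseValue": "${agent_think.output.result.action}"
  },
  "decisionCases": {
    "TOOL": [
      { "taskReferenceName": "instant_answer_ref", "type": "HTTP" },
      {
        "name": "remember",
        "taskReferenceName": "remember",
        "type": "SET_VARIABLE",
        "inputParameters": {
          "last_action": "TOOL",
          "last_tool_name": "instant_answer",
          "last_tool_result": "${instant_answer_ref.output.response.body}"
        }
      }
    ]
  },
  "defaultCase": []
}

And the DO_WHILE condition that keeps the loop honest:

"loopCondition": "if ($.agent_loop['iteration'] < 3 && $.agent_think['result']['action'] != 'FINAL') { true; } else { false; }"

Because SET_VARIABLE writes workflow variables, the next pass of the LLM task can read them back with ${workflow.variables.last_tool_result} and friends.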

Two tiny examples

Here's what happens with two easy queries:

  • “What is 2 + 2?” → LLM returns FINAL without calling a tool.
  • “What is an AI agent?” → LLM likely routes to TOOL to get some help defining the term, before routing back to FINAL.
Basic AI Agent workflow in Orkes Conductor Developer Edition

From “tiny” to “any” agent: extend the pattern

The same pattern (LLM → SWITCH → Tool → Memory) lets you assemble almost any agent:

  • Research / Retrieval agent: add a wiki tool, a search tool, and a vector-DB lookup. Route by tool_name.
  • Ops/RPA agent: swap HTTP for APIs you already use (Zendesk, Jira, Slack, Stripe). Wrap side-effects in their own branches.
  • Analytics agent: add a SQL tool (HTTP to your gateway), then have the LLM generate queries and summarize results.
  • Support triage agent: classify (FINAL), or fetch FAQs/KB (TOOL), then answer; escalate with a HUMAN task if confidence is low.
  • Weather/geo agent: another tool (e.g., weather_api) and the same DO_WHILE loop.
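
For example, supporting more than one tool is mostly a matter of teaching the prompt about it and branching on the tool name. A sketch with a hypothetical weather_api tool added alongside the existing one:

"decisionCases": {
  "instant_answer": [ { "taskReferenceName": "instant_answer_ref", "type": "HTTP" } ],
  "weather_api":    [ { "taskReferenceName": "weather_api_ref", "type": "HTTP" } ]
}

Here the SWITCH evaluates ${agent_think.output.result.tool} instead of the action, so each tool gets its own branch without touching the rest of the loop.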

Wrap-up

You now have a minimal, working agent in Orkes Conductor: an LLM that can think, decide to use a tool, remember what happened, and converge on a final answer safely. Start tiny, then iterate your way to powerful, specialized agents.
