A hands-on way to see how a simple AI agent actually works in Orkes Conductor.
Whenever I’m trying to wrap my head around something technical, I like to strip it down to the simplest version that actually works. Then build up from there.
In this article, I’ll show you a tiny AI agent built with Orkes Conductor. It can make sense of a user’s query, call a tool when it needs extra info, remember what happened along the way, and pull everything together into a final answer.
Keep in mind, this is just a starting point. Use it, find its weaknesses, tweak it, experiment with it, and plug it into your own workflows. The path to incredible agents starts with the smallest working example.
There’s a lot of noise about what an “agent” is. Here’s the minimal stack most folks agree on (right now):

- **Reasoning**: an LLM that interprets the query and decides what to do next
- **Tools**: external calls for information the model doesn’t have
- **Memory**: a record of what happened along the way
- **A loop**: repeat until there’s a confident final answer
These run in a loop so the agent can try → check → improve. If something doesn’t fully work, it goes back and finds another way.
We’ll use native tasks to assemble the loop:

- **LLM_CHAT_COMPLETE** to interpret the query and decide the next action
- **SWITCH** to route between answering and calling a tool
- **HTTP** to call the tool (DuckDuckGo’s Instant Answer API)
- **DO_WHILE** to repeat until a final answer or the iteration cap
A single mini-loop that:

- makes sense of the user’s query
- decides whether to answer directly or call a tool
- remembers each step’s result
- converges on a final answer
Design goals:

- Keep it as small as possible while still being a real agent
- At most one tool call per query; no tool chaining
- A hard cap on iterations so the loop always terminates
- Easy to swap in new tools later
This is where a big part of the magic happens. So take time to think through your prompts. Remember the old saying, "garbage in, garbage out".
Create a new AI Prompt in Conductor. Set the name, model, and description. For the prompt itself, you can use the following:
In the examples that follow, I ask the agent: “What is Orkes Conductor?”
```
You are a tiny AI agent inside Orkes Conductor.

Tools you can use:
- "instant_answer": DuckDuckGo Instant Answer API. It may return:
  - Direct fields: Answer, AbstractText, Definition
  - Or disambiguation: Type="D" with RelatedTopics (array of items and/or grouped {Name, Topics})
You must not invent or assume any other tools.

Context
- Query: ${query}
- Iteration: ${iteration}
- Last action: ${last_action}
- Last tool: ${last_tool_name}
- Last tool result (JSON or text): ${last_tool_result}

Conversation (latest last):
${messages}

Decision rules:
1) First, try to directly answer the user’s query.
   - If you can confidently answer without external help, return:
     {"action":"FINAL","answer":"..."}.
2) If you are uncertain, confused about a word/phrase, or the query is unfamiliar/compound:
   - Try the tool once:
     {"action":"TOOL","tool":"instant_answer","input":"<best concise lookup term or phrase>"}.
3) If iteration >= 2 OR last_action == "TOOL":
   - Do NOT call tools again. Use the available context/tool result and return:
     {"action":"FINAL","answer":"..."}.

Rules:
- Prefer Answer/AbstractText/Definition fields. If Type == "D" (disambiguation):
  • Pick the RelatedTopics item that best matches the clarified term and domain.
  • If no clear match, write a concise answer that says the term is ambiguous and ask the user to clarify.
- If last_tool_result already contains the needed info, choose FINAL and use it.
- Never chain multiple tools; at most one TOOL call per query.
- Keep answers concise.

Output one valid JSON object (no extra text):
{
  "action": "TOOL" | "FINAL",
  "tool": "instant_answer" | "",
  "input": "string",  // only when action == TOOL
  "answer": "string"  // only when action == FINAL
}
```
```json
{
  "result": {
    "action": "FINAL",
    "answer": "Orkes Conductor is a platform designed for managing and orchestrating microservices and workflows, making it easier to deploy and scale applications in cloud environments."
  },
  "finishReason": "STOP",
  "tokenUsed": 517,
  "promptTokens": 479,
  "completionTokens": 38
}
```
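Downstream tasks branch on this JSON decision, so it pays to validate it before routing. Here's a minimal sketch in plain Python (not Conductor code; `parse_decision` is a name I made up, and the field names follow the prompt's output schema):

```python
import json

def parse_decision(raw: str) -> dict:
    """Validate the agent's JSON decision against the prompt's schema."""
    decision = json.loads(raw)
    action = decision.get("action")
    if action not in ("TOOL", "FINAL"):
        raise ValueError(f"unexpected action: {action!r}")
    if action == "TOOL" and not decision.get("input"):
        raise ValueError("TOOL decisions need an 'input' lookup term")
    if action == "FINAL" and not decision.get("answer"):
        raise ValueError("FINAL decisions need an 'answer'")
    return decision

# Example: the kind of decision returned for "What is Orkes Conductor?"
raw = '{"action": "FINAL", "answer": "Orkes Conductor is a platform ..."}'
print(parse_decision(raw)["action"])  # prints "FINAL"
```

In Conductor itself you'd do the equivalent with a JSON JQ transform or inside the SWITCH task's expression; the point is simply to reject malformed model output before it drives the loop.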
You can use prompt variables like `${query}` so the same instructions work for any input.
We’ll use DuckDuckGo’s Instant Answer API to resolve simple definitions/terms. The request:
```
GET https://api.duckduckgo.com/?q=${term}&format=json&no_redirect=1&no_html=1
```
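In Conductor this is just an HTTP task, but here's a quick Python sketch of the same request and of pulling out an answer using the field-preference order the prompt specifies (Answer → AbstractText → Definition). The helper names are mine:

```python
import json
import urllib.parse
import urllib.request

def build_url(term: str) -> str:
    """Build the Instant Answer request for a search term."""
    params = urllib.parse.urlencode(
        {"q": term, "format": "json", "no_redirect": 1, "no_html": 1}
    )
    return f"https://api.duckduckgo.com/?{params}"

def extract_answer(payload: dict) -> str:
    """Prefer Answer, then AbstractText, then Definition, as the prompt instructs."""
    for field in ("Answer", "AbstractText", "Definition"):
        if payload.get(field):
            return payload[field]
    if payload.get("Type") == "D":  # disambiguation page, no direct answer
        return "ambiguous term; ask the user to clarify"
    return ""

# Live call (requires network access):
# with urllib.request.urlopen(build_url("Orkes Conductor")) as resp:
#     print(extract_answer(json.load(resp)))
```

The API needs no key, which is exactly why it makes a good first tool for a toy agent.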
Once the prompt and tool are ready, the next step is to wire them together into a reasoning loop.
The workflow begins with the LLM_CHAT_COMPLETE task, which interprets the user’s query and decides whether to provide a final answer or request a tool.
A SWITCH task then routes the flow: if the model returns `FINAL`, the workflow exits with the answer; if it returns `TOOL`, an HTTP task calls the Instant Answer API and the result is written back into the agent’s memory for the next pass.
A DO_WHILE operator manages this loop, allowing the agent to repeat the process until it produces a confident final answer or hits the safety cap on iterations.
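To make the control flow concrete, here's the DO_WHILE / SWITCH logic sketched as plain Python — a simulation of the loop semantics, not Conductor code. `decide` and `call_tool` stand in for the LLM_CHAT_COMPLETE and HTTP tasks, and the state keys mirror the prompt variables:

```python
MAX_ITERATIONS = 3  # the safety cap on the DO_WHILE loop

def run_agent(query, decide, call_tool):
    """decide(state) -> {"action": ...}; call_tool(tool, input) -> result."""
    state = {"query": query, "iteration": 0, "last_action": "",
             "last_tool_name": "", "last_tool_result": "", "messages": []}
    while state["iteration"] < MAX_ITERATIONS:     # DO_WHILE
        state["iteration"] += 1
        decision = decide(state)                   # LLM_CHAT_COMPLETE
        if decision["action"] == "FINAL":          # SWITCH: FINAL branch
            return decision["answer"]
        # SWITCH: TOOL branch -> call the tool, remember the result, loop
        result = call_tool(decision["tool"], decision["input"])
        state.update(last_action="TOOL",
                     last_tool_name=decision["tool"],
                     last_tool_result=result)
        state["messages"].append({"tool": decision["tool"], "result": result})
    return "Sorry, I couldn't produce a confident answer."

# A stub decide() that asks for the tool once, then finalizes with its result,
# exercises both branches of the SWITCH.
```

Notice how the prompt's rule 3 ("if last_action == TOOL, do NOT call tools again") cooperates with the iteration cap: either one alone would stop runaway loops, and together they guarantee termination.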
Try it with a couple of easy queries:

- For a query the model can answer on its own, the agent returns `FINAL` without calling a tool.
- For an unfamiliar term, it chooses `TOOL` to get some help defining the term, before routing back to `FINAL`.
The same pattern (LLM → SWITCH → Tool → Memory) lets you assemble almost any agent:

- Add a `wiki` tool, a `search` tool, and a vector-DB lookup. Route by `tool_name`.
- Use `HTTP` tasks for APIs you already use (Zendesk, Jira, Slack, Stripe). Wrap side-effects in their own branches.
- Swap in a domain-specific tool (e.g. `weather_api`) and keep the same DO_WHILE loop.

You now have a minimal, working agent in Orkes Conductor: an LLM that can think, decide to use a tool, remember what happened, and converge on a final answer safely. Start tiny, then iterate your way to powerful, specialized agents.