
ENGINEERING & TECHNOLOGY

Tame Your Agent: Build a Secure, Contained Customer Success AI Agent

Maria Shimkovska
Content Engineer
June 17, 2025
5 min read

Got AI trust issues? Me too. So let’s fix that in this visual step-by-step guide.

What Can Go Wrong With AI Agents

Look, I love a good surprise as much as the next person, but not the kind where my AI agent cancels a customer subscription just to "resolve the issue faster."

AI agents are making decisions, calling APIs, and triggering real-world actions. Exciting and scary.

And what if they open a support ticket twice or pull the wrong customer's data? Or if they take a series of actions you didn’t anticipate or approve?

So the question is: can we build an AI agent that’s smart, helpful, and still plays by the rules YOU set?

Yes. Yes we can.

Build a Safer AI Agent in Minutes

This build-along guide shows you how easy it is to create a smart and secure AI agent using Conductor.

AI is unlocking powerful new ways to automate and solve tasks intelligently.

But with great power comes ... you guessed it, serious security concerns.

We’ll walk step-by-step through building a Customer Success AI Agent, complete with guardrails. Basically, you stay in control while it does the work.

You’ll be surprised how fast this comes together by just clicking through the Conductor UI.

What You’ll Get:

  • A video walkthrough so you can jump right in
  • A hands-on, visual guide to building your AI Agent
  • A quick breakdown of AI Agents vs. Agentic Workflows

By the end of this post, you’ll have a working AI agent with decision logic, memory, guardrails, and actions, all wrapped inside a structured and visual workflow.

Not a fan of reading? Hit play. Prefer to go step-by-step? Scroll on.

So let's build something powerful and trustworthy.

💻 Let’s Build It: The “CustomerSuccessAgent” Workflow

In this demo, I’ll show you how to build a simple AI agent using the Conductor UI, no coding required. If you follow along, you’ll have the basic framework for a customer success agent that you can keep building onto.

Workflow Overview

Here are some foundational details about the workflow:

  • Workflow Name: "CustomerSuccessAgent"
  • Purpose: Create an AI agent that tries to resolve a specific customer's issue by itself, or escalate to a human if it can't.

The Workflow's Structure

This workflow is made of multiple connected tasks, each designed to do one thing well.

Illustration of the overview of the workflow we will build

Illustration of the Customer Success Agent workflow

Prerequisites (if you want to follow along)

You'll need:

  • A free Orkes Developer Playground account
  • A Cohere API key (or a key for another supported LLM provider)

That's it!

🧑‍💻 Step-by-Step: Build the Workflow in Orkes

1. Create a New Workflow

  1. Log in to the Developer Playground.
  2. In Launch Pad, select + Start from Scratch.
  3. Name the workflow CustomerSuccessAgent.
  4. Add a description if you want.

GIF of starting a new workflow through the From Scratch button and then naming the workflow.

Create new workflow

2. Add a SET_VARIABLE Task

  1. Click on the + sign under Start to create your first task.
  2. Select SET_VARIABLE as your first task.
  3. Set the variable key to memory, the type to Object/Array, and the value to [].
  4. Click Save.

GIF of creating a new task

Add SET_VARIABLE task
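
If you peek at the workflow’s code view, the task you just added corresponds roughly to this JSON in the workflow definition (a sketch; the task name and reference name are whatever you accepted in the UI):

```json
{
  "name": "set_variable",
  "taskReferenceName": "set_variable_ref",
  "type": "SET_VARIABLE",
  "inputParameters": {
    "memory": []
  }
}
```

This initializes memory as an empty list; the agent will add its responses and action results here on every pass through the loop.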

3. Create a DO_WHILE Loop

  1. Click the + sign under the set_variable task to create the next one.
  2. Search for and select the DO_WHILE operator task.
  3. Script params: key=number, value=5, type=Number.
  4. For Loop conditions, select ECMASCRIPT.
  5. Add the following code in the Code section to keep the loop running until an external condition is met. In our case, that means until we reach the configured max iteration limit (5).
(function () {
  return true;
})();
  6. Unselect No Limits and set the No. of iterations to keep to 5.

GIF of creating a DO_WHILE loop.

Create a DO_WHILE loop

Do not save the workflow yet. You need to create the first task inside the DO_WHILE loop to be able to save it.
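
For reference, the loop you just configured looks roughly like this in the underlying JSON (a sketch; field names are approximate, and the loopOver list is still empty, which is exactly why Conductor won’t let you save yet):

```json
{
  "name": "do_while",
  "taskReferenceName": "do_while_ref",
  "type": "DO_WHILE",
  "inputParameters": {
    "number": 5
  },
  "loopCondition": "(function () { return true; })();",
  "loopOver": []
}
```

The tasks you add in the next steps all land inside loopOver.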

4. Make the first task in the DO_WHILE loop

  1. Click the + button in the do_while component.
  2. Search and select LLM_TEXT_COMPLETE.
  3. Click Save.

GIF of creating a task inside a DO_WHILE loop

Create first task inside DO_WHILE loop — an LLM_TEXT_COMPLETE task

5. Configure the LLM_TEXT_COMPLETE Task

You need to create an LLM integration for the LLM Provider and Model fields.

  1. On the left-hand side, click Integrations.
    1. Select + New integration.
    2. Choose your AI/LLM integration; in our case, pick Cohere (check out how to set up the integration here).
    3. Name your integration: CustomerSuccessLLM.
    4. API Key: <YOUR_COHERE_API_KEY>. You can get this from your Cohere dashboard, under API keys > Trial Keys.
    5. API Endpoint: https://api.cohere.ai/v1 (this is the default API endpoint for Cohere).
    6. Add a description for your LLM integration: Customer Success Agent Cohere Integration.
    7. Click Save to save the integration setup.

Cohere LLM Integration Setup


  2. Now that the integration is set up, pick a model.
    1. Under Actions, click the + button for Add/Edit models.
    2. Click New model.
    3. Set the Model name. It has to match one of Cohere's models exactly (see the complete list of Cohere models); we'll use command-a-03-2025.
    4. Click Save.

Cohere Model Setup


  3. Go back to your Workflow Definition, where you're building your workflow.
    • To get there, click Workflow (under Definitions on the left-hand side) and pick your workflow. Then click the LLM_TEXT_COMPLETE task in the UI to select it.
  4. Set the LLM provider to CustomerSuccessLLM (the one you just set up in Integrations).
  5. Choose the model you set up for the same integration: command-a-03-2025.

Set up LLM provider and Model fields in the LLM_TEXT_COMPLETE task

Set up LLM provider and Model fields

6. Configure the LLM Prompt Template

Now it's time to set up the Prompt template so your LLM knows what to do.

  1. Go to AI Prompts under Definitions on the left-hand side.
  2. Select Add AI Prompt.
  3. Set the Prompt name to CustomerSuccessPrompt.
  4. Choose your Model: CustomerSuccessLLM:command-a-03-2025.
  5. Description: Customer Success Agent Demo.
  6. Time for the prompt! Write a prompt template to explain to your LLM what to do. You can make your own or copy mine.

This template helps your LLM act like a customer success agent, using past info to decide the best next step from a list of actions to keep the customer happy.

You are a customer success agent for a Software-as-a-Service company, handling the relationship with a particular customer. Your goal is to keep the customer happy.

This is the customer information: ${CUSTOMER_INFO}.

Here is everything that has been done so far: ${CONTEXT}. It contains a list of your previous responses, as well as the results of some actions you have taken.

The only actions you might take multiple times are `WAIT` and `ESCALATE`. Each other action should be taken AT MOST ONCE.  If an action responds with dummy data, try taking a different action instead.

The actions you have available to you are:
- `ESCALATE`: If you have low confidence on your next action or get lost, return the message `ESCALATE` to ask a human for help.
- `GET_HUBSPOT_DATA`: You can get more information about a customer by returning the string `GET_HUBSPOT_DATA` on a line by itself. In reality this will let you call a variety of hubspot APIs, but for the purposes of this demo it does not actually work.
- `GET_SLACK_HISTORY`: Get the 100 most recent messages from the Slack channel with this customer.
- `OPEN_ZENDESK_TICKET`: Open a support ticket for the customer. This action should ONLY EVER BE CALLED ONCE PER WORKFLOW. If you are thinking about opening another Zendesk ticket, ESCALATE INSTEAD.

AI Prompt Template setup


Back in your workflow, in the LLM_TEXT_COMPLETE task, you can now set the Prompt template field to your new prompt: CustomerSuccessPrompt.

  • Set CUSTOMER_INFO value to ${workflow.input} and the type to string
  • Set CONTEXT value to ${workflow.variables.memory} and the type to string

Set up Prompt template

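Putting steps 5 and 6 together, the LLM_TEXT_COMPLETE task should now look roughly like this in the workflow JSON (a sketch; the exact parameter names may differ slightly in your Conductor version, and the reference name is whatever the UI assigned):

```json
{
  "name": "llm_text_complete",
  "taskReferenceName": "llm_text_complete_ref",
  "type": "LLM_TEXT_COMPLETE",
  "inputParameters": {
    "llmProvider": "CustomerSuccessLLM",
    "model": "command-a-03-2025",
    "promptName": "CustomerSuccessPrompt",
    "promptVariables": {
      "CUSTOMER_INFO": "${workflow.input}",
      "CONTEXT": "${workflow.variables.memory}"
    }
  }
}
```

On every loop iteration, the prompt is re-rendered with the latest memory, which is how the agent “remembers” what it has already tried.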

7. Create a new SET_VARIABLE task

  1. Click on the + sign under LLM_TEXT_COMPLETE to create a new task, still inside the DO_WHILE loop.
  2. Search for and choose the SET_VARIABLE task.
  3. Set up 2 variables: memory and _merge.
    • For memory, set the type as object/array and value ["${llm_text_complete_ref.output.result}"]
    • For _merge, set the type as boolean and the value checked
  4. Click Save

Create a new SET_VARIABLE task

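In JSON terms, this second SET_VARIABLE task is roughly the following (a sketch; naming follows the UI defaults):

```json
{
  "name": "set_variable_1",
  "taskReferenceName": "set_variable_1_ref",
  "type": "SET_VARIABLE",
  "inputParameters": {
    "memory": ["${llm_text_complete_ref.output.result}"],
    "_merge": true
  }
}
```

With _merge checked, each pass through the loop adds the newest LLM response to the running memory history rather than overwriting it.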

8. Create a SWITCH task

  1. Click on the + sign under set_variable_1, still inside the DO_WHILE loop
  2. Search for and select a SWITCH task
  3. In Script params, add the value ${llm_text_complete_ref.output.result} to the switchCaseValue key.
  4. Add the following switch cases:
    • Key: GET_HUBSPOT_DATA
    • Key: GET_SLACK_HISTORY
    • Key: OPEN_ZENDESK_TICKET
  5. Add the following code in the ECMASCRIPT code section to route to the appropriate switch case:
(function () {
    if ($.switchCaseValue.includes("ESCALATE")) {
      return "ESCALATE";
    }

    let actions = ["GET_HUBSPOT_DATA", "GET_SLACK_HISTORY", "OPEN_ZENDESK_TICKET"];

    for (const action of actions) {
      if ($.switchCaseValue.includes(action)) {
        return action;
      }
    }

    return "ESCALATE";
  }())
  6. Time to Save.

Create a SWITCH task and add switch cases

Create a SWITCH task

This is what you should have so far:

Switch Tasks

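Behind the UI, the SWITCH task looks roughly like this (a sketch; the evaluator field may read slightly differently in your setup, the expression is the routing script from step 8, and the case arrays get filled in by the tasks you add next):

```json
{
  "name": "switch",
  "taskReferenceName": "switch_ref",
  "type": "SWITCH",
  "evaluatorType": "javascript",
  "inputParameters": {
    "switchCaseValue": "${llm_text_complete_ref.output.result}"
  },
  "expression": "(function () { /* routing script from step 8 */ }())",
  "decisionCases": {
    "GET_HUBSPOT_DATA": [],
    "GET_SLACK_HISTORY": [],
    "OPEN_ZENDESK_TICKET": []
  },
  "defaultCase": []
}
```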

9. Set the Default Case to a HUMAN Task

  1. To set up a task for a switch case, click on the + sign under the SWITCH task, leading to the defaultCase.
  2. Search for and select HUMAN task
  3. Click Save.

Set up a HUMAN task, for the default switch case

Set up HUMAN task for defaultCase
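
The default case therefore ends in a HUMAN task, whose definition is about as simple as it gets (the names here are placeholders):

```json
{
  "name": "escalate_to_human",
  "taskReferenceName": "escalate_to_human_ref",
  "type": "HUMAN"
}
```

A HUMAN task pauses the workflow until a person completes it, so whenever the LLM returns ESCALATE (or anything unrecognized), the agent stops and waits for human review instead of improvising.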

10. Set up an HTTP task for OPEN_ZENDESK_TICKET

For OPEN_ZENDESK_TICKET, you can set up an HTTP task, which (when configured with your Zendesk API credentials) will create the ticket for you from the workflow. To keep this demo simple, I'm not wiring it up here, but you get the idea: you can use the HTTP task to interact with external APIs. 🤗

Set up an HTTP task, for the OPEN_ZENDESK_TICKET switch case

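If you did want to wire it up, the HTTP task for this case might look something like the sketch below. Everything here is illustrative: the Zendesk subdomain, the auth header, and the secret name are placeholders you'd adapt to your own account.

```json
{
  "name": "open_zendesk_ticket",
  "taskReferenceName": "open_zendesk_ticket_ref",
  "type": "HTTP",
  "inputParameters": {
    "http_request": {
      "uri": "https://<your-subdomain>.zendesk.com/api/v2/tickets.json",
      "method": "POST",
      "headers": {
        "Authorization": "Bearer ${workflow.secrets.zendesk_token}"
      },
      "body": {
        "ticket": {
          "subject": "Customer success follow-up",
          "comment": { "body": "${llm_text_complete_ref.output.result}" }
        }
      }
    }
  }
}
```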

11. Set up HTTP task for GET_SLACK_HISTORY and save the output into a variable

You can use an HTTP task here to fetch your Slack history, then set up a follow-up SET_VARIABLE task to save the HTTP task's output so it can be used later in the workflow.

Here is what this would look like:

Set up an HTTP and SET_VARIABLE tasks, for the GET_SLACK_HISTORY switch case

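The interesting part is the second task: the SET_VARIABLE task folds the HTTP response back into memory so the next LLM call can see it. Roughly (assuming you named the HTTP task's reference get_slack_history_ref):

```json
{
  "name": "save_slack_history",
  "taskReferenceName": "save_slack_history_ref",
  "type": "SET_VARIABLE",
  "inputParameters": {
    "memory": ["${get_slack_history_ref.output.response.body}"],
    "_merge": true
  }
}
```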

12. Set up HTTP task for GET_HUBSPOT_DATA and save the output into a variable

You can follow the same steps as step 11 to set up an HTTP task that fetches your HubSpot data, then use a SET_VARIABLE task to isolate the relevant output so it can be used later in the workflow.

The final output for the entire workflow should look like this:

A screenshot of what the final workflow looks like with all the switch cases set up

Final workflow screenshot

Wow, all done! You now have yourself a Customer Success AI Agent.

AI Agents + Security: Why It Matters Now More Than Ever

Sure, AI makes things faster and smarter. But once it starts taking real actions? The stakes go way up.

You have to ask:

  • What can it see?
  • What can it do?
  • Where are the guardrails?

Without constraints, AI can expose sensitive data or take actions it shouldn’t. That’s a hard pass, especially in industries like finance, healthcare, or customer support.

Good news. You don’t have to choose between power and control. You just have to find the right balance.

With Orkes, you can build AI agents that are smarter and safer, with guardrails like:

  • Timeouts that stop agents from running too long
  • Input/output checks to make sure data stays clean
  • Audit logs so you always know what your AI agent is doing
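
Timeouts, for example, are just fields on the task definition. Here's a sketch of capping the LLM call (the values are illustrative, not recommendations):

```json
{
  "name": "llm_text_complete",
  "retryCount": 1,
  "retryLogic": "FIXED",
  "timeoutSeconds": 120,
  "responseTimeoutSeconds": 60,
  "timeoutPolicy": "TIME_OUT_WF"
}
```

If the task runs past its timeout, the workflow times out instead of letting the agent keep going unattended.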

Bottom line: Security in building AI agents is paramount. Orkes helps you harness AI without losing control. That’s the kind of automation you can actually trust.

Wait So... Is This an AI Agent or an Agentic Workflow?

The Customer Success Agent is an AI Agent.

Why? Because it works toward a goal on its own. It makes decisions along the way and adapts as needed. The path it takes can change. You don’t know exactly what it will do or when it will finish. You only know it’s moving toward its goal.

So what's the difference between AI Agents and Agentic Workflows?

  • Agentic Workflows follow a more structured path, making one-off decisions or executing predefined tasks without adapting or re-evaluating.
  • AI Agents pursue broader goals, can reason across steps, and make multiple decisions along the way.

That said, some argue the distinction doesn’t really matter. The important thing is to build useful, reliable features, whether you call them agents, workflows, or something in between.

→ Want to learn more about the difference between them? Check out this super useful article, written by our writer Liv: Agentic AI Explained: Workflows vs Agents

Here’s the TL;DR:

  • Agentic Workflows = task-focused + single-step execution
  • AI Agents = goal-driven + multi-step decision-making

So yes, our Customer Success Agent is an AI Agent built within a workflow, blending the structure of workflows with the flexibility and intelligence of agents.

Who said you can't have it all? Not me, that's for sure. ; )

Got Questions? Want to Show Off?

If you get stuck, have questions, or just want to see how others are building their own AI agents, you're not alone.

We’ve got an entire community of builders, creators, and curious minds ready to help—and cheer you on. Already built something? We definitely want to see it.

Drop your Customer Success AI Agent build here

Whether it’s a full production setup or a scrappy weekend experiment, we’re here for it.

Final Thoughts

AI is powerful; there is no doubt about it. But power without control is a risk.

That's why the real breakthrough isn't just building AI agents at random. It's building them responsibly: with guardrails, transparency, and the occasional dose of human oversight.

What you saw in this demo is a foundation for a customer-facing AI agent that respects boundaries.

With an orchestration engine like Orkes Conductor, you don't need to gamble with autonomy. You can design workflows that are:

  • Flexible enough to use LLMs meaningfully
  • Auditable enough to build user trust and stay compliant
  • Structured enough to avoid unwanted actions

And whether you’re just exploring AI agents or already rolling them out across your organization, building your agents within a workflow offers the clarity and control you’ll need as things scale.

So go and build something bold. Just make sure it's built to be trusted.

See you in the next demo!

Orkes Conductor is an enterprise-grade orchestration platform for process automation, API and microservices orchestration, agentic workflows, and more. Check out the full set of features, or try it yourself using our free Developer Playground.
