
· 6 min read
Riza Farheen

In the past few years, have you had a single day without using an application? Most probably, your answer is ‘No’. In today’s world, we rely on mobile applications for everything: hailing a cab, booking travel tickets, paying bills, shopping online, and much more. And every online purchase goes through a checkout process.

Checkout App with Next.js and Conductor

Yes, you heard that right: you can build an app with Conductor in 10 minutes. Conductor is a platform for building distributed applications in any language of your choice; in this blog, I’ll walk you through building a checkout app with Next.js and Conductor.

What you need!

You need to ensure that the device on which the application is built meets the following requirements.

  • Node.js version >= 18
  • TCP port 3456 available
  • A Conductor server set up (see the next section)

Setting Up Conductor Server

The Conductor server can be set up locally on your device, on Orkes Cloud, or you can try Conductor in Playground, a free tool from Orkes for testing your application in real time.

In order to run your application against a server, you need to obtain access keys from your Conductor server. An access key has two parts: a Key ID and a Key Secret.

So, let’s obtain the Access Keys from the Conductor Server.

Create an application inside your Conductor server; the Key ID and Secret are generated there. Ensure that the application has the worker role, and then generate the access key. The key is shown only once, so copy it and store it securely.

Now, export your variables as below:

# set the KEY and SECRET values with the one obtained from the Conductor UI after creating an application
export KEY=
export SECRET=
# replace CONDUCTOR_SERVER with the actual hostname, the URL must end with /api
export SERVER_URL=http://CONDUCTOR_SERVER/api
# Optional checkout workflow name defaults to MyCheckout2
export CHECKOUT_WF_NAME=MyCheckout2

Once you have verified this, let’s move on to the next step in building your Next.js application.

Run the Application

Check out the application code from:

https://github.com/orkes-io/conductor-nextjs-example

First, install the dependencies and run the workflow seed script:

yarn
yarn seedWf

Now start the app in development mode.

yarn dev

To use the app in the browser, open http://localhost:3456/

On your browser, the application will look like this:

Checkout app in UI

You can choose products and add them to the cart. Once the cart is ready and the user clicks the PLACE ORDER button, the Conductor workflow begins.
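
Behind the scenes, placing the order means the app asks Conductor to start the checkout workflow. As a minimal sketch (assuming the @io-orkes/conductor-javascript SDK and a hypothetical API route, not the exact code from the example repository), a Next.js handler could start it like this:

// pages/api/checkout.ts - hypothetical route, for illustration only
import type { NextApiRequest, NextApiResponse } from "next";
import {
  orkesConductorClient,
  WorkflowExecutor,
} from "@io-orkes/conductor-javascript";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Credentials come from the environment variables exported earlier
  const client = await orkesConductorClient({
    keyId: process.env.KEY,
    keySecret: process.env.SECRET,
    serverUrl: process.env.SERVER_URL,
  });
  const executor = new WorkflowExecutor(client);

  // Start the checkout workflow with the cart contents and the available credit
  const executionId = await executor.startWorkflow({
    name: process.env.CHECKOUT_WF_NAME || "MyCheckout2",
    version: 1,
    input: {
      products: req.body.products,
      availableCredit: req.body.availableCredit,
    },
  });

  res.status(200).json({ executionId });
}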

Where is Conductor used in your Checkout application?

When you put your items in the cart and proceed to checkout, the application needs to calculate the order total, compare it with the available credit, and check whether the user has enough credit. To do this, you can make use of a workflow in Conductor.

Let’s consider a simple workflow for a checkout application. Applications are built using workflows in Conductor, and workflows are a combination of building blocks known as tasks. These tasks are orchestrated in a specified order to complete the workflow and provide durability across the flow, so even if the system goes down or there are temporary failures, the process is guaranteed to complete - without writing any extra code or logic!

Checkout process workflow in Conductor

  1. When started, we have a WAIT task that waits for 30 seconds, which allows users to cancel the order from the UI - this is useful for the demo but may not be needed in a production environment.
  2. After that, we have a check_credit task that checks if the user has sufficient balance to place the order - this is implemented as an INLINE JavaScript task since it's a quick check.
  3. Next up, we have a decision task, switch_has_credit, that takes the output of check_credit and either completes the process successfully or terminates it with an insufficient-balance error.

As you move the system into production, these three tasks can be implemented to run real production code or mocked up when running tests, all without changing your Next.js application code.

Not only that, the folks monitoring the checkout application in production (think operations or customer support) know exactly what is going on with each order.

Here is a snippet of the workflow code used:

const createCheckoutWorkflow = () =>
  workflow(`${process.env.CHECKOUT_WF_NAME || "MyCheckout2"}`, [
    waitTaskDuration("confirmation_wait", "30 seconds"),
    generateInlineTask({
      name: "check_credit",
      inputParameters: {
        products: "${workflow.input.products}",
        totalCredit: "${workflow.input.availableCredit}",
        expression: function ($) {
          return function () {
            var totalAmount = 0;
            for (var i = 0; i < $.products.length; i++) {
              totalAmount += $.products[i].price; // accumulate the price of every product in the cart
            }
            return totalAmount > $.totalCredit ? "noCredit" : "hasCredit";
          };
        },
      },
    }),
    switchTask("switch_has_credit", "${check_credit_ref.output.result}", {
      noCredit: [
        terminateTask(
          "termination_noCredit",
          "FAILED",
          "User has no credit to complete"
        ),
      ],
      hasCredit: [
        terminateTask(
          "termination_successfull",
          "COMPLETED",
          "User completed checkout successfully"
        ),
      ],
    }),
  ]);

For the complete working code, see the example repository linked above.

Visualizing your Checkout execution in Conductor

Conductor also provides a visual representation of the workflow with the paths taken. A green tick on a box indicates that the task has completed; if a task is still running, a loading icon is shown instead. This helps you quickly understand how your application behaves.

If your application gets stuck, you can visualize the execution and troubleshoot the issue by inspecting the path it took.
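
The same execution data is also available programmatically, which is handy for automated monitoring. As a rough sketch (reusing the @io-orkes/conductor-javascript SDK; the helper itself is hypothetical), you could inspect an execution from code:

import { orkesConductorClient } from "@io-orkes/conductor-javascript";

// Hypothetical helper: print the status of each task in a checkout execution
async function inspectExecution(executionId: string) {
  const client = await orkesConductorClient({
    keyId: process.env.KEY,
    keySecret: process.env.SECRET,
    serverUrl: process.env.SERVER_URL,
  });

  // Fetch the execution, including task-level details
  const execution = await client.workflowResource.getExecutionStatus(executionId, true);

  console.log("Workflow status:", execution.status);
  for (const task of execution.tasks ?? []) {
    console.log(`${task.referenceTaskName}: ${task.status}`);
  }
}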

Execution of the Checkout process

Wrapping Up

And that’s it! Your application is now ready.

Why don’t you build an app on your own and share your app development stories with us?
We are waiting to hear from you!

Our team at Orkes is always here to help if you have any queries. Do reach out to us on our Slack channel for any help. If you are an enterprise looking to leverage Conductor for app-building processes, you can reach out to us.

· 8 min read

Editor’s Note: This post was originally published in Sep 2022 in Medium.

Workflow Start Request

Netflix Conductor is a well-known platform for service orchestration that allows developers to build stateful applications in the cloud without having to worry about resiliency, fault tolerance, and scale. Conductor allows building stateful applications - automation as well as human-actor-driven flows - by composing services that are mostly stateless.

At Orkes, we regularly scale Conductor deployments to handle millions of workflows per day, and we often get questions from the community on how to scale Conductor and whether the system can handle billions of workflows.

TL;DR

We ran a benchmark to test the limits of Conductor’s scalability using a simple single-node setup on a laptop. The results were not surprising, given our experience with Conductor: with the right configuration, Conductor can be scaled to handle the demands of really large workloads.

Throughput

Throughput

Latency

Latency

Benchmarks

Throughput

We measured the peak stable throughput Conductor can achieve along the following dimensions:

  • Number of workflow start requests per second
  • Number of task updates per second - this translates to the number of state transitions happening per second
  • Number of moderately complex workflows completing per second
  • Amount of data processed per second

Latency

We measured p50, p95 and p99 latencies for the critical APIs in Conductor:

  • Workflow Start
  • Task Update

Benchmarking Tools

We used wrk2, a fantastic tool to generate stable load on the server. Wrk2 improves on wrk and adds the ability to generate sustained load at a specific rate (-R parameter).

We created a moderately complex load-testing workflow with a total of 13 steps.

For the experiment, a set of load-testing workers was created to poll for the tasks in the workflow and produce dummy output.
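
For reference, a load-testing worker of this kind can be sketched with the Conductor JavaScript SDK roughly as below; the task name and the dummy payload are illustrative assumptions, not the actual workers used in this benchmark.

import {
  orkesConductorClient,
  TaskManager,
  ConductorWorker,
} from "@io-orkes/conductor-javascript";

// A dummy worker: polls for a task and returns randomized test output
const loadTestWorker: ConductorWorker = {
  taskDefName: "load_test_task", // assumed task name
  execute: async () => ({
    outputData: { payload: Math.random().toString(36).slice(2) },
    status: "COMPLETED",
  }),
};

async function main() {
  const client = await orkesConductorClient({
    keyId: process.env.KEY,
    keySecret: process.env.SECRET,
    serverUrl: process.env.SERVER_URL,
  });
  // Poll for tasks and execute them until the process is stopped
  const manager = new TaskManager(client, [loadTestWorker]);
  manager.startPolling();
}

main();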

Monitoring

Conductor has a pluggable metrics system; we used Prometheus to capture various metrics and Grafana to visualize them.

The Setup

The experiment was designed to send a sustained workload of 200+ workflow executions / sec. The test workflow also embedded a sub-workflow with a single step inside, so during the experiment, we were starting approximately 400+ workflow/sec.

For the Conductor version, we used Orkes’ build of Conductor, which is tuned and customized to handle large workflows with the lowest latency possible and to optimize operational costs.

We have a version of this available under open source at https://github.com/orkes-io/orkes-conductor-community.

Conductor servers are horizontally scalable; for this test we ran a single-node server. Of course, your production setup should have at least 3 nodes across multiple availability zones / racks for higher availability.

Hardware

The Conductor server ran on a MacBook Pro with an M1 Max CPU and 64GB of RAM; the same machine also ran a single-node Redis server. The Postgres database ran on another MacBook Pro with an M1 Max CPU and 32GB of RAM, which also ran task workers. A Core i9 MacBook Pro with 32GB of RAM ran Prometheus, Grafana, a load-generator Docker container, and task workers.

The systems communicated over a WiFi 6 network - no wired connectivity, which could have improved latencies a bit. The machines were roughly equivalent to an AWS c7g.2xlarge instance type (memory consumption in the tests stayed below the 16GB offered by this instance type).

Peak Workflow Start Requests

This test probes the limits of how many workflows can be started in a burst. We wanted to take network latency out of the equation, so we ran wrk on the same host as the Conductor server. This gives us a theoretical max requests/sec, assuming the network is limitless (which it isn't).

Workflow start requests / sec. Under normal load, the number of start requests averages about 1.8K/sec.

Latencies under high load at ~2K requests/sec

Workflow Completion Metrics

For this experiment, we asked a question:

If all the tasks in a workflow were instantaneous, how many workflows could we start and complete per second under a sustained load, with a minimal backlog of tasks?

To test this, we used wrk2 to send a sustained load of 210 workflow execution requests per second, where each workflow contains a sub-workflow and worker tasks.

Workflow Execution Graph

Workflows getting completed per second

Task Level Metrics

Conductor publishes the depth of the pending task queue, which is useful when deciding when to scale your worker clusters up or down. Here is a snapshot of the worker queue depth. A sustained high number for a given task indicates worker resource starvation and a need to scale the worker cluster to meet the demand.

Pending queue size of tasks at a given point in time. Sustained high numbers indicate worker starvation and a need to scale out workers.

We achieved a consistent throughput of 1450+ task executions / sec. This included workers polling for the task, executing the business logic (in our case, producing randomized test output), and updating the task status on the server. Each successful task completion initiates a state transition of the workflow.

Number of worker tasks getting updated per second.

Critical to the throughput is the update of the task execution back to the Conductor server; we found this to be the API operation most directly responsible for the server's throughput. We optimized the server to limit tail latencies, ensuring p99 stays well under control.

Task update from worker latencies in milliseconds.

Task Poll

Conductor uses long-polling for task polling: the request waits until either a task is available for the worker or the timeout is reached, with the timeout set to 100 ms in this experiment. Polling also supports batching for more efficient use of the network and connections; in the test, the workers used a batch size of 10.
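
In the JavaScript SDK, for example, comparable behavior can be approximated through the TaskManager options used with the worker sketch shown earlier; pollInterval and concurrency are that SDK's worker options, and treating concurrency as a stand-in for the benchmark's batch size of 10 is an assumption for illustration.

// Reusing `client` and `loadTestWorker` from the earlier worker sketch
const manager = new TaskManager(client, [loadTestWorker], {
  options: {
    pollInterval: 100, // milliseconds between polls, mirroring the 100 ms timeout above
    concurrency: 10,   // process up to 10 tasks in parallel - a rough analogue of the batch of 10
  },
});
manager.startPolling();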

Latencies for task poll

Data Processed by Workflow

We generated fake data for the test to simulate real-world scenarios in terms of data transfer. The amount of data processed by workflows drives the requirements for provisioned IOPS (in cloud environments) and network throughput.

The experiment averaged a sustained rate of ~80 MB/sec data processed.

Amount of data (task inputs and outputs) being processed at a given point in time.

Scaling to handle billions of workflows / month

The experiment used a total of three commodity machines (roughly equivalent to c7g.2xlarge instance types on AWS). Throughout the experiment, the JVM heap size consistently remained below the 2GB mark.

The experiment created a workload of 210 moderately complex workflows per second, which, if run constantly for a month, amounts to about 540M workflows (210 × 86,400 seconds/day × 30 days ≈ 544M).

Conductor Servers

Conductor servers themselves are stateless and can be scaled out to handle larger workload demands. Each server node handles workload in proportion to its available CPU and network capacity, so adding nodes increases the total capacity.

Redis

Redis serves as the backbone for state management and for the queues the servers use to communicate. Handling a higher workload requires either scaling up Redis or using Redis Cluster to better distribute the load.

Postgres

Postgres is used for indexing workflow data. Beyond disk storage, scaling Postgres depends on two factors: 1) adequate CPU, and 2) enough IOPS (especially in cloud environments) to handle the writes under heavy workloads. Writes to Postgres are done asynchronously using durable queues (check out orkes-queues), but longer delays mean completed workflows remain in Redis longer, requiring more Redis memory.

Conclusion

We often get asked: can Conductor be scaled to handle billions of workflows per month? The answer is a resounding YES, it can. If you would like to give it a try, check out our developer playground at https://play.orkes.io/.

Orkes, founded by the founding engineers of Netflix Conductor, offers Conductor as a fully managed, hosted service in the cloud and on-prem. Check out our community edition for a fully open-source version of the Orkes stack.

If you are an enterprise with a use case you would like to run a PoC for, please reach out to us.

Don’t forget to give us a ⭐️ https://github.com/Netflix/conductor

· 3 min read
Riza Farheen

The digital payment sector witnessed a drastic surge after the pandemic hit the world in 2019. It also led to a rising rate of fraudulent transactions, forcing banking and financial institutions to invest more time and people in settling the resulting disputes.

Since manual intervention in settling disputes causes delays, you might be looking for tooling that can resolve them quickly. One way to do this is to leverage a microservice orchestration platform like Conductor, which helps you build applications that resolve customer issues fast.

Let’s look closely at how Conductor helps minimize your workload.

Building Fraud Transaction Dispute Application

Applications are built by creating workflows in Conductor that orchestrate the flow of your business logic. In this case, a fintech application would require a workflow to settle transaction disputes. A sample business flow may look like this:

Business logic of a fintech application

And you can achieve this via a Conductor workflow, as shown below.

To quickly transform your business logic into an application, you need a workflow like this, built by combining several tasks and operators.

Conductor Workflow executing your business logic

You may build this application on any platform, but let’s look at the 5 major reasons why you should choose Conductor.

Why Conductor?

1. Native Support for Retries

Conductor has built-in support for retries, which makes your application resilient. Tasks and workflows in Conductor can be configured to handle failures, timeouts, and rate limits, so even if your application fails at some point, the failed operations are retried and your business operations continue seamlessly.
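
As a rough illustration (not the actual settings of this dispute workflow; the task name and numbers are assumptions), a Conductor task definition declares this behavior declaratively:

// Hypothetical task definition showing Conductor's retry, timeout, and rate-limit knobs
const verifyDisputeTask = {
  name: "verify_dispute_details",     // assumed task name
  retryCount: 3,                      // retry a failed execution up to 3 times
  retryLogic: "EXPONENTIAL_BACKOFF",  // back off between retries
  retryDelaySeconds: 5,
  timeoutSeconds: 300,                // time out the task after 5 minutes
  timeoutPolicy: "RETRY",             // on timeout, retry instead of failing the workflow
  responseTimeoutSeconds: 60,         // re-queue if the worker stops reporting progress
  rateLimitPerFrequency: 100,         // allow at most 100 executions...
  rateLimitFrequencyInSeconds: 1,     // ...per second
};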

2. Conductor is Scalable

The applications you build will need additions over time as the transaction disputes you encounter change. Conductor helps you grow your application by adding more tasks as and when required. It simplifies feature development for your team, so new capabilities reach the application sooner.

3. Debugging your Code is Easier

For a developer, the most challenging phase is often debugging the code rather than building the application; a lot of time and effort goes into debugging thousands of lines of code. With Conductor, debugging is much easier because you only need to debug the particular block that failed, which reduces overall debugging time and increases your team's productivity.

4. Visualize your App as Workflow

You can visualize your workflows as a set of building blocks, which is far less cluttered than reading code snippets. Concepts like sub-workflows, where another workflow is called from your existing workflow, keep the workflow tidy and easier to debug.

5. Language Agnostic

And finally, it’s your app, so the language you build it in is your choice. Since Conductor is language agnostic, you can develop your application in your preferred language.

Summing Up

All the features mentioned above address your primary concern about building an application to handle fraudulent transaction disputes, but there is much more to it.

If your current approach to application development needs to be refreshed, it’s high time to implement a workflow orchestration platform like Conductor. Let’s kick start your journey towards Conductor now.

Meanwhile, try out Playground, a free tool from Orkes, to test out Conductor in real-time. You can also reach us at our Slack channel for any queries.

· 2 min read
Riza Farheen

Finally, we wrapped 2022 - a year full of happenings. This blog post is intended to recap what the Orkes team has been up to in December - the most wonderful time of the year.

We concluded the year with a few significant developer events. It’s been an incredible year, and thank you for being a part of our community. Let’s keep growing together, and we wish you all a 2023 full of exploration, discovery, and growth.

Happy New Year!

Orkes Newsletter: Dec 2022 Highlights

Read on to know more about our monthly highlights.

Product Updates

Start Workflow

Dec 1, 2022

We’re delighted to announce yet another addition to our operator tasks - Start Workflow. The Start Workflow task starts another workflow from your existing workflow without depending on its completion. Unlike the Sub Workflow task, it won’t wait for the started workflow to complete. Learn more.
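
For a sense of how it is wired into a workflow, a Start Workflow task might be configured roughly as below (a sketch; the workflow names and input mapping are assumptions, so see the Learn more link for the exact schema):

// Hypothetical START_WORKFLOW task inside a workflow definition
const startWorkflowTask = {
  name: "start_notification_workflow",
  taskReferenceName: "start_notification_workflow_ref",
  type: "START_WORKFLOW",
  inputParameters: {
    startWorkflow: {
      name: "notify_customer",                         // assumed workflow to start
      version: 1,
      input: { orderId: "${workflow.input.orderId}" }, // assumed input mapping
    },
  },
};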

Event Updates

DevFest Goa

Dec 18, 2022: Goa, India

Hosted by Google Developers Group (GDG) Goa, DevFest Goa was an opportunity for tech minds around the globe to meet and work with several open-source developer resources and products. Our Developer Relations Engineer Cherish Santoshi represented Orkes and addressed the gathering on how to build distributed applications 10x faster using Conductor.

DevFest Noida

Dec 11, 2022: Noida, India

DevFest Noida, hosted by GDG Noida, was one of the biggest tech conferences in the Noida region. With 350+ attendees, the turnout was impressive, and our Developer Relations Engineer, Cherish Santoshi, delivered a talk that provided deep insights on how Conductor can be used to build stateful and resilient applications.

Google Cloud for Startup Community

Dec 8, 2022: California, United States

We had the privilege to be a part of Google Cloud for Startup Community, where our CTO, Viren Baraiya, was invited to share his learnings on building Orkes. Viren, as a panelist, addressed a crowd of 400 early-stage founders and venture capitalists on how Conductor helps startups to scale using cloud services across any provider.

· 4 min read
Riza Farheen

2022 was an incredible year for Orkes. We witnessed remarkable growth of the community along with much higher adoption of Conductor, and our developer events were the backbone of that growth. We’re extremely grateful to all the attendees who made these events a great success.

In case you missed the DevFest events, we have compiled the key highlights for a quick catch-up.

Hosted by Google Developers Group (GDG), DevFests are global community-driven tech conferences. They are organized with the idea of building a strong developer community that stays proficient with the latest technological advancements.

With hundreds to thousands of attendees, each event is a networking platform for developers to connect locally, learn, and build with various tools.

An Event-ful year for Orkes

We began our DevFest journey for the first time at DevFest Nagpur in India on Nov 5-6, 2022. DevFest Nagpur is an initiative by GDG Nagpur primarily focused on networking, knowledge transfer and learning about technologies.

The DevFest season for November concluded with DevFest Singapore and DevFest Bali on Nov 26 & 27, respectively. GDG Singapore hosted DevFest Singapore in partnership with Women Techmakers Singapore and GDG Cloud Singapore. DevFest Bali was hosted by GDG Bali in collaboration with Google Developer Student Clubs (GDSC) Bali and Women Techmakers (WTM) Bali.

In December, we were at DevFest Noida and DevFest Goa on the 11th & 18th, respectively. DevFest Noida was hosted by GDG Noida and was one of the biggest tech conferences in Noida. DevFest Goa, hosted by GDG Goa, was an opportunity for tech enthusiasts to meet, discuss and work with several open-source developer resources and products.

And... Ta-da! We’re done with our ✅ To Do list for 2022.

To do list for Orkes in DevFests

We were thrilled by the interaction we had with the outstanding crowds at each DevFest. A massive shoutout to our Developer Relations team for this great success.

Building Distributed Applications 10x Faster

We were represented at the DevFests by Cherish Santoshi, our Developer Relations Engineer, who delivered talks on building distributed applications 10x faster using Conductor.

Orkes at DevFest Noida 2022

The talk revolved around the age of monoliths, their challenges, and how the industry moved from monoliths to microservices. Cherish also provided context on how big tech giants like Tesla, Netflix, GE, JP Morgan, etc., simplified their distributed systems architecture and built mission-critical applications 10x faster by leveraging Netflix Conductor.

What makes Conductor stand out in building applications faster are the stories we tell at every conference. Conductor has many built-in features, such as automating recurring tasks, retry logic and counts, CI/CD pipelines, and reusable error handling and services. And overall, what makes Conductor unique is that apps can be built in any language, on any cloud, and at any scale.

The talk led to significant interactions with the audience, where several questions on using Conductor in various industries and use cases were raised.

If you feel like trying out Conductor right now, here you go: you can leverage our Playground, a free tool for playing around with Conductor. Please look at our documentation for more details on Playground.

Swags, Swags and more Swags!

The show wasn’t just about the talks; we made it more exciting by giving swag to the audience. Recipients were chosen based on how interactive they were, and we delivered swag to the 50 most engaged attendees. Congratulations, guys!

Here’s what they say about us!

What people had to say about Orkes at DevFests

Our Developer Relations team was delighted by the outcome of these events.

Quote on DevFest from DR

And that’s a wrap: we officially concluded the year with DevFest Goa. We are so glad we could be a part of the DevFest events, which helped fellow developers learn about Conductor.

A big thank you to all our dear attendees. This couldn’t have been possible without you guys. Let’s meet and greet at DevFests in 2023.

· 5 min read
Riza Farheen

Many exciting things are happening as we work closely with our customers and the broader developer community. This blog post is intended to recap what the team at Orkes has been up to during the past few weeks. We will be covering two broad categories - product updates and community engagement. Do let us know your thoughts and feedback by reaching out to us on our Slack channel or by setting up a meeting with us.

We kick-started Q4 2022 with a set of great events, including but not limited to KubeCon + CloudNativeCon22, IBC22, and Orkes x MongoDB meetup. In case you couldn’t make it to these events, we’ve got you covered - more events are on the way!

Now let’s go through the product and events updates from Orkes.

Product Updates

Webhook

Nov 20, 2022

Different systems can now be integrated with Conductor using webhooks. You can seamlessly send real-time updates from third-party systems, such as Slack, Twitter, etc., to Conductor via a webhook. Webhooks can be used to listen for a particular type of event, and you can also start a new workflow when an event occurs. Learn more.

Metadata Migration

Nov 17, 2022

We’re pleased to announce metadata migration, which facilitates moving workflow and task definitions between environments. This is useful when workflows and tasks need to be tested before being deployed to production; once testing is finished, you can easily promote the definitions from development to testing to production.

HTTP Poll Task

Nov 10, 2022

The HTTP Poll task is a powerful mechanism for checking whether certain events have occurred in an external system. It is a smart polling mechanism that invokes an HTTP API, parses the response, and evaluates a condition (until the specified condition matches) to decide whether further polling is required. Learn more.
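
A rough sketch of what such a task's configuration might look like is below; the endpoint, condition, and intervals are assumptions for illustration, so see the Learn more link for the exact parameters.

// Hypothetical HTTP_POLL task configuration
const pollShipmentStatusTask = {
  name: "poll_shipment_status",
  taskReferenceName: "poll_shipment_status_ref",
  type: "HTTP_POLL",
  inputParameters: {
    http_request: {
      uri: "https://example.com/api/shipments/${workflow.input.shipmentId}", // assumed endpoint
      method: "GET",
      // Keep polling until the response reports the shipment as delivered
      terminationCondition: "$.output.response.body.status == 'DELIVERED';",
      pollingInterval: 60,      // seconds between polls (assumed)
      pollingStrategy: "FIXED", // could also back off linearly or exponentially
    },
  },
};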

GraalJS for Inline Task

Oct 24, 2022

Ever since the launch of the Inline task, we have been receiving frequent requests to enable the GraalJS evaluator type. You can now use the GraalJS evaluator type while configuring Inline tasks in your workflows. It can be used to evaluate JavaScript expressions using GraalJS. Learn more.
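
Concretely, selecting the evaluator on an Inline task might look like the snippet below (a sketch; the expression itself is an assumption):

// Hypothetical INLINE task using the GraalJS evaluator
const computeDiscountTask = {
  name: "compute_discount",
  taskReferenceName: "compute_discount_ref",
  type: "INLINE",
  inputParameters: {
    evaluatorType: "graaljs",
    total: "${workflow.input.total}",
    // Evaluated by GraalJS on the server; inputs are available on $
    expression: "(function () { return $.total > 100 ? 0.1 : 0; })();",
  },
};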

Event Updates

DevFest Bali 2022

Nov 27, 2022: Bali, Indonesia

Continuing the spirit of the DevFest season, the team at Orkes was represented by our Developer Relations Engineer, Cherish Santoshi, at DevFest Bali 2022, where he delivered a talk on App Modernization via Orchestration.

DevFest Singapore 2022

Nov 26, 2022: Singapore

DevFest Singapore 2022, hosted by GDG (Google Developers Group) Singapore, was a platform for developers around the world to connect and network. Cherish Santoshi, our Developer Relations Engineer, addressed the gathering on how developers can build applications across various cloud environments, microservices, and languages.

DeveloperWeek Enterprise 2022

Nov 16-17, 2022: Virtual

At the two-day virtual DeveloperWeek Enterprise 2022 conference, with 3000+ attendees, we presented the opening talk, ‘Building a SaaS platform using Orchestration’, delivered by Orkes CTO Boney Sekh.

DevFest Nagpur 2022

Nov 5-6, 2022: Nagpur, India

Hosted by Google Developer Groups (GDG), DevFest is one of the largest developer conferences. Our Developer Relations Engineer, Cherish Santoshi, represented Orkes by delivering a talk on how big tech giants like Netflix & Tesla use orchestration to build resilient & scalable applications.

API World 2022

Oct 25 - Nov 3, 2022

API World 2022 was both a virtual and an in-person event. Our CEO, Jeu George, delivered a workshop on Building an API Orchestrator at the in-person event in San Jose, CA, and our Developer Relations Engineer, Cherish Santoshi, delivered a virtual talk on Building Resilient Applications Using Orchestration.

KubeCon and CloudNativeCon 2022

Oct 24-28, 2022: Detroit, Michigan, North America

We marked our presence at KubeCon + CloudNativeCon22 with our CTOs, Viren Baraiya and Boney Sekh. We had highly productive conversations with engineers, product managers, executives, and various others from organizations across the globe on how to solve their business problems using Conductor workflows.

Orkes x MongoDB Meetup

Oct 15, 2022: Hyderabad, India

Another exciting event where we connected deeply with developers was the Orkes x MongoDB meetup in Hyderabad, India. This event, co-hosted by Orkes & MongoDB, brought together the developer community from across India, who gained insights into the Orkes and Conductor platform along with some great integration points between Orkes and MongoDB.

IBC2022

Sep 9-12, 2022: Amsterdam, Netherlands

With 37,000+ attendees from 170 countries, IBC2022 was a great opportunity for the team at Orkes to meet with customers and industry experts in the media space. Our CEO, Jeu George, and CPO, Dilip Lukose, attended the event to showcase how Conductor can be custom-tailored to meet various organizations' workflow needs.

DeveloperWeek Cloud 2022

Sep 7-14, 2022: Austin, United States

One of the earliest events the Orkes team attended, back in the summer, was DeveloperWeek Cloud 2022. Our CTO, Viren Baraiya, delivered an in-depth talk about orchestrating workflows using Conductor.

· 4 min read
Riza Farheen

Ever since the launch of Orkes, our developer community has been spectacular. We were thrilled to witness the steady growth and keen engagement of developers interested in Conductor over this short span of time. Of course, this called for a celebration with our community, so we proudly organized our first-ever hackathon, The Orkes Hack, from March 2022 to July 2022.

With 5000+ developer registrations from all over the world and their outstanding contributions, the response was incredible. We’ve lined up some highlights from the Orkes Hack for you!

Orkes Hackathon

About Orkes Hack

The Orkes Hack was a virtual event that commenced on March 28, 2022, and continued till July 24, 2022. We had 5145 attendees, with team sizes ranging between one and four. The hackathon utilized Orkes Playground, a free, fully managed browser-based sandbox environment of Conductor.

Themes for Orkes Hack

The event's themes fell into four categories, each with minimum criteria for submission.

  • Open Innovation

The developers were free to build any innovative workflow/app that showcases the use of Conductor in solving a real-world problem.

  • From Devs, for Devs

This was intended for collaboration and networked learning. The developers were free to create tools that automate their day-to-day tasks, such as a JavaScript beautifier, auto-upload to S3 buckets, S3 threat scanning, and much more.

  • Riding the NFT wave

This theme revolved around building workflows for minting, generating different file types, uploading, listing, and other marketplace operations involving NFTs (Non-Fungible Tokens), cryptographic tokens that exist on blockchains and cannot be replicated.

  • Most creative workflows

Developers had to create workflows wiring up various tasks and operators to express the business logic of an application.

While the excitement of our hacker community was primarily centered on building cool applications and having fun while at it, there was also a $3500 prize pool that made it even more exciting!

The great submissions from the community during the hackathon included workflows that solve problems in compliance and security, e-agreements, healthcare, file management, NFTs, and more. The most exciting part is that these developers rapidly learned to build on Netflix Conductor from scratch.

A hackathon is where ideas transform into reality and disruptive solutions are brainstormed, and the Orkes Hack witnessed intense competition that brought forward brilliant ideas built on Netflix Conductor. Finally, after 100+ days of hustle and bustle, our judging panel, which included the founding team of Orkes (also the original team behind the open-source Netflix Conductor), picked the top 3 hacks with the most remarkable contributions.

Winners from the Orkes Hack hackathon

The first prize was bagged by the team ComplianceForce for GDPR and PII Compliance Workflow under the theme Most creative workflows. The first runner-up was the team Madanhitansh239 for NFT Marketplace Workflow under the theme Riding the NFT wave. And the second runner-up was the team Mrdevops for PDF Operations under the theme Open Innovation. Congratulations to all our winners and to all the participants who worked hard and contributed so much to the Conductor platform.

You can be a hacker too and easily build applications on Conductor! Sign up for Orkes Playground and start building your workflows now - no setup and no payment needed!

Do reach out to us on our Slack, Discord, and GitHub communities for any queries. We are always happy to assist you with questions or advice related to using Netflix Conductor or application development in general.

We had a great time hosting the Orkes Hack and we would like to extend our gratitude to all the attendees for making this event a great success. We will be back soon with more events.

Stay tuned!!!

· 2 min read

Our customers run some of their core and mission-critical workloads on Orkes. Trust from these customers is an earned privilege and it is Orkes’ top priority that we maintain and build upon this fundamental value. One key pillar of trust is operating with high levels of security across multiple dimensions, such as hardened infrastructure, secure software development & operational processes and in-depth auditing by third parties against industry-leading benchmarks.

At Orkes, it is our core tenet to have world-class security embedded in everything we do so that our customers’ workloads and their data are safe when using Orkes products and services. That is why we embarked on attaining multiple levels of SOC 2 compliance - the most fundamental security certification that our customers value. And I’m thrilled to announce that, after a thorough audit by an independent auditor spanning an observation period of multiple months, we are now SOC 2 Type 2 compliant!

Orkes is SOC 2 Type 2 Compliant

How does SOC 2 Type 2 compliance help customers?

Service Organization Control (SOC), developed by the American Institute of Certified Public Accountants (AICPA), defines how a company handles sensitive data. SOC 2 is designed for service providers who store customer data in the cloud. This rigorous, independent assessment of our internal security controls verifies our adherence to the highest standards across all aspects of our technology and business.

If you are an Orkes customer or considering being one, rest assured that your workloads and data will be in a cloud ecosystem that is fully compliant with SOC 2 Type 2 standards. This also means that if you require SOC 2 Type 2 compliance for yourselves, using Orkes Cloud for your microservices and workflow orchestration use cases ensures you will be able to achieve and maintain that compliance.

Ongoing Commitment

The team at Orkes has put a lot of hard work into this effort and we will continue to do more of that and work towards future compliances that are important for our customers. If you are interested in learning more about how security is fundamental to Orkes and is put into practice across the company or to learn more about Orkes Conductor delivered as a cloud service in this secure environment, please reach out to us for a meeting and demo or join our community Slack space!

· 3 min read
Riza Farheen

Previously, sending real-time updates from third-party services to Conductor was back-breaking work. After tons of scrums and brainstorming sessions, we are finally launching Webhook integration for Conductor.

With the latest version of Conductor, you can now seamlessly integrate Conductor with other third-party services, such as Stripe, Zendesk, Slack, Twitter, and much more, using Webhooks.

Integrating Webhook with Conductor

Now, what is a Webhook?

A webhook is an HTTP-based callback that connects Conductor with other third-party systems. It provides a way for other applications to push data to Conductor.

And what does this integration do?

You can leverage webhooks to create integration patterns for Conductor workflows. They can be used to create workflows that act on events occurring outside Conductor. In addition, we’ve added an option to trigger other workflows based on the events received through a webhook; you can enable this while creating the webhook so that when the webhook event arrives, that workflow is triggered automatically, helping you streamline even more processes.

While creating your workflows, use the task type ‘WAIT_FOR_WEBHOOK’ to have a workflow wait for a webhook event.

We currently support Webhook integration for GitHub, Slack, Twilio, Stripe, Pagerduty, Zendesk, Twitter, Facebook, and Sendgrid. Apart from that, we’ve included an option called Custom that allows you to integrate Conductor with any third-party systems.
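
To give a feel for the shape of such a task, here is a rough sketch of a WAIT_FOR_WEBHOOK task; the matches filter and the GitHub event fields are illustrative assumptions, so see the documentation linked at the end of this post for the exact syntax.

// Hypothetical WAIT_FOR_WEBHOOK task inside a workflow definition
const waitForGithubIssueTask = {
  name: "wait_for_github_issue",
  taskReferenceName: "wait_for_github_issue_ref",
  type: "WAIT_FOR_WEBHOOK",
  inputParameters: {
    // The task completes only when an incoming webhook payload matches this filter
    matches: {
      "$['action']": "opened",                             // assumed GitHub event field
      "$['repository']['name']": "${workflow.input.repo}", // assumed input mapping
    },
  },
};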

Here’s how you can configure this in Conductor!

  1. Create the workflow to receive the Webhook event.

Creating workflow to receive events from Webhook

  2. Create the Webhook and verify the Webhook URL.

Create Webhook in Conductor

  3. Run the Workflow.

Run workflow in Conductor

  4. Complete the requested action from the external system, and the workflow gets completed successfully.

A completed workflow after receiving events from Webhook

For example, suppose the external system is GitHub, and the requested action is replying with the comment "We’ll get back to you soon!" on every issue creation. In that case, once an issue is created in your GitHub repository, the workflow gets completed successfully.

Want to know the configuration steps in detail? Have a look at our documentation on Integrating Conductor with other systems using Webhook.

I can’t wait to see what you build!

Do try out our new add-on, and you can always reach us at our Slack channel for any queries! We’re always happy to help.

Cheers!

Riza Farheen
Senior Technical Writer
Orkes Inc

· 20 min read
Riza Farheen
James Stuart

What do you do when you’re hungry and there is no way to cook? You rely on a food delivery application, of course. Have you ever wondered how this delivery process works? Let me walk you through how Conductor helps orchestrate the delivery process.

In this article, you will learn how to build a delivery workflow using Conductor, an open-source microservice and workflow orchestration framework. Conductor handles the process as a workflow that divides the delivery process into individual blocks. Let’s see this in action now!

Delivery Workflow

Consider that we get a request in the delivery app to send a package from an origin to a destination. The application has the details of both the registered clients and riders. It should match the best-fitting rider to deliver the package. So, the application gets the list of registered riders, picks the nearest ones, and lets them compete to win the ride.

Looks simple? Yes! This is where Conductor comes into play: you can build your delivery application by connecting small blocks together.

What you need!

  • A list of registered riders.
  • A way to let our riders know they have a possible delivery.
  • A method for our riders to compete or be the first to select the ride.

Building the application

Let’s begin to bake our delivery app. First, we need API calls for things like getting the riders list and notifying riders. We will make use of DummyJSON, which provides fake APIs.

So, in this case, we will use the user API for pulling our registered riders, and for notifying the rider about a possible ride, we will use the posts API.

Since we are creating this workflow as code instead of using the workflow diagram, let's start with a test and build our workflow app from scratch. For demonstration purposes, we will be using Orkes Playground, a free Conductor platform; however, the process is the same for Netflix Conductor.

Workflow as Code

Project Setup

First, you need to set up a project:

  1. Create an npm project with npm init and install the SDK with npm i @io-orkes/conductor-javascript.
  2. You'll need to add Jest and TypeScript support. For this, copy and paste the jest.config.js and tsconfig.json files into the root folder of your project. Then add the following scripts and devDependencies to your package.json:
"scripts": {
"test": "jest"
},
"devDependencies": {
"@tsconfig/node16": "^1.0.2",
"@types/jest": "^29.0.3",
"@types/node": "^17.0.30",
"@types/node-fetch": "^2.6.1",
"@typescript-eslint/eslint-plugin": "^5.23.0",
"@typescript-eslint/parser": "^5.23.0",
"eslint": "^6.1.0",
"jest": "^28.1.0",
"ts-jest": "^28.0.1",
"ts-node": "^10.7.0",
"typescript": "^4.6.4"
},
  3. Run yarn to fetch them.

So, now you’ve created your project. Since we are creating the workflow as code, next let's create two files: mydelivery.ts and mydelivery.test.ts. By writing our code along with the tests, you get instant feedback and know exactly what happens at every step.

Creating Our Workflow

Let’s begin creating our workflow. Initially, we need to calculate the distance between the two points, i.e., the rider and the package to be delivered. We leverage this distance to calculate the shipment cost too. So let's create a workflow that can be reused in both situations.

Let the first workflow be calculate_distance, which outputs the result of some function. In our mydelivery.ts, let's add the following code:

import {
  generate,
  TaskType,
  OrkesApiConfig,
} from "@io-orkes/conductor-javascript";

export const playConfig: Partial<OrkesApiConfig> = {
  keyId: "your_key_id",
  keySecret: "your_key_secret",
  serverUrl: "https://play.orkes.io/api",
};

export const calculateDistanceWF = generate({
  name: "calculate_distance",
  inputParameters: ["origin", "destination"],
  tasks: [
    {
      type: TaskType.INLINE,
      name: "calculate_distance",
      inputParameters: {
        expression: "12",
      },
    },
  ],
  outputParameters: {
    distance: "${calculate_distance_ref.output.result}",
    identity: "${workflow.input.identity}", // Some identifier for the call will make sense later on
  },
});

Now in our test file, create a test that generates the workflow so we can look at it later on the Playground.

import {
  orkesConductorClient,
  WorkflowExecutor,
} from "@io-orkes/conductor-javascript";
import { calculateDistanceWF, playConfig } from "./mydelivery";

describe("My Delivery Test", () => {
  const clientPromise = orkesConductorClient(playConfig);
  describe("Calculate distance workflow", () => {
    test("Creates a workflow", async () => {
      // const client = new ConductorClient(); // If you are using Netflix conductor
      const client = await clientPromise;
      const workflowExecutor = new WorkflowExecutor(client);
      await expect(
        workflowExecutor.registerWorkflow(true, calculateDistanceWF)
      ).resolves.not.toThrowError();
      console.log(JSON.stringify(calculateDistanceWF, null, 2));
    });
  });
});

Now, run npm test.

We have just created our first workflow, which basically prints the output of its task. If you look at the generated JSON, you'll notice some additional attributes apart from the ones we’ve given as inputs. That's because the generate function fills in default values, which you can overwrite later. You'll also notice that the output parameter references "${calculate_distance_ref.output.result}" using the generated task reference name. If you don't specify a taskReferenceName, one is generated by adding _ref to the specified name; to reference a task or its output, we always use the taskReferenceName. Another thing to notice is the true value passed as the first argument of the registerWorkflow function. This flag specifies that the workflow will be overwritten, which is required since we will run our tests repeatedly.

Let's create a test to actually run the workflow now. You can pass the origin and destination parameters already declared in the workflow definition (its input parameters). We are not using them yet, but they become relevant in the later steps.

test("Should calculate distance", async () => {
// Pick two random points
const origin = {
latitude: -34.4810097,
longitude: -58.4972602,
};

const destination = {
latitude: -34.4810097,
longitude: -58.491168,
};
// const client = new ConductorClient(); // If you are using Netflix conductor
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);
// Run the workflow passing an origin and a destination
const executionId = await workflowExecutor.startWorkflow({
name: calculateDistanceWF.name,
version: 1,
input: {
origin,
destination,
},
});
const workflowStatus = await workflowExecutor.getWorkflow(executionId, true);

expect(workflowStatus?.status).toEqual("COMPLETED");
// For now we expect the workflow output to be our hardcoded value
expect(workflowStatus?.output?.distance).toBe(12);
});

Now, run yarn test, and great, we have our first workflow execution run!

Calculating Actual Distance

Next, we need to calculate the actual or approximate distance between the two points. To get the distance between two points on a sphere, we could use the Haversine formula, but since we don't want a straight-line distance (because our riders can't fly :P), we will implement something like Taxicab geometry.

Calculating distance using an INLINE Task

An INLINE task is useful when the logic is simple enough to run inline; it takes input parameters and an expression. If we go back to our calculate_distance workflow, it currently takes no context and returns a hard-coded value. Now, let’s modify our inline task to take the origin and destination and calculate the approximate distance.

export const calculateDistanceWF = generate({
  name: "calculate_distance",
  inputParameters: ["origin", "destination"],
  tasks: [
    {
      name: "calculate_distance",
      type: TaskType.INLINE,
      inputParameters: {
        fromLatitude: "${workflow.input.from.latitude}",
        fromLongitude: "${workflow.input.from.longitude}",
        toLatitude: "${workflow.input.to.latitude}",
        toLongitude: "${workflow.input.to.longitude}",
        expression: function ($: any) {
          return function () {
            /**
             * Converts from degrees to Radians
             */
            function degreesToRadians(degrees: any) {
              return (degrees * Math.PI) / 180;
            }
            /**
             * Returns total latitude/longitude distance
             */
            function harvisineManhatam(elem: any) {
              var EARTH_RADIUS = 6371;
              var a = Math.pow(Math.sin(elem / 2), 2); // sin^2(delta/2)
              var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); // 2 * atan2(sqrt(a), sqrt(1-a))
              return EARTH_RADIUS * c;
            }

            var deltaLatitude = Math.abs(
              degreesToRadians($.fromLatitude) - degreesToRadians($.toLatitude)
            );
            var deltaLongitude = Math.abs(
              degreesToRadians($.fromLongitude) -
                degreesToRadians($.toLongitude)
            );

            var latitudeDistance = harvisineManhatam(deltaLatitude);
            var longitudeDistance = harvisineManhatam(deltaLongitude);

            return Math.abs(latitudeDistance) + Math.abs(longitudeDistance);
          };
        },
      },
    },
  ],
  outputParameters: {
    distance: "${calculate_distance_ref.output.result}",
  },
});

If we run the test, it will fail because the result is no longer 12: previously, the expression in the calculate_distance workflow was hard-coded to 12.

But in accordance with Red-Green-Refactor, we can verify the Taxicab-style calculation by picking two known points. Let's make the origin and destination the same point, so the expected result is 0, and fix the expectation in the test accordingly (using .toEqual(0)).
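
For example, the distance assertion in the earlier test could be adjusted roughly like this (a sketch; variable names follow the earlier test):

// With origin === destination, the computed distance should be 0
expect(workflowStatus?.output?.distance).toEqual(0);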

Note: a key takeaway from the above case is that the expression is written as ES5 JavaScript in the editor rather than as a string. However, you can't close over the rest of the file’s code, and the returned function has to be written in ES5; otherwise, the tests will fail.

Running the test now registers a new workflow overwriting the old one.

Finding Best Rider

Now we have the calculate_distance workflow. We can think of it as a function that can later be invoked from a different project or file.

Now let's create workflow number two, findNearByRiders, which will hit a microservice that pulls the registered riders list.

Hitting Microservice

We can use the HTTP task to hit something as simple as an HTTP microservice. The HTTP task takes some input parameters and hits an endpoint with our configuration, much like cURL or Postman. We will be using DummyJSON, which returns a list of users with an address; for our purposes, consider this address the rider's last reported location.

export const nearByRiders = generate({
  name: "findNearByRiders",
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${get_users_ref.output.response.body.users}",
  },
});

Our findNearByRiders workflow hits an endpoint and returns the list of all available riders.

Let's write the test.

describe("NearbyRiders", () => {
// As before, we create the workflow.
test("Creates a workflow", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

await expect(
workflowExecutor.registerWorkflow(true, nearByRiders)
).resolves.not.toThrowError();
});

test("Should return all users with latest reported address", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);
const executionId = await workflowExecutor.startWorkflow({
name: nearByRiders.name,
input: {
place: {
latitude: -34.4810097,
longitude: -58.4972602,
},
},
version: 1,
});
//Let's wait for the response...
await new Promise((r) => setTimeout(() => r(true), 2000));
const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);
expect(workflowStatus.status).toBe("COMPLETED");
expect(workflowStatus?.output?.possibleRiders.length).toBeGreaterThan(0);
console.log("Riders", JSON.stringify(workflowStatus?.output, null, 2));
});
});

If we run our test, it should pass since the number of users is around 30. Looking at the printed output, you can see that the whole structure is being returned by the endpoint.

Our workflow is incomplete because it only returns the list of every possible rider, but we need the distance between each rider and the package. For this, we must run our previous calculate_distance workflow for every rider on the fetched list. Let’s prepare the data to be passed to the next workflow; here, we utilize the JQ Transform task, which runs a JQ query over the JSON data.

JSON_JQ_TRANSFORM Task

Let's add the JQ task.

export const nearByRiders = generate({
  name: "findNearByRiders",
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "summarize",
      inputParameters: {
        users: "${get_users_ref.output.response.body.users}",
        queryExpression:
          ".users | map({identity:{id,email}, to:{latitude:.address.coordinates.lat, longitude:.address.coordinates.lng}} + {from:{latitude:${workflow.input.place.latitude},longitude:${workflow.input.place.longitude}}})",
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${get_users_ref.output.response.body.users}",
  },
});

In this task definition, the JQ query maps over the users output of the HTTP task and extracts each address. The expected result has the structure {identity:{id,email}, to:{latitude,longitude}, from:{latitude,longitude}}.

Dot Map method

At this point, we have an array with all possible riders and a workflow to calculate the distance between two points. We must combine these to calculate the distance between the package and each rider so that nearby riders can be chosen. In JavaScript, when transforming every item in an array we usually reach for the map method, which takes a function to apply to every item in the array.

Since we need to map each rider through a "function" (the distance calculation), let's create a dot-map workflow. This workflow takes the array of riders and the workflow ID of calculate_distance as input parameters, and runs that workflow on each rider.

Note that this new workflow will work for every array and workflow ID provided and is not limited to the riders and the calculate_distance workflow.

describe("Mapper Test", () => {
test("Creates a workflow", async () => {
const client = await clientPromise;
await expect(
client.metadataResource.create(workflowDotMap, true)
).resolves.not.toThrowError();
});

test("Gets existing workflow", async () => {
const client = await clientPromise;
const wf = await client.metadataResource.get(workflowDotMap.name);
expect(wf.name).toEqual(workflowDotMap.name);
expect(wf.version).toEqual(workflowDotMap.version);
});

test("Can map over an array using a workflow", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

const from = {
latitude: -34.4810097,
longitude: -58.4972602,
};
const to = {
latitude: -34.494858,
longitude: -58.491168,
};

const executionId = await workflowExecutor.startWorkflow({
name: workflowDotMap.name,
version: 1,
input: {
inputArray: [{ from, to, identity: "js@js.com" }],
mapperWorkflowId: "calculate_distance",
},
});

await new Promise((r) => setTimeout(() => r(true), 1300));

const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);
expect(workflowStatus?.status).toBe("COMPLETED");
expect(workflowStatus?.output?.outputArray).toEqual(
expect.arrayContaining([
expect.objectContaining({
distance: 2.2172824347556963,
}),
])
);
});
});

Workflow

export const workflowDotMap = generate({
  name: "workflowDotMap",
  inputParameters: ["inputArray", "mapperWorkflowId"],
  tasks: [
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "count",
      taskReferenceName: "count_ref",
      inputParameters: {
        input: "${workflow.input.inputArray}",
        queryExpression: ".[] | length",
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "dyn_task_builder",
      taskReferenceName: "dyn_task_builder_ref",
      inputParameters: {
        input: {},
        queryExpression:
          'reduce range(0,${count_ref.output.result}) as $f (.; .dynamicTasks[$f].subWorkflowParam.name = "${workflow.input.mapperWorkflowId}" | .dynamicTasks[$f].taskReferenceName = "mapperWorkflow_wf_ref_\\($f)" | .dynamicTasks[$f].type = "SUB_WORKFLOW")',
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "dyn_input_params_builder",
      taskReferenceName: "dyn_input_params_builder_ref",
      inputParameters: {
        taskList: "${dyn_task_builder_ref.output.result}",
        input: "${workflow.input.inputArray}",
        queryExpression:
          'reduce range(0,${count_ref.output.result}) as $f (.; .dynamicTasksInput."mapperWorkflow_wf_ref_\\($f)" = .input[$f])',
      },
    },
    {
      type: TaskType.FORK_JOIN_DYNAMIC,
      inputParameters: {
        dynamicTasks: "${dyn_task_builder_ref.output.result.dynamicTasks}",
        dynamicTasksInput:
          "${dyn_input_params_builder_ref.output.result.dynamicTasksInput}",
      },
    },
    {
      type: TaskType.JOIN,
      name: "join",
      taskReferenceName: "join_ref",
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "to_array",
      inputParameters: {
        objValues: "${join_ref.output}",
        queryExpression: ".objValues | to_entries | map(.value)",
      },
    },
  ],
  outputParameters: {
    outputArray: "${to_array_ref.output.result}",
  },
});

  • In the workflow above, the "count" task gives us the length of the input array.
  • At "dyn_task_builder", we create a SUB_WORKFLOW task template for every item within the array.
  • At "dyn_input_params_builder", we prepare the parameters to pass to each SUB_WORKFLOW.
  • Using FORK_JOIN_DYNAMIC, we create each task from the previously built template and pass it the corresponding parameters. After the join operation, a JSON_JQ_TRANSFORM task extracts the results and returns them as an array (see the sketch below).
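
To make the jq expressions above more concrete, here is roughly what the two builder tasks produce for a two-item input array with mapperWorkflowId set to "calculate_distance". This is only an illustrative sketch of the shapes; the actual values are computed by the jq expressions at runtime.

// Illustrative only: approximate outputs of dyn_task_builder_ref and
// dyn_input_params_builder_ref for a two-item inputArray.
const dynamicTasks = [
  {
    subWorkflowParam: { name: "calculate_distance" },
    taskReferenceName: "mapperWorkflow_wf_ref_0",
    type: "SUB_WORKFLOW",
  },
  {
    subWorkflowParam: { name: "calculate_distance" },
    taskReferenceName: "mapperWorkflow_wf_ref_1",
    type: "SUB_WORKFLOW",
  },
];

const dynamicTasksInput = {
  // Each entry feeds the sub-workflow with the matching reference name.
  mapperWorkflow_wf_ref_0: {
    from: { latitude: -34.4810097, longitude: -58.4972602 },
    to: { latitude: -34.494858, longitude: -58.491168 },
    identity: "rider-0@example.com", // placeholder identity
  },
  mapperWorkflow_wf_ref_1: {
    from: { latitude: -34.4810097, longitude: -58.4972602 },
    to: { latitude: -34.51, longitude: -58.5 },
    identity: "rider-1@example.com", // placeholder identity
  },
};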

Calculating distance between package and riders

Given that we now have the origin and destination points, let's modify the NearbyRiders workflow so that, using the riders' last reported locations, it returns the distance between each rider and the package. To achieve this, we pull the riders from the microservice, calculate each rider's distance to the package, and sort the riders by that distance.

describe("NearbyRiders", () => {
// As before, we create the workflow.
test("Creates a workflow", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

await expect(
workflowExecutor.registerWorkflow(true, nearByRiders)
).resolves.not.toThrowError();
});

// First, let's test that the API responds to all the users.
test("Should return all users with latest reported address", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);
const executionId = await workflowExecutor.startWorkflow({
name: nearByRiders.name,
input: {
place: {
latitude: -34.4810097,
longitude: -58.4972602,
},
},
version: 1,
});
// Let’s wait for the response...
await new Promise((r) => setTimeout(() => r(true), 2000));
const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);
expect(workflowStatus.status).toBe("COMPLETED");
expect(workflowStatus?.output?.possibleRiders.length).toBeGreaterThan(0);
});

// So now we need to specify input parameters, else we won't know the distance to the package
test("User object should contain distance to package", async () => {
const client = await clientPromise;

const workflowExecutor = new WorkflowExecutor(client);

const executionId = await workflowExecutor.startWorkflow({
name: nearByRiders.name,
input: {
place: {
latitude: -34.4810097,
longitude: -58.4972602,
},
},
version: 1,
});
// Let’s wait for the response...
await new Promise((r) => setTimeout(() => r(true), 2000));

const nearbyRidersWfResult =
await client.workflowResource.getExecutionStatus(executionId, true);

expect(nearbyRidersWfResult.status).toBe("COMPLETED");
nearbyRidersWfResult.output?.possibleRiders.forEach((re: any) => {
expect(re).toHaveProperty("distance");
expect(re).toHaveProperty("rider");
});
});
});

Workflow

export const nearByRiders = generate({
name: "findNearByRiders",
inputParameters: ["place"],
tasks: [
{
type: TaskType.HTTP,
name: "get_users",
taskReferenceName: "get_users_ref",
inputParameters: {
http_request: {
uri: "http://dummyjson.com/users",
method: "GET",
},
},
},
{
type: TaskType.JSON_JQ_TRANSFORM,
name: "summarize",
inputParameters: {
users: "${get_users_ref.output.response.body.users}",
queryExpression:
".users | map({identity:{id,email}, to:{latitude:.address.coordinates.lat, longitude:.address.coordinates.lng}} + {from:{latitude:${workflow.input.place.latitude},longitude:${workflow.input.place.latitude}}})",
},
},
{
type: TaskType.SUB_WORKFLOW,
name: "distance_to_riders",
subWorkflowParam: {
name: "workflowDotMap",
version: 1,
},
inputParameters: {
inputArray: "${summarize_ref.output.result}",
mapperWorkflowId: "calculate_distance",
},
},
{
type: TaskType.JSON_JQ_TRANSFORM,
name: "riders_picker",
taskReferenceName: "riders_picker_ref",
inputParameters: {
ridersWithDistance: "${distance_to_riders_ref.output.outputArray}",
queryExpression:
".ridersWithDistance | map( {distance:.distance, rider:.identity}) | sort_by(.distance) ",
},
},
],
outputParameters: {
possibleRiders: "${riders_picker_ref.output.result}",
},
});

This will give us a list of riders with their distance to the package, sorted by distance from the package.
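
In practice, possibleRiders is an array shaped like the one below. The values here are illustrative (they come from the dummyjson.com users endpoint), but the structure is exactly what riders_picker produces:

// Illustrative shape of the nearByRiders output, sorted by ascending distance.
const possibleRiders = [
  { distance: 12441.284548668005, rider: { id: 15, email: "kminchelle@qq.com" } },
  { distance: 16211.662539905119, rider: { id: 8, email: "ggude7@chron.com" } },
  // ...and so on, nearest rider first
];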

Picking a Rider

Now we have all the required data: the package origin/destination, the riders, and their distance from the package.

Next, we’ll pre-select N riders, notify them of the possible ride, and make sure one of them picks it up. For this last part, we will create a worker that randomly selects one of the notified riders.

export const createRiderRaceDefintion = (client: ConductorClient) =>
client.metadataResource.registerTaskDef([
{
name: "rider_race",
description: "Rider race",
retryCount: 3,
timeoutSeconds: 3600,
timeoutPolicy: "TIME_OUT_WF",
retryLogic: "FIXED",
retryDelaySeconds: 60,
responseTimeoutSeconds: 600,
rateLimitPerFrequency: 0,
rateLimitFrequencyInSeconds: 1,
ownerEmail: "youremail@example.com",
pollTimeoutSeconds: 3600,
},
]);

export const pickRider = generate({
name: "pickRider",
inputParameters: ["targetRiders", "maxCompetingRiders"],
tasks: [
{
name: "do_while",
taskReferenceName: "do_while_ref",
type: TaskType.DO_WHILE,
inputParameters: {
amountOfCompetingRiders: "${workflow.input.maxCompetingRiders}",
riders: "${workflow.input.targetRiders}",
},
loopCondition: "$.do_while_ref['iteration'] < $.amountOfCompetingRiders",
loopOver: [
{
taskReferenceName: "assigner_ref",
type: TaskType.INLINE,
inputParameters: {
riders: "${workflow.input.targetRiders}",
currentIteration: "${do_while_ref.output.iteration}",
expression: ($: {
riders: {
distance: number;
rider: { id: number; email: string };
}[];
currentIteration: number;
}) =>
function () {
var currentRider = $.riders[$.currentIteration - 1];
return {
distance: currentRider.distance,
riderId: currentRider.rider.id,
riderEmail: currentRider.rider.email,
};
},
},
},
{
type: TaskType.HTTP,
name: "notify_riders_of_ride",
taskReferenceName: "notify_riders_of_ride",
inputParameters: {
http_request: {
uri: "http://dummyjson.com/posts/add",
method: "POST",
body: {
title:
"Are you available to take a ride of a distance of ${assigner_ref.output.result.distance} km from you",
userId: "${assigner_ref.output.result.riderId}",
},
},
},
},
],
},
{
type: TaskType.SIMPLE,
name: "rider_race",
inputParameters: {
riders: "${workflow.input.targetRiders}",
},
},
],
outputParameters: {
selectedRider: "${rider_race_ref.output.selectedRider}",
},
});

To notify the candidate riders, we use the DO_WHILE task. On each iteration we simulate notifying a rider that there is a ride they might be interested in, starting with the rider nearest to the package and moving outwards. Finally, a SIMPLE task simulates a rider accepting the ride.
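
Concretely, each loop iteration builds an HTTP request like the one below from assigner_ref's output. This is only an illustration of the shape for the first (nearest) rider; the actual numbers depend on the riders passed into the workflow.

// Illustrative only: the HTTP task's request for iteration 1.
const notifyRequest = {
  uri: "http://dummyjson.com/posts/add",
  method: "POST",
  body: {
    title:
      "Are you available to take a ride of a distance of 12441.284548668005 km from you",
    userId: 15,
  },
};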

For the SIMPLE task, we first need to register its task definition. This tells Conductor that a worker will be handling it. The actual worker also has to be running for the scheduled task to be executed; otherwise, the workflow will sit in a SCHEDULED state, waiting on a task that never gets picked up by a worker.

Setting up Worker

To implement the worker, we create an object of type RunnerArgs. The worker takes a taskDefName, which should match the name of our SIMPLE task (the task definition we registered). You may have multiple workers polling for the same task; whichever one polls for it first gets the job done.

export const riderRespondWorkerRunner = (client: ConductorClient) => {
const firstRidertoRespondWorker: RunnerArgs = {
taskResource: client.taskResource,
worker: {
taskDefName: "rider_race",
execute: async ({ inputData }) => {
const riders = inputData?.riders;
const [aRider] = riders.sort(() => 0.5 - Math.random());
return {
outputData: {
selectedRider: aRider.rider,
},
status: "COMPLETED",
};
},
},
options: {
pollInterval: 10,
domain: undefined,
concurrency: 1,
workerID: "",
},
};
const taskManager = new TaskRunner(firstRidertoRespondWorker);
return taskManager;
};

Workflow

// Having the nearby riders, we want to find one who is willing to take the ride.
// For this, we simulate a POST asking each rider whether they will take the ride.
describe("PickRider", () => {
test("Creates a workflow", async () => {
const client = await clientPromise;

await expect(
client.metadataResource.create(pickRider, true)
).resolves.not.toThrowError();
});
test("Every iteration should have the current driver", async () => {
const client = await clientPromise;
await createRiderRaceDefintion(client);

const runner = riderRespondWorkerRunner(client);
runner.startPolling();

// Our ‘N’ pre-selected riders
const maxCompetingRiders = 5;
const targetRiders = [
{
distance: 12441.284548668005,
rider: {
id: 15,
email: "kminchelle@qq.com",
},
},
{
distance: 16211.662539905119,
rider: {
id: 8,
email: "ggude7@chron.com",
},
},
{
distance: 17435.548525470404,
rider: {
id: 29,
email: "jissetts@hostgator.com",
},
},
{
distance: 17602.325904122146,
rider: {
id: 20,
email: "aeatockj@psu.edu",
},
},
{
distance: 17823.508069312982,
rider: {
id: 3,
email: "rshawe2@51.la",
},
},
{
distance: 17824.39318092907,
rider: {
id: 7,
email: "dpettegre6@columbia.edu",
},
},
{
distance: 23472.94011516013,
rider: {
id: 26,
email: "lgronaverp@cornell.edu",
},
},
];

const workflowExecutor = new WorkflowExecutor(client);

const executionId = await workflowExecutor.startWorkflow({
name: pickRider.name,
input: {
maxCompetingRiders,
targetRiders,
},
version: 1,
});

await new Promise((r) => setTimeout(() => r(true), 2500));
const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);

expect(workflowStatus.status).toEqual("COMPLETED");

// We check our task and select the number of riders we are after.
const doWhileTaskResult = workflowStatus?.tasks?.find(
({ taskType }) => taskType === TaskType.DO_WHILE
);
expect(doWhileTaskResult?.outputData?.iteration).toBe(maxCompetingRiders);
expect(workflowStatus?.output?.selectedRider).toBeTruthy();

runner.stopPolling();
});
});

Baking the Delivery App - Combining the Blocks

Finally, we have all our ingredients ready. Now, let’s bake our delivery app together.

In a nutshell, when we have a client with a package request with the origin and destination points, we need to pick the best rider to deliver the package from the origin to the destination. As a bonus, let’s compute the delivery cost and make it less expensive if our client is paying by card instead of cash.

So, we run the nearByRiders workflow, passing the origin as an input parameter. This gives a list of possible riders, from which one is picked based on “who answers first”. Next, we calculate the distance from the origin to the destination to compute the cost. Finally, the workflow outputs the selected rider and the shipping cost.
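
To make the pricing concrete: the SWITCH task in the workflow below routes to one of two INLINE cost expressions based on paymentMethod. Using the same constants as the workflow, a 5 km delivery would cost:

// Worked example of the pricing used by the INLINE tasks below.
const distance = 5; // km, taken from calculate_package_distance_ref
const cardPrice = distance * 20 + 20; // 120 when paying by card
const defaultPrice = distance * 40 + 20; // 220 for any other payment method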

Workflow

export const deliveryWorkflow = generate({
name: "deliveryWorkflow",
inputParameters: ["origin", "packageDestination", "client", "paymentMethod"],
tasks: [
{
taskReferenceName: "possible_riders_ref",
type: TaskType.SUB_WORKFLOW,
subWorkflowParam: {
version: nearByRiders.version,
name: nearByRiders.name,
},
inputParameters: {
place: "${workflow.input.origin}",
},
},
{
taskReferenceName: "pick_a_rider_ref",
type: TaskType.SUB_WORKFLOW,
subWorkflowParam: {
version: pickRider.version,
name: pickRider.name,
},
inputParameters: {
targetRiders: "${possible_riders_ref.output.possibleRiders}",
maxCompetingRiders: 5,
},
},
{
taskReferenceName: "calculate_package_distance_ref",
type: TaskType.SUB_WORKFLOW,
subWorkflowParam: {
version: calculateDistanceWF.version,
name: calculateDistanceWF.name,
},
inputParameters: {
from: "${workflow.input.origin}",
to: "${workflow.input.packageDestination}",
identity: "commonPackage",
},
},
{
type: TaskType.SWITCH,
name: "compute_total_cost",
evaluatorType: "value-param",
inputParameters: {
value: "${workflow.input.paymentMethod}",
},
expression: "value",
decisionCases: {
card: [
{
type: TaskType.INLINE,
taskReferenceName: "card_price_ref",
inputParameters: {
distance: "${calculate_package_distance_ref.output.distance}",
expression: ($: { distance: number }) =>
function () {
return $.distance * 20 + 20;
},
},
},
{
type: TaskType.SET_VARIABLE,
inputParameters: {
totalPrice: "${card_price_ref.output.result}",
},
},
],
},
defaultCase: [
{
type: TaskType.INLINE,
taskReferenceName: "non_card_price_ref",
inputParameters: {
distance: "${calculate_package_distance_ref.output.distance}",
expression: ($: { distance: number }) =>
function () {
return $.distance * 40 + 20;
},
},
},
{
type: TaskType.SET_VARIABLE,
inputParameters: {
totalPrice: "${non_card_price_ref.output.result}",
},
},
],
},
],
outputParameters: {
rider: "${pick_a_rider_ref.output.selectedRider}",
totalPrice: "${workflow.variables.totalPrice}",
},
});
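
With the sub-workflows registered, the rider_race task definition created, and the worker polling, a delivery can be run end to end by starting deliveryWorkflow with the right input. Here is a minimal sketch, assuming the client, workflows, and worker from the previous sections; the coordinates and client name are placeholders.

// Minimal end-to-end run of the delivery workflow (illustrative values).
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

// Make sure the rider_race worker is polling before we start.
const runner = riderRespondWorkerRunner(client);
runner.startPolling();

const executionId = await workflowExecutor.startWorkflow({
  name: deliveryWorkflow.name,
  version: 1,
  input: {
    origin: { latitude: -34.4810097, longitude: -58.4972602 },
    packageDestination: { latitude: -34.494858, longitude: -58.491168 },
    client: "exampleClient", // placeholder client identifier
    paymentMethod: "card", // takes the cheaper "card" branch of the SWITCH
  },
});

// Give the sub-workflows and the worker some time to complete.
await new Promise((r) => setTimeout(() => r(true), 5000));

const workflowStatus = await client.workflowResource.getExecutionStatus(
  executionId,
  true
);
console.log(workflowStatus?.output?.rider, workflowStatus?.output?.totalPrice);

runner.stopPolling();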

Wrapping Up

And our app is finally ready. Building an app this way resembles the process of building it purely in code, except that here we compose small building blocks into one larger workflow.

Following this article along with Orkes Playground, you can seamlessly visualize the building blocks. You can make further improvements to the application by focusing on a particular block without losing perspective of the application as a whole. You can test out Conductor for free in Orkes Playground, or if you’re looking for a cloud version, you may have a sneak peek at Orkes Cloud.