
· 20 min read
Riza Farheen
James Stuart

What do you do when you’re hungry and have no way to cook? You rely on a food delivery application. Have you ever wondered how the delivery process behind it works? Let me walk you through how Conductor helps orchestrate that process.

In this article, you will learn how to build a delivery workflow using Conductor, an open-source microservice and workflow orchestration framework. Conductor models the delivery process as a workflow composed of individual building blocks. Let’s see this in action!

Delivery Workflow

Consider that we get a request in the delivery app to send a package from an origin to a destination. The application has the details of both the registered clients and riders. It should connect the best-fitting rider to deliver the package. So, the application gets the registered riders list, picks the nearest riders, and lets them compete to win the ride.

Looks simple? That’s where Conductor comes in: you can build your delivery application by connecting small blocks together.

What you need!

  • A list of registered riders.
  • A way to let our riders know they have a possible delivery.
  • A method for our riders to compete or be the first to select the ride.

Building the application

Let’s begin to bake our delivery app. First, we need API calls for processes such as getting the riders list and notifying the riders. We will use DummyJSON, a service that provides fake APIs.

So, in this case, we will use the users API to pull our registered riders, and the posts API to notify a rider about a possible ride.

Since we are creating this workflow as code rather than with the workflow diagram editor, let's start with a test and build our workflow app from scratch. For demonstration purposes, we will use Orkes Playground, a free Conductor platform. However, the process is the same for Netflix Conductor.

Workflow as Code

Project Setup

First, you need to set up a project:

  1. Create an npm project with npm init and install the SDK with npm i @io-orkes/conductor-javascript.
  2. You'll need to add jest and TypeScript support. Copy the jest.config.js and tsconfig.json files into the root folder of your project (a minimal sketch of both files follows this list). Then add the following scripts and devDependencies to your package.json:
"scripts": {
"test": "jest"
},
"devDependencies": {
"@tsconfig/node16": "^1.0.2",
"@types/jest": "^29.0.3",
"@types/node": "^17.0.30",
"@types/node-fetch": "^2.6.1",
"@typescript-eslint/eslint-plugin": "^5.23.0",
"@typescript-eslint/parser": "^5.23.0",
"eslint": "^6.1.0",
"jest": "^28.1.0",
"ts-jest": "^28.0.1",
"ts-node": "^10.7.0",
"typescript": "^4.6.4"
},
  3. Run npm install (or yarn) to fetch the dependencies.
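
For reference, here is a minimal sketch of what those two files could look like, assuming ts-jest as the Jest preset and the @tsconfig/node16 base config from the dependencies above; adjust to your environment.

jest.config.js:

module.exports = {
  preset: "ts-jest", // compile TypeScript tests on the fly
  testEnvironment: "node",
  testTimeout: 30000, // workflow executions can take a few seconds
};

tsconfig.json:

{
  "extends": "@tsconfig/node16/tsconfig.json",
  "compilerOptions": {
    "outDir": "dist"
  },
  "include": ["*.ts"]
}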

So, now you’ve created your project. As we are creating the workflow as code, next, let's create two files: mydelivery.ts and mydelivery.test.ts. By writing the code alongside its tests, you get instant feedback and know exactly what happens at every step.

Creating Our Workflow

Let’s begin creating our workflow. Initially, we need to calculate the distance between the two points, i.e., the rider and the package to be delivered. We leverage this distance to calculate the shipment cost too. So let's create a workflow that can be reused in both situations.

Let the first workflow be calculate_distance that outputs the result of some function. So in our mydelivery.ts, let's update the following code:

import {
  generate,
  TaskType,
  OrkesApiConfig,
} from "@io-orkes/conductor-javascript";

export const playConfig: Partial<OrkesApiConfig> = {
  keyId: "your_key_id",
  keySecret: "your_key_secret",
  serverUrl: "https://play.orkes.io/api",
};

export const calculateDistanceWF = generate({
  name: "calculate_distance",
  inputParameters: ["origin", "destination"],
  tasks: [
    {
      type: TaskType.INLINE,
      name: "calculate_distance",
      inputParameters: {
        expression: "12",
      },
    },
  ],
  outputParameters: {
    distance: "${calculate_distance_ref.output.result}",
    identity: "${workflow.input.identity}", // Some identifier for the call; this will make sense later on
  },
});

Now in our test file, create a test that generates the workflow so we can look at it later on the Playground.

import {
  orkesConductorClient,
  WorkflowExecutor,
} from "@io-orkes/conductor-javascript";
import { calculateDistanceWF, playConfig } from "./mydelivery";

describe("My Delivery Test", () => {
  const clientPromise = orkesConductorClient(playConfig);
  describe("Calculate distance workflow", () => {
    test("Creates a workflow", async () => {
      // const client = new ConductorClient(); // If you are using Netflix Conductor
      const client = await clientPromise;
      const workflowExecutor = new WorkflowExecutor(client);
      await expect(
        workflowExecutor.registerWorkflow(true, calculateDistanceWF)
      ).resolves.not.toThrowError();
      console.log(JSON.stringify(calculateDistanceWF, null, 2));
    });
  });
});

Now, run npm test.

We have just created our first workflow, which simply outputs the result of its task. If you look at the generated JSON, you'll notice some additional attributes beyond the ones we provided as inputs. That's because the generate function fills in default values, which you can overwrite later. You'll also notice that the output is pulled from "${calculate_distance_ref.output.result}" using the generated task reference name: if you don't specify a taskReferenceName, one is generated by appending _ref to the task name. To reference a task or its output, we always use the taskReferenceName. Another thing to notice is the true value passed as the first argument to registerWorkflow. This flag allows the workflow definition to be overwritten, which is required since we will run our tests repeatedly.
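
For reference, the generated definition looks roughly like this trimmed sketch; exact defaults (timeouts, ownerEmail, schema versions, and so on) vary by SDK version:

{
  "name": "calculate_distance",
  "version": 1,
  "inputParameters": ["origin", "destination"],
  "tasks": [
    {
      "name": "calculate_distance",
      "taskReferenceName": "calculate_distance_ref",
      "type": "INLINE",
      "inputParameters": { "expression": "12" }
    }
  ],
  "outputParameters": {
    "distance": "${calculate_distance_ref.output.result}",
    "identity": "${workflow.input.identity}"
  }
}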

Let's now create a test that actually runs the workflow. We pass the origin and destination parameters declared in the workflow definition (its input parameters). The workflow doesn't use them yet, but they become relevant in later steps.

test("Should calculate distance", async () => {
// Pick two random points
const origin = {
latitude: -34.4810097,
longitude: -58.4972602,
};

const destination = {
latitude: -34.4810097,
longitude: -58.491168,
};
// const client = new ConductorClient(); // If you are using Netflix conductor
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);
// Run the workflow passing an origin and a destination
const executionId = await workflowExecutor.startWorkflow({
name: calculateDistanceWF.name,
version: 1,
input: {
origin,
destination,
},
});
const workflowStatus = await workflowExecutor.getWorkflow(executionId, true);

expect(workflowStatus?.status).toEqual("COMPLETED");
// For now we expect the workflow output to be our hardcoded value
expect(workflowStatus?.output?.distance).toBe(12);
});

Now, run npm test, and great, we have our first workflow execution!

Calculating Actual Distance

Next, we need to calculate the actual (or at least approximate) distance between the two points. To get the distance between two points on a sphere, we could use the Haversine formula, but since we don't want the straight-line distance (our riders can't fly :P), we will implement something closer to taxicab geometry.

Calculating distance using an INLINE Task

An INLINE task is useful when the logic is simple enough to be expressed in a short script. It takes input parameters and an expression. Our current calculate_distance workflow takes no context and returns a hard-coded value. Let's modify the inline task to take the origin and destination and calculate the approximate distance.

export const calculateDistanceWF = generate({
  name: "calculate_distance",
  inputParameters: ["origin", "destination"],
  tasks: [
    {
      name: "calculate_distance",
      type: TaskType.INLINE,
      inputParameters: {
        fromLatitude: "${workflow.input.from.latitude}",
        fromLongitude: "${workflow.input.from.longitude}",
        toLatitude: "${workflow.input.to.latitude}",
        toLongitude: "${workflow.input.to.longitude}",
        expression: function ($: any) {
          return function () {
            /**
             * Converts from degrees to radians
             */
            function degreesToRadians(degrees: any) {
              return (degrees * Math.PI) / 180;
            }
            /**
             * Returns the great-circle distance for a given angular delta
             */
            function haversineManhattan(elem: any) {
              var EARTH_RADIUS = 6371;
              var a = Math.pow(Math.sin(elem / 2), 2); // sin^2(delta/2)
              var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); // 2 * atan2(sqrt(a), sqrt(1-a))
              return EARTH_RADIUS * c;
            }

            var deltaLatitude = Math.abs(
              degreesToRadians($.fromLatitude) - degreesToRadians($.toLatitude)
            );
            var deltaLongitude = Math.abs(
              degreesToRadians($.fromLongitude) -
                degreesToRadians($.toLongitude)
            );

            // Taxicab-style: add the latitude and longitude distances
            var latitudeDistance = haversineManhattan(deltaLatitude);
            var longitudeDistance = haversineManhattan(deltaLongitude);

            return Math.abs(latitudeDistance) + Math.abs(longitudeDistance);
          };
        },
      },
    },
  ],
  outputParameters: {
    distance: "${calculate_distance_ref.output.result}",
  },
});

If we run the test now, it will fail, because the result is no longer the hard-coded 12 that the previous version of calculate_distance returned.

But in accordance with Red-Green-Refactor, we can make the test pass while exercising the real taxicab-style calculation: pick the origin and destination to be the same point, so the expected distance is 0. The assertion also becomes .toEqual(0), since the value comes back through the workflow's output object. Let's fix that in the test, as sketched below.
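
Here is a minimal sketch of the adjusted test, reusing the clientPromise from before. Note that the inline task reads workflow.input.from and workflow.input.to, so we pass from and to:

test("Should return 0 when origin and destination match", async () => {
  // Same point for origin and destination, so the distance must be 0
  const from = { latitude: -34.4810097, longitude: -58.4972602 };
  const to = { ...from };

  const client = await clientPromise;
  const workflowExecutor = new WorkflowExecutor(client);
  const executionId = await workflowExecutor.startWorkflow({
    name: calculateDistanceWF.name,
    version: 1,
    input: { from, to },
  });
  const workflowStatus = await workflowExecutor.getWorkflow(executionId, true);

  expect(workflowStatus?.status).toEqual("COMPLETED");
  expect(workflowStatus?.output?.distance).toEqual(0);
});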

Note: A key takeaway from the above: the expression is written as real JavaScript in the editor, not as a string. However, it cannot close over variables from the rest of the file, and the returned function has to be written in ES5; otherwise, the tests will fail.

Running the test now registers a new workflow overwriting the old one.

Finding the Best Rider

Now that we have the calculate_distance workflow, we can think of it as a function that can later be invoked from a different project or file.

Now let's create workflow number two, findNearByRiders, which will hit a microservice that pulls the registered riders list.

Hitting the Microservice

We can use the HTTP task to call something as simple as an HTTP microservice. The HTTP task takes some input parameters and hits an endpoint with our configuration; it is similar to cURL or Postman. We will use DummyJSON, which returns a list of users, each with an address. For our purposes, consider this address the rider's last reported location.
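
For reference, the users endpoint returns a body shaped roughly like this, trimmed to the fields we use (values illustrative):

{
  "users": [
    {
      "id": 15,
      "email": "kminchelle@qq.com",
      "address": {
        "coordinates": { "lat": -77.16213, "lng": -92.084824 }
      }
    }
  ],
  "total": 100,
  "skip": 0,
  "limit": 30
}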

export const nearByRiders = generate({
  name: "findNearByRiders",
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${get_users_ref.output.response.body.users}",
  },
});

Our findNearByRiders workflow hits an endpoint and returns the list of all available riders.

Let's write the test.

describe("NearbyRiders", () => {
// As before, we create the workflow.
test("Creates a workflow", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

await expect(
workflowExecutor.registerWorkflow(true, nearByRiders)
).resolves.not.toThrowError();
});

test("Should return all users with latest reported address", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);
const executionId = await workflowExecutor.startWorkflow({
name: nearByRiders.name,
input: {
place: {
latitude: -34.4810097,
longitude: -58.4972602,
},
},
version: 1,
});
//Let's wait for the response...
await new Promise((r) => setTimeout(() => r(true), 2000));
const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);
expect(workflowStatus.status).toBe("COMPLETED");
expect(workflowStatus?.output?.possibleRiders.length).toBeGreaterThan(0);
console.log("Riders", JSON.stringify(workflowStatus?.output, null, 2));
});
});

If we run our test, it should pass since the number of users is around 30. Looking at the printed output, you can see that the whole structure is being returned by the endpoint.

Our workflow is incomplete because it only returns the list of every possible rider. We still need the distance between each rider and the package. For this, we must run our previous calculate_distance workflow for every rider on the fetched list. Let's first prepare the data to be passed to the next workflow. Here, we use the JSON_JQ_TRANSFORM task, which runs a JQ query over JSON data.

JSON_JQ_TRANSFORM Task

Let's add the JQ task.

export const nearByRiders = generate({
  name: "findNearByRiders",
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "summarize",
      inputParameters: {
        users: "${get_users_ref.output.response.body.users}",
        queryExpression:
          ".users | map({identity:{id,email}, to:{latitude:.address.coordinates.lat, longitude:.address.coordinates.lng}} + {from:{latitude:${workflow.input.place.latitude},longitude:${workflow.input.place.longitude}}})",
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${get_users_ref.output.response.body.users}",
  },
});

In this task definition, the users input maps to the output of the HTTP task, and the JQ expression extracts each rider's address. The expected result has the structure {identity:{id,email}, to:{latitude,longitude}, from:{latitude,longitude}}.
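
A single item of the summarized array would then look something like this (values illustrative):

{
  "identity": { "id": 15, "email": "kminchelle@qq.com" },
  "to": { "latitude": -77.16213, "longitude": -92.084824 },
  "from": { "latitude": -34.4810097, "longitude": -58.4972602 }
}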

Dot Map method

At this point, we have an array with all possible riders and a workflow to calculate the distance between two points. We must combine these to calculate the distance between the package and each rider so that the nearby riders can be chosen. In JavaScript, when we want to transform every item of an array, we usually reach for the map method, which applies a function to each element.
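
As a plain-TypeScript analogy (the calculateDistance function here is a hypothetical stand-in for our workflow):

type Point = { latitude: number; longitude: number };
type RiderStop = { from: Point; to: Point };

// Stand-in for the real calculate_distance workflow
const calculateDistance = (from: Point, to: Point): number =>
  Math.abs(from.latitude - to.latitude) + Math.abs(from.longitude - to.longitude);

const riders: RiderStop[] = [
  { from: { latitude: 0, longitude: 0 }, to: { latitude: 1, longitude: 1 } },
];

// map applies the function to every item; workflowDotMap below does the same,
// but with a sub-workflow standing in for the function
const withDistance = riders.map((r) => calculateDistance(r.from, r.to)); // [2]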

Since we need to run calculate_distance for each rider, let's create a "dot map" workflow. It takes the array of riders as an input parameter, plus the name of the workflow to run on each item (here, calculate_distance).

Note that this new workflow will work for every array and workflow ID provided and is not limited to the riders and the calculate_distance workflow.

describe("Mapper Test", () => {
test("Creates a workflow", async () => {
const client = await clientPromise;
await expect(
client.metadataResource.create(workflowDotMap, true)
).resolves.not.toThrowError();
});

test("Gets existing workflow", async () => {
const client = await clientPromise;
const wf = await client.metadataResource.get(workflowDotMap.name);
expect(wf.name).toEqual(workflowDotMap.name);
expect(wf.version).toEqual(workflowDotMap.version);
});

test("Can map over an array using a workflow", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

const from = {
latitude: -34.4810097,
longitude: -58.4972602,
};
const to = {
latitude: -34.494858,
longitude: -58.491168,
};

const executionId = await workflowExecutor.startWorkflow({
name: workflowDotMap.name,
version: 1,
input: {
inputArray: [{ from, to, identity: "js@js.com" }],
mapperWorkflowId: "calculate_distance",
},
});

await new Promise((r) => setTimeout(() => r(true), 1300));

const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);
expect(workflowStatus?.status).toBe("COMPLETED");
expect(workflowStatus?.output?.outputArray).toEqual(
expect.arrayContaining([
expect.objectContaining({
distance: 2.2172824347556963,
}),
])
);
});
});

Workflow

export const workflowDotMap = generate({
  name: "workflowDotMap",
  inputParameters: ["inputArray", "mapperWorkflowId"],
  tasks: [
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "count",
      taskReferenceName: "count_ref",
      inputParameters: {
        input: "${workflow.input.inputArray}",
        queryExpression: ".[] | length",
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "dyn_task_builder",
      taskReferenceName: "dyn_task_builder_ref",
      inputParameters: {
        input: {},
        queryExpression:
          'reduce range(0,${count_ref.output.result}) as $f (.; .dynamicTasks[$f].subWorkflowParam.name = "${workflow.input.mapperWorkflowId}" | .dynamicTasks[$f].taskReferenceName = "mapperWorkflow_wf_ref_\\($f)" | .dynamicTasks[$f].type = "SUB_WORKFLOW")',
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "dyn_input_params_builder",
      taskReferenceName: "dyn_input_params_builder_ref",
      inputParameters: {
        taskList: "${dyn_task_builder_ref.output.result}",
        input: "${workflow.input.inputArray}",
        queryExpression:
          'reduce range(0,${count_ref.output.result}) as $f (.; .dynamicTasksInput."mapperWorkflow_wf_ref_\\($f)" = .input[$f])',
      },
    },
    {
      type: TaskType.FORK_JOIN_DYNAMIC,
      inputParameters: {
        dynamicTasks: "${dyn_task_builder_ref.output.result.dynamicTasks}",
        dynamicTasksInput:
          "${dyn_input_params_builder_ref.output.result.dynamicTasksInput}",
      },
    },
    {
      type: TaskType.JOIN,
      name: "join",
      taskReferenceName: "join_ref",
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "to_array",
      inputParameters: {
        objValues: "${join_ref.output}",
        queryExpression: ".objValues | to_entries | map(.value)",
      },
    },
  ],
  outputParameters: {
    outputArray: "${to_array_ref.output.result}",
  },
});
  • The "count" task computes the number of items in the input array.
  • At "dyn_task_builder", we create a SUB_WORKFLOW task template for every item within the array (see the illustrative output after this list).
  • At "dyn_input_params_builder", we prepare the input parameters to pass to each sub-workflow.
  • Using FORK_JOIN_DYNAMIC, we spawn one task per item from the previously created template and pass it the corresponding parameters. After the JOIN, a JSON_JQ_TRANSFORM task extracts the results and returns an array with the transformations.
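
For a two-item input array, dyn_task_builder's output would look something like this (illustrative):

{
  "dynamicTasks": [
    {
      "subWorkflowParam": { "name": "calculate_distance" },
      "taskReferenceName": "mapperWorkflow_wf_ref_0",
      "type": "SUB_WORKFLOW"
    },
    {
      "subWorkflowParam": { "name": "calculate_distance" },
      "taskReferenceName": "mapperWorkflow_wf_ref_1",
      "type": "SUB_WORKFLOW"
    }
  ]
}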

Calculating distance between package and riders

Given that we now have the origin and destination points, let's modify the NearbyRiders workflow so that, using each rider's last reported location, we get the distance between the package and the rider. To achieve this, we pull the riders from the microservice, calculate each rider's distance to the package, and sort them by that distance.

describe("NearbyRiders", () => {
// As before, we create the workflow.
test("Creates a workflow", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);

await expect(
workflowExecutor.registerWorkflow(true, nearByRiders)
).resolves.not.toThrowError();
});

// First, let's test that the API responds to all the users.
test("Should return all users with latest reported address", async () => {
const client = await clientPromise;
const workflowExecutor = new WorkflowExecutor(client);
const executionId = await workflowExecutor.startWorkflow({
name: nearByRiders.name,
input: {
place: {
latitude: -34.4810097,
longitude: -58.4972602,
},
},
version: 1,
});
// Let’s wait for the response...
await new Promise((r) => setTimeout(() => r(true), 2000));
const workflowStatus = await client.workflowResource.getExecutionStatus(
executionId,
true
);
expect(workflowStatus.status).toBe("COMPLETED");
expect(workflowStatus?.output?.possibleRiders.length).toBeGreaterThan(0);
});

// So now we need to specify input parameters, else we won't know the distance to the package
test("User object should contain distance to package", async () => {
const client = await clientPromise;

const workflowExecutor = new WorkflowExecutor(client);

const executionId = await workflowExecutor.startWorkflow({
name: nearByRiders.name,
input: {
place: {
latitude: -34.4810097,
longitude: -58.4972602,
},
},
version: 1,
});
// Let’s wait for the response...
await new Promise((r) => setTimeout(() => r(true), 2000));

const nearbyRidersWfResult =
await client.workflowResource.getExecutionStatus(executionId, true);

expect(nearbyRidersWfResult.status).toBe("COMPLETED");
nearbyRidersWfResult.output?.possibleRiders.forEach((re: any) => {
expect(re).toHaveProperty("distance");
expect(re).toHaveProperty("rider");
});
});
});

Workflow

export const nearByRiders = generate({
  name: "findNearByRiders",
  inputParameters: ["place"],
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "summarize",
      inputParameters: {
        users: "${get_users_ref.output.response.body.users}",
        queryExpression:
          ".users | map({identity:{id,email}, to:{latitude:.address.coordinates.lat, longitude:.address.coordinates.lng}} + {from:{latitude:${workflow.input.place.latitude},longitude:${workflow.input.place.longitude}}})",
      },
    },
    {
      type: TaskType.SUB_WORKFLOW,
      name: "distance_to_riders",
      subWorkflowParam: {
        name: "workflowDotMap",
        version: 1,
      },
      inputParameters: {
        inputArray: "${summarize_ref.output.result}",
        mapperWorkflowId: "calculate_distance",
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "riders_picker",
      taskReferenceName: "riders_picker_ref",
      inputParameters: {
        ridersWithDistance: "${distance_to_riders_ref.output.outputArray}",
        queryExpression:
          ".ridersWithDistance | map({distance: .distance, rider: .identity}) | sort_by(.distance)",
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${riders_picker_ref.output.result}",
  },
});

This will give us a list of riders with their distance to the package, sorted by distance from the package.
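
For example, the output is shaped like this (distances taken from the test data used below):

{
  "possibleRiders": [
    { "distance": 12441.284548668005, "rider": { "id": 15, "email": "kminchelle@qq.com" } },
    { "distance": 16211.662539905119, "rider": { "id": 8, "email": "ggude7@chron.com" } }
  ]
}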

Picking a Rider

Now we have all the required data, such as package origin/destination, riders, and their distance from the package.

Next, we’ll pre-select N riders, notify them of the possible ride, and wait for one of them to pick it up. For this last part, we will create a worker that randomly selects one.

import { ConductorClient } from "@io-orkes/conductor-javascript";

export const createRiderRaceDefinition = (client: ConductorClient) =>
  client.metadataResource.registerTaskDef([
    {
      name: "rider_race",
      description: "Rider race",
      retryCount: 3,
      timeoutSeconds: 3600,
      timeoutPolicy: "TIME_OUT_WF",
      retryLogic: "FIXED",
      retryDelaySeconds: 60,
      responseTimeoutSeconds: 600,
      rateLimitPerFrequency: 0,
      rateLimitFrequencyInSeconds: 1,
      ownerEmail: "youremail@example.com",
      pollTimeoutSeconds: 3600,
    },
  ]);

export const pickRider = generate({
  name: "pickRider",
  inputParameters: ["targetRiders", "maxCompetingRiders"],
  tasks: [
    {
      name: "do_while",
      taskReferenceName: "do_while_ref",
      type: TaskType.DO_WHILE,
      inputParameters: {
        amountOfCompetingRiders: "${workflow.input.maxCompetingRiders}",
        riders: "${workflow.input.targetRiders}",
      },
      loopCondition: "$.do_while_ref['iteration'] < $.amountOfCompetingRiders",
      loopOver: [
        {
          taskReferenceName: "assigner_ref",
          type: TaskType.INLINE,
          inputParameters: {
            riders: "${workflow.input.targetRiders}",
            currentIteration: "${do_while_ref.output.iteration}",
            expression: ($: {
              riders: {
                distance: number;
                rider: { id: number; email: string };
              }[];
              currentIteration: number;
            }) =>
              function () {
                var currentRider = $.riders[$.currentIteration - 1];
                return {
                  distance: currentRider.distance,
                  riderId: currentRider.rider.id,
                  riderEmail: currentRider.rider.email,
                };
              },
          },
        },
        {
          type: TaskType.HTTP,
          name: "notify_riders_of_ride",
          taskReferenceName: "notify_riders_of_ride",
          inputParameters: {
            http_request: {
              uri: "http://dummyjson.com/posts/add",
              method: "POST",
              body: {
                title:
                  "Are you available to take a ride of a distance of ${assigner_ref.output.result.distance} km from you",
                userId: "${assigner_ref.output.result.riderId}",
              },
            },
          },
        },
      ],
    },
    {
      type: TaskType.SIMPLE,
      name: "rider_race",
      inputParameters: {
        riders: "${workflow.input.targetRiders}",
      },
    },
  ],
  outputParameters: {
    selectedRider: "${rider_race_ref.output.selectedRider}",
  },
});

To select and notify the riders, we use the DO_WHILE task: on each iteration, we simulate notifying the next rider of a ride they might be interested in, going from the rider nearest the package to the farthest. Finally, with a SIMPLE task, we simulate a rider accepting the ride.

For this, we need to register the task definition first. By doing so, we let Conductor know that a worker will handle this SIMPLE task. The actual worker must also be running for scheduled tasks to execute; otherwise, the workflow will sit in a SCHEDULED state, waiting for a worker that never picks the task up.

Setting up Worker

To implement the worker, we create an object of type RunnerArgs. The worker takes a taskDefName, which must match our SIMPLE task's name. You may have multiple workers polling for the same task; the first one to poll gets the job.

import {
  ConductorClient,
  RunnerArgs,
  TaskRunner,
} from "@io-orkes/conductor-javascript";

export const riderRespondWorkerRunner = (client: ConductorClient) => {
  const firstRidertoRespondWorker: RunnerArgs = {
    taskResource: client.taskResource,
    worker: {
      taskDefName: "rider_race",
      execute: async ({ inputData }) => {
        const riders = inputData?.riders;
        // Shuffle the riders and take the first one as the "winner"
        const [aRider] = riders.sort(() => 0.5 - Math.random());
        return {
          outputData: {
            selectedRider: aRider.rider,
          },
          status: "COMPLETED",
        };
      },
    },
    options: {
      pollInterval: 10,
      domain: undefined,
      concurrency: 1,
      workerID: "",
    },
  };
  const taskManager = new TaskRunner(firstRidertoRespondWorker);
  return taskManager;
};

Test

// Having the nearby riders, we want to filter out those willing to take the ride.
// For this, we will simulate a POST request asking each rider if they are willing to take the ride.
describe("PickRider", () => {
  test("Creates a workflow", async () => {
    const client = await clientPromise;

    await expect(
      client.metadataResource.create(pickRider, true)
    ).resolves.not.toThrowError();
  });
  test("Every iteration should have the current rider", async () => {
    const client = await clientPromise;
    await createRiderRaceDefinition(client);

    const runner = riderRespondWorkerRunner(client);
    runner.startPolling();

    // Our 'N' pre-selected riders
    const maxCompetingRiders = 5;
    const targetRiders = [
      {
        distance: 12441.284548668005,
        rider: {
          id: 15,
          email: "kminchelle@qq.com",
        },
      },
      {
        distance: 16211.662539905119,
        rider: {
          id: 8,
          email: "ggude7@chron.com",
        },
      },
      {
        distance: 17435.548525470404,
        rider: {
          id: 29,
          email: "jissetts@hostgator.com",
        },
      },
      {
        distance: 17602.325904122146,
        rider: {
          id: 20,
          email: "aeatockj@psu.edu",
        },
      },
      {
        distance: 17823.508069312982,
        rider: {
          id: 3,
          email: "rshawe2@51.la",
        },
      },
      {
        distance: 17824.39318092907,
        rider: {
          id: 7,
          email: "dpettegre6@columbia.edu",
        },
      },
      {
        distance: 23472.94011516013,
        rider: {
          id: 26,
          email: "lgronaverp@cornell.edu",
        },
      },
    ];

    const workflowExecutor = new WorkflowExecutor(client);

    const executionId = await workflowExecutor.startWorkflow({
      name: pickRider.name,
      input: {
        maxCompetingRiders,
        targetRiders,
      },
      version: 1,
    });

    await new Promise((r) => setTimeout(() => r(true), 2500));
    const workflowStatus = await client.workflowResource.getExecutionStatus(
      executionId,
      true
    );

    expect(workflowStatus.status).toEqual("COMPLETED");

    // Check that the loop iterated over exactly the number of riders we are after
    const doWhileTaskResult = workflowStatus?.tasks?.find(
      ({ taskType }) => taskType === TaskType.DO_WHILE
    );
    expect(doWhileTaskResult?.outputData?.iteration).toBe(maxCompetingRiders);
    expect(workflowStatus?.output?.selectedRider).toBeTruthy();

    runner.stopPolling();
  });
});

Baking the Delivery App - Combining the Blocks

Finally, we have all our ingredients ready. Now, let’s bake our delivery app together.

In a nutshell, when a client requests a package delivery with origin and destination points, we need to pick the best rider to carry the package from the origin to the destination. As a bonus, let’s compute the delivery cost and make it cheaper when the client pays by card instead of cash.

So, we run the findNearByRiders workflow, passing the origin as an input parameter. This gives us a list of possible riders, from which one is picked based on who answers first. Next, we calculate the distance from the origin to the destination to compute the cost. The workflow then outputs the selected rider and the shipping cost.

Workflow

export const deliveryWorkflow = generate({
  name: "deliveryWorkflow",
  inputParameters: ["origin", "packageDestination", "client", "paymentMethod"],
  tasks: [
    {
      taskReferenceName: "possible_riders_ref",
      type: TaskType.SUB_WORKFLOW,
      subWorkflowParam: {
        version: nearByRiders.version,
        name: nearByRiders.name,
      },
      inputParameters: {
        place: "${workflow.input.origin}",
      },
    },
    {
      taskReferenceName: "pick_a_rider_ref",
      type: TaskType.SUB_WORKFLOW,
      subWorkflowParam: {
        version: pickRider.version,
        name: pickRider.name,
      },
      inputParameters: {
        targetRiders: "${possible_riders_ref.output.possibleRiders}",
        maxCompetingRiders: 5,
      },
    },
    {
      taskReferenceName: "calculate_package_distance_ref",
      type: TaskType.SUB_WORKFLOW,
      subWorkflowParam: {
        version: calculateDistanceWF.version,
        name: calculateDistanceWF.name,
      },
      inputParameters: {
        from: "${workflow.input.origin}",
        to: "${workflow.input.packageDestination}",
        identity: "commonPackage",
      },
    },
    {
      type: TaskType.SWITCH,
      name: "compute_total_cost",
      evaluatorType: "value-param",
      inputParameters: {
        value: "${workflow.input.paymentMethod}",
      },
      expression: "value",
      decisionCases: {
        card: [
          {
            type: TaskType.INLINE,
            taskReferenceName: "card_price_ref",
            inputParameters: {
              distance: "${calculate_package_distance_ref.output.distance}",
              expression: ($: { distance: number }) =>
                function () {
                  return $.distance * 20 + 20;
                },
            },
          },
          {
            type: TaskType.SET_VARIABLE,
            inputParameters: {
              totalPrice: "${card_price_ref.output.result}",
            },
          },
        ],
      },
      defaultCase: [
        {
          type: TaskType.INLINE,
          taskReferenceName: "non_card_price_ref",
          inputParameters: {
            distance: "${calculate_package_distance_ref.output.distance}",
            expression: ($: { distance: number }) =>
              function () {
                return $.distance * 40 + 20;
              },
          },
        },
        {
          type: TaskType.SET_VARIABLE,
          inputParameters: {
            totalPrice: "${non_card_price_ref.output.result}",
          },
        },
      ],
    },
  ],
  outputParameters: {
    rider: "${pick_a_rider_ref.output.selectedRider}",
    totalPrice: "${workflow.variables.totalPrice}",
  },
});
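
To see everything working together, a final test could register the delivery workflow, start the worker, and run an execution end to end. Here is a sketch, assuming the same clientPromise as before and that the earlier workflows and the rider_race task have already been registered:

describe("DeliveryWorkflow", () => {
  test("Delivers a package end to end", async () => {
    const client = await clientPromise;
    const workflowExecutor = new WorkflowExecutor(client);
    await workflowExecutor.registerWorkflow(true, deliveryWorkflow);

    // The rider_race SIMPLE task needs a live worker to complete
    const runner = riderRespondWorkerRunner(client);
    runner.startPolling();

    const executionId = await workflowExecutor.startWorkflow({
      name: deliveryWorkflow.name,
      version: 1,
      input: {
        origin: { latitude: -34.4810097, longitude: -58.4972602 },
        packageDestination: { latitude: -34.494858, longitude: -58.491168 },
        client: "aClientId", // hypothetical client identifier
        paymentMethod: "card",
      },
    });

    // Give the sub-workflows and the worker time to finish
    await new Promise((r) => setTimeout(() => r(true), 10000));

    const workflowStatus = await client.workflowResource.getExecutionStatus(
      executionId,
      true
    );
    expect(workflowStatus.status).toEqual("COMPLETED");
    expect(workflowStatus?.output?.rider).toBeTruthy();
    expect(workflowStatus?.output?.totalPrice).toBeGreaterThan(0);

    runner.stopPolling();
  });
});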

Wrapping Up

And our app is finally ready. Building an app this way resembles ordinary application development, except that here we assemble small building blocks into one larger workflow.

Following this article along with Orkes Playground, you can seamlessly visualize the building blocks. You can improve the application further by focusing on a particular block without losing sight of the application as a whole. You can test out Conductor for free in Orkes Playground, or if you’re looking for a cloud version, you may have a sneak peek at Orkes Cloud.

· 4 min read
Doug Sillars

We are really excited to announce the latest feature in Orkes' cloud-hosted version of Netflix Conductor. It is now no longer a secret - we support the use of secrets in your workflow definitions! Now you can be certain that the secret keys, tokens, and values you use in your workflows are secure!

· 8 min read
Johannes Koch

Know Your Customer (KYC) workflows are really important for banks and financial services as well as other industries. In the banking industry in most countries, having a KYC workflow is enforced by the regulators that provide the banking license—the banks are required to implement a KYC workflow and a risk-based approach to fight money laundering.

In this article, you will learn about KYC use cases and workflows, including their requirements and distinguishing features. You will also learn about using Conductor, an open source microservice and workflow orchestration framework, and the Orkes Conductor Playground (as a Software as a Service option to host Conductor workflow) to build and test your own KYC workflow within minutes! You will build an example workflow in Conductor that you can easily run on the Orkes Conductor Cloud.

· 10 min read
Doug Sillars

The idea of reduce, reuse and recycle reverberates around the world as a conservation technique - if we use fewer materials, and reuse or recycle what we are already using, we lower our burden on the earth and its ecosystem.

As developers, we love the idea of reducing, reusing and recycling code. Just look at the prevalent use of StackOverflow, and the huge use of open source and libraries - if someone else has built it well - why not recycle the code and reuse it?

In this post, we'll apply the 3 R's of reduce, reuse and recycle to Conductor workflows - helping us create workflows that are compact, easier to follow, and that complete the desired task. Through this simplification, we'll also move from a workflow hardcoded to one specific task to one that is more easily adapted to other similar uses - making the workflow more useful to the organization.

· 8 min read
Cameron Pavey

The microservice architecture pattern has been steadily gaining popularity in recent years. This architecture decomposes larger applications into smaller, more easily managed components.

While this can eliminate some of the challenges of working with large monolithic applications, breaking applications down into multiple decoupled pieces also presents some new challenges, such as determining how the microservices will communicate with each other.

This article compares two different approaches that offer solutions to this problem. These approaches are workflow orchestration and workflow choreography. While these concepts are similar in some regards, there are also key differences. This article highlights these differences by comparing the two concepts using the following criteria:

  • Definition: How is each concept defined?
  • Scalability: How well does each approach scale as applications increase in size and scope?
  • Communication: How do microservices communicate and transact data under each approach?
  • Strengths: What are the benefits afforded by each approach?
  • Limitations: What are the limitations of each approach?
  • Tools: What tools, if any, are there to help you facilitate each approach?

Definition

Before delving into the specific differences between these two approaches, it is good to have a high-level understanding of the definitions and goals of each.

Workflow orchestration describes an approach in which a single, centralized service—commonly known as the “orchestrator”—is responsible for invoking other services and handling and combining their responses to execute a composite business workflow.

In this approach, the orchestrator is aware of the big picture and the role played by each service. However, the services are not aware of anything beyond their interactions with the orchestrator.

Workflow orchestration

On the other hand, workflow choreography is a decentralized approach in which each service is responsible for invoking and responding to adjacent services.

This decentralization means that each service is aware of a small piece of the big picture, but only those parts in which the service plays an active role. The services are otherwise unaware of their overall position and relevance concerning the business workflow under execution.

Workflow choreography

Scalability

One of the key benefits of decomposing a system into microservices is that it enables better scalability. Whether your microservices are running in containers or dedicated virtual machines, there’s almost always a way to scale the number of instances of a given microservice up or down to meet demand at any given time.

With this in mind, it’s essential to consider the potential impact on scalability when it comes to either orchestration or choreography.

One immediate concern is whether the scalability of the services themselves is affected. In both approaches, the services can be abstracted away behind load balancers, such as those offered by AWS, or the load balancing functionality in Kubernetes.

Behind this abstraction, individual services can theoretically scale independently of any other concerns. In light of this, the next consideration is whether the orchestration and choreography patterns are scalable.

When considering orchestration, you need to account for a centralized component. This component—the orchestrator—will vary depending on your implementation, but one example is Netflix Conductor, an open source workflow orchestration platform.

Conductor is inherently scalable in this instance, claiming to support workloads “from a single workflow to millions of concurrent processes,” which would suggest that orchestration can be entirely scalable; that said, the degree to which this is the case will be somewhat affected by whichever tool is used to fill the role of orchestrator.

On the other hand, choreography has fewer considerations when it comes to scalability. The entire system should inherit this scalability as long as the services themselves are scalable, along with any other “connective pieces,” such as message brokers.

Communication

How the services communicate with each other is another key consideration when differentiating between orchestration and choreography. While the choice between these two approaches doesn’t necessarily dictate which mechanisms your services can use to communicate, it does help inform the specifics of how you would use these mechanisms in a given scenario.

Firstly, in orchestration, as you know, a central process is responsible for when and how services are invoked. In the case of a synchronous system where the orchestrator makes HTTP calls to services in series, the communication might look something like the following diagram.

Synchronous orchestration
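
In code, a minimal synchronous orchestrator might look like this sketch (the service URLs and payload handling are hypothetical):

// Hypothetical service endpoints; real systems would discover or configure these.
const SERVICES = [
  "http://service-a.internal/process",
  "http://service-b.internal/process",
  "http://service-c.internal/process",
];

// The orchestrator calls each service in series, feeding each response into the next call.
async function runWorkflow(initialPayload: unknown): Promise<unknown> {
  let payload = initialPayload;
  for (const url of SERVICES) {
    const response = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
    if (!response.ok) {
      // The orchestrator sees every failure, so it can retry, re-queue, or log centrally.
      throw new Error(`Service ${url} failed with status ${response.status}`);
    }
    payload = await response.json();
  }
  return payload;
}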

Alternatively, you might wish to take an asynchronous approach, in which a message broker is used to store the information about jobs that the services must complete. In this case, your communication would look something like the following diagram.

Asynchronous orchestration

The orchestrator is now responsible for reading messages pushed by individual services and pushing messages so that other individual services can act on them.

In contrast, in workflow choreography, there is no central orchestrator and, thus, no central process that decides how services should be invoked. A given service may receive a request and act upon it, directly invoking whatever other services it needs. In a synchronous approach, this might look something like the following diagram.

Synchronous choreography

As you can see, each service is responsible for invoking and responding to any adjacent services as needed. This behavior is also true for asynchronous communication, with the main difference being the inclusion of a message broker instead of direct HTTP calls.

Asynchronous choreography

In this asynchronous approach to workflow choreography, each service subscribes to and publishes specific message types directly, rather than an orchestrator being responsible for mediating communication between services.
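
As a sketch of the choreographed equivalent (the Broker interface here is illustrative, not a specific library):

// Illustrative broker interface; real systems would use Kafka, RabbitMQ, etc.
interface Broker {
  subscribe(topic: string, handler: (message: unknown) => Promise<void>): void;
  publish(topic: string, message: unknown): Promise<void>;
}

// Hypothetical business logic for a payment service.
async function takePayment(order: unknown): Promise<unknown> {
  return { order, paid: true };
}

// Each service knows only its own input and output topics;
// no central process decides what happens next.
function startPaymentService(broker: Broker): void {
  broker.subscribe("order.created", async (order) => {
    const payment = await takePayment(order);
    await broker.publish("payment.completed", payment);
  });
}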

Strengths

As with most architectural patterns, each approach has strengths and limitations. The orchestration pattern reduces point-to-point communication between services by shifting the contextual awareness of the workflow to the orchestrator.

With this awareness, the orchestrator can be more resilient when individual services fail. Suppose a given service fails to respond as expected. In that case, the orchestrator can elegantly handle the error in several ways, whether by retrying immediately, re-queuing the task for later, or even just logging information about the error in greater detail than would otherwise be possible.

Workflow choreography also offers some benefits. Because each service is only concerned with other adjacent services and not with the overall shape of the system, it can be somewhat easier to add, change, and remove individual services frequently without disrupting other parts of the system.

Eliminating the orchestrator from your architecture also removes a potential bottleneck or point of failure. Choreography is also typically well-aligned with the serverless architecture pattern, as it supports scalable, short-lived services without the need for a long-running orchestrator.

Limitations

There are some limitations to each approach that need to be considered when comparing orchestration and choreography.

In orchestration, you need to account for a potential single point of failure, which is the orchestrator. If the orchestrator suffers from degraded performance or an outage, the entire system will be affected, even if the other microservices are still operational.

Because of this, it’s important to ensure that the orchestrator has redundancy and failover capabilities where possible. Similarly, having an orchestrator means that all of your services are tightly coupled to that orchestrator when it comes to execution.

On the other hand, when using choreography, rather than having a single point of failure, responsibility for the system’s resilience is now distributed. Any given service could fail at any time, and without a centralized orchestrator, recovery and diagnostics can be a lot harder.

In some cases, it may be possible to push a job to a queue to be retried, but in many cases, it might be necessary to abort the workflow and log as much information as possible. Because choreographed workflows lack a holistic context, the breadth of information you can log at this stage is typically somewhat diminished.

Tools

Workflow orchestration and choreography are both architectural patterns and, as such, can be implemented in many ways. Orchestration, in particular, has the added requirement of the orchestrator itself. There are numerous orchestration tools that can fulfill this role, such as Netflix Conductor and the fully managed, cloud-based version of Conductor, Orkes.

On the choreography side, there aren’t necessarily any specific tools, as choreography doesn’t require any specialized components like an orchestrator. Instead, you would do well to ensure that all of your services communicate over clearly defined, well-known APIs and that you have a robust logging and error management solution in place, such as those offered by Sentry or Datadog.

Both approaches still rely heavily on individual microservices, so tools and techniques that make microservices easier to manage could be beneficial, regardless of the approach you decide to take. These include things like load balancers and container orchestration (not to be confused with workflow orchestration) tools like Kubernetes.

Wrapping Up

This article explained the key differences between workflow orchestration and workflow choreography. You’ve seen how these two approaches differ and where they’re similar. The strengths and weaknesses of each have been touched upon, as well as some tools you can consider to help implement either approach.

Both approaches are technically valid and can work for your solution if implemented correctly. If you’re interested in learning more about orchestration, consider Orkes, a fully managed, cloud-based version of Netflix Conductor.

· 8 min read
Mohammed Osman

Businesses must be able to provide high-quality, innovative services to clients quickly in order to meet market demand. That can be difficult if an organization’s internal architecture doesn’t offer the needed agility and speed. The tightly coupled nature of monolithic architecture can block an IT team’s ability to make changes, separate team responsibilities, and perform frequent deployments. Microservices can provide a better alternative.

In microservices architecture, an application is built as a collection of separate, independently deployable services that are loosely coupled and more easily maintained.

In this article, you’ll learn about the benefits of switching to microservices and what factors to consider as you migrate your monolithic application toward microservices architecture.

Why Use Microservices Architecture?

Structuring your application as microservices offers you a range of benefits. AWS cites several of them, below.

· 10 min read
James Walker

Architecture diagram comparing monoliths and microservices

Monoliths are systems developed as one homogeneous unit. The architecture revolves around a single focal point that contains the system’s entire functionality. Distinct logical areas such as the client-side UI and backend APIs are all developed within one unit.

Breaking up monoliths is one of the most common objectives of modern software refactoring. Untamed monoliths can become bloated beasts full of interlinked functionality that’s difficult to reason about and maintain. Breaking the architecture up to reflect the system's logical areas makes for a more manageable codebase and can accelerate the implementation of new features.

This article looks at what monoliths are, how they differ from modern microservice-based approaches, and how you can start to break up a monolithic system. As we'd be remiss to claim microservices are a perfect solution, we'll also assess the situations where this migration might not make sense.

What's a Monolith?

"Monolith" has become a widely used term in the software industry, but it can mean slightly different things depending on who you ask. You’re probably dealing with a monolith if the system has multiple distinct units of functionality, but the project's codebase structure doesn’t mirror these. This results in little or no modularity within the system.

Monolith-based development strategies involve everyone working in the same repository irrespective of the type of feature they're building. User interface components sit side-by-side with business logic, API gateway integrations, and database routines. There’s little separation of concerns; components may directly interface with each other, resulting in fragility that makes it hard to make safe changes.

Here are some more common problems associated with monoliths:

  • Tight coupling: When all your components sit alongside each other, it can be difficult to enforce rigid separation of concerns. Over time, pieces become tightly coupled to each other, preventing you from replacing components and making the accurate mapping of control flows more difficult.

  • Fragility: The tight coupling observed in monolith systems leads to innate fragility. Making a change could have unforeseen consequences across the application, creating a risk of new problems each time you deploy a feature.

  • Cognitive burden: Putting all your components into one functional unit makes it harder for developers to find the pieces they need and understand how these relate to each other. You need to keep the entire system’s operation in your mind, creating a cognitive burden that only grows over time. Eventually, the monolith becomes too complex to understand; at this point, more errors can start creeping in.

  • Longer build and deployment times: Having everything in one codebase often leads to longer CI pipeline durations. Tools such as source vulnerability scanners, linters, and stylizers will take much longer to run when they have to look at all the code in your system each time they're used. Longer builds mean reduced throughput, limiting the amount of code you can ship each day. Developers can end up sitting idly while the automation runs to completion.

If you're experiencing any of the above, it might be time to start breaking up your monolith.

How Did We Get Here? Or, Why Monoliths Prevail

Monoliths aren't without their benefits. Here are a few good reasons to use a monolith that help to explain why the strategy remains so pervasive:

  • Reduced overhead: Not having to juggle multiple projects and manage the lifecycles of individual components does have advantages. You can focus more on functionality and get to work on each new feature straight away, without needing to set up a new service component. Please note that the simplicity of the monolith strategy is being considered here, not its impact on understanding the system you're encapsulating. As we've already seen, monoliths can make it harder to reason about characteristics of your system because everything is coupled together.

  • Easier to debug: When problems occur in a monolith, you know they can only derive from one source. As your whole system is a single unit, log aggregation is simple, and you can quickly jump between different areas to inspect complex problem chains. Determining the root cause of issues can be trickier when using microservices because faults may ultimately lie outside the service that sent the error report.

  • Straightforward deployment: Monoliths are easy to deploy because everything you need exists within one codebase. In many cases, web-based applications can be uploaded straight to a web server or packaged into an all-in-one container for the cloud. This is a double-edged sword: as shown above, your deployments will be rigid units with no opportunity for granular per-component scaling.

Though monoliths aren't all bad, it's important to recognize where things fall apart. Trouble often stems from teams not realizing they’re dealing with a monolith. This speaks to a disorganized development approach fixated on code, new features, and forward motion at the expense of longevity and developer experience.

Despite these pitfalls, monoliths can still be effectively used by intentionally adopting a similar structure. The Monorepo approach, for example, uses one source control repository to encapsulate multiple logical components. You still break your system into loosely coupled units, but they can sit alongside each other in a single overarching project. This approach forces you to be deliberate in your design while offering some of the benefits of both monoliths and microservices. Many large organizations opt for a monolith-like approach, including Google and Microsoft.

Why Should You Break up a Monolith?

Monoliths often develop organically over many years. Your codebase's silently growing scale may go unnoticed or be disregarded as a necessary by-product of the system's growth. The challenges associated with monoliths tend to become apparent when you need to scale your application or integrate a new technology.

A system treated as one all-encompassing unit makes it difficult to scale individual parts to meet fluctuations in demand. If your database layer starts to perform poorly, you'll need to "scale" by starting new instances of the entire application. Replacing specific components is similarly complex; they may be referenced in hundreds of places throughout the codebase, with no defined public interface.

Separating the pieces allows you to develop each one independently. This shields individual components from bugs in the broader system, helps developers focus on their specific areas, and unlocks the ability to scale your deployments flexibly. Now you can run three instances of your login gateway, two instances of your web UI, and a single replica of your little-used social media synchronization tool. This makes your system more efficient, lowering infrastructure costs.

Breaking up a monolith also gives you greater opportunities to integrate additional technologies into your stack. New integrations can be developed as standalone modules plugged in to your system. Other components can access the modules by making network calls over well-defined APIs, specifying what functionality is needed and how it will be used.

Monolith destruction often enhances the developer experience too, particularly in the case of new hires getting to grips with your codebase for the first time. Interacting with an unfamiliar monolith is usually a daunting experience that requires bridging different disciplines. Seemingly straightforward day-one tasks like adding a new UI component might need knowledge of your backend framework and system architecture, just to be able to pull data out of the tightly coupled persistence layer.

An Alternative Approach: Microservice Architectures

Microservice architectures are the effective antithesis to the monolithic view of a system as a single unit. The microservice strategy describes an approach to software development where your distinct functional units are spun out to become their own self-contained services. The capabilities of individual services are kept as small as possible, adhering to the single-responsibility principle and creating the "micro" effect.

Services communicate with each other through clearly defined interfaces. These usually take the form of HTTP APIs; services will make network calls to each other when they need to exchange data or trigger an external action. This decoupled approach is straightforward to extend, replace, and maintain. New implementations of a service have no requirements imposed on them other than the need to offer the same API surface. The replacement service can be integrated into your system by reconfiguring the application to call it instead of the deprecated version.

Microservices let you reason about logical parts of your stack in isolation. If you're working on a backend login system, you can concentrate on the parts that belong to it, without the distractions of your UI code. Changes are much less likely to break disparate parts of the system as each component can only be accessed by the API it provides. As long as that API remains consistent, you can be reasonably confident the broader application will stay compatible.

This architecture also solves the scalability challenges of monoliths. Splitting your application into self-contained pieces lets you treat each one as its own distinct deployment. You can allocate free resources to the parts of the system that most need them, reducing waste and enhancing overall performance.

Microservices do have some drawbacks, especially for people accustomed to a monolith approach. The initial setup of a distributed system tends to be more complex: you need to start each individual component, then configure the inter-component connections so services can reach each other. These steps require an understanding of your deployment platform's networking and service discovery capabilities.

Microservices can also be hard to reason about at the whole-system level. Fault sources are not always immediately clear. New classes of error emerge when the links between services are broken by flaky networking or misconfiguration. Setting up resilient monitoring and logging for each of your services is vital for tracing issues through the layers of your application. Microservice monitoring and log aggregation are distinct skills which have helped shape the modern operations engineer role, which is focused on the day-to-day deployment and maintenance of complex distributed systems.

Using an orchestration tool like Netflix Conductor (or Orkes, a cloud-based version of Conductor) simplifies many of these issues.

Conclusion

Monolithic systems contain all their functionality within a single unit, which initially seems like an approachable and efficient way to add functionality and evolve a system over time.

In practice, monoliths are often unsuitable for today’s applications. Breaking up monoliths is an important task for software teams to guarantee stable, ongoing development at a steady pace. Separating a system into its logical constituent parts forces you to acknowledge, understand, and document the connections and dependencies within its architecture.

This article explored the problems with monoliths and looked at how microservice approaches help to address them. You also learned how monolith-like systems can still be effective when microservices aren't suitable. Deciding whether you should break up a monolith comes down to one key question: Is your architecture holding you back, making it harder to implement the changes you need? If the answer is yes, your system's probably outgrown its foundations and would benefit from being split into self-contained functional units.

When you are looking at breaking up your monolith into microservices, look at Conductor as a tool to orchestrate your microservices. Try it for free in the Orkes Playground!

· 9 min read
Shweta

In large applications consisting of loosely coupled microservices, it makes sense to design the internal architecture of each microservice to suit its function rather than adhere to a single top-down architectural approach.

By design, each microservice is an independent entity that has its own data as well as business logic. So it’s intuitive to use a design approach and architecture that’s best suited to its requirements, irrespective of high-level microservices architecture. However, detractors would like you to believe that using multiple languages should be avoided as it adds unnecessary complexity and overheads to microservices operations.

But there are multiple use cases where multilanguage architecture makes sense, and technology can be used to efficiently manage the overheads introduced. In this article we will unpack:

  • When to build multilanguage microservices.
  • The challenges introduced in microservices communication due to the use of multiple languages.
  • Some tools and techniques to make multilanguage microservices implementation easier.

· 5 min read
Doug Sillars

Data processing and data workflows are amongst the most critical processes for many companies today. Many hours are spent collecting, parsing and analyzing data sets. There is no need for these repetitive processes to be manual - we can automate them. In this post, we'll build a Conductor workflow that handles ETL (Extraction, Transformation and Loading) of data for a mission-critical process here at Orkes.

As a member of the Developer Relations team here at Orkes, we use a tool called Orbit to better understand our users and our community. Orbit has a number of great integrations that allow for easy connections into platforms like Slack, Discord, Twitter and GitHub. By adding API keys for Orkes' accounts on these platforms, the integrations automatically pull the community data from these platforms into Orbit.

This is great, but it does not solve all of our needs. Orkes Cloud is built on top of Netflix Conductor, and we'd like to also understand who is interacting with and using that GitHub repository. However, since Conductor is owned by Netflix, our team is unable to leverage the automated Orbit integration.

However, our API keys do allow us to extract the data from GitHub, and our Orbit API key allows us to upload the extracted data into our data collection. We could do this manually, but why not build a Conductor workflow to do this process for us automatically?

In this post, I'll give a high-level view of the automation required for Extracting the data from GitHub, Transforming it into a form that Orbit can accept, and Loading it into our Orbit collection.

· 3 min read
orkes

Tl;dr We are announcing a new community contribution repository for Conductor - https://github.com/Netflix/conductor-community, and Netflix and Orkes are partnering to co-manage it.

Conductor is a workflow orchestration engine developed and open-sourced by Netflix. At Netflix, Conductor is the de facto orchestration engine, used by a number of teams to orchestrate workflows at scale. If you are new to Conductor, we highly recommend taking a look at our GitHub repository and documentation.

Conductor was designed by Netflix to be extensible, making it easy to add or change components - even major components like queues or storage. This extensibility makes new features and tasks easy to incorporate without affecting the Conductor core. These implementations are based on well-defined interfaces and contracts that are defined in the core Conductor repository.

Since open sourcing Conductor, we have seen huge community adoption and active interest from the community, providing patches, features and extensions to Conductor. Some of the key features developed and contributed by the community to the Conductor open source repository include:

  1. Support for Postgres and MySQL backends
  2. Elasticsearch 7 support
  3. Integration with AMQP and NATS queues
  4. GRPC support
  5. Support for Azure blob stores
  6. Postgres based external payload storage
  7. Do While loops
  8. External Metrics collectors such as Prometheus and Datadog
  9. Support for Kafka and JQ tasks
  10. Various bug fixes and patches, including the recent fix for the log4j vulnerabilities, and many more features and fixes

The number of community contributions, especially newer implementations of the core contracts in Conductor, has increased over the past few years. We love that Conductor is finding use in many other organizations (link to the list), and that these organizations are submitting their changes back to the community version of Conductor.

This increase in engagement and growth of the community, while incredible, is a double-edged sword. By no means does the Conductor team want to slow or limit these contributions, but the integration of third-party implementations has been slower than we would like due to the team’s bandwidth.

In order to encourage (and to speed up the integration of) community-contributions to Conductor, we are announcing a new repository dedicated to supporting community contributions. The repository will be hosted at https://github.com/Netflix/conductor-community and will be seeded with the existing community contributed modules. Further, we are partnering with Orkes (https://orkes.io/) to co-manage the community repository along with Netflix, helping us with code reviews and creating releases for the community contributions.

We think this new structure will enable us to review the PRs and community contributions in a timely manner and allow the community to be more autonomous longer term.

We will continue to publish artifacts from the community repository at the same maven coordinates under com.netflix.conductor group and the artifact names will remain the same with full binary compatibility. This means that there is no change to users of Conductor: install, updates and usage remain the same.

Please see https://github.com/Netflix/conductor-community#readme for the details on the modules and release details. You can also find FAQs that address the most common questions.

We look forward to continued engagement with the community making Conductor the best open source orchestration engine.