
· 14 min read
Nikhila Jain

A 2020 survey by the research and advisory firm Gartner has highlighted the rapid pace of innovation in cloud computing. According to the research, forty percent of enterprise solutions will host their applications on cloud infrastructure by 2023. This shifting trend will cause an increased demand for cloud services, as well as for hybrid cloud architecture.

The hybrid cloud is gaining popularity as enterprise IT leaders seek flexible, scalable options that increase cost efficiency while maintaining control over enterprise data and information. Many organizations combine on-premise infrastructure with private/public cloud resources to meet these needs.

But without the right strategy, hybrid clouds can pose a number of challenges. Through a hypothetical case study, this article will help you learn about the strengths and limitations of hybrid cloud architecture.

· 3 min read
Doug Sillars

Conductor is a workflow orchestration engine that connects all of your microservices together to create fully functional workflows that can run at scale. Each workflow is composed of tasks, and many of these tasks are powered by external workers, or microservices. These workers can be written in any language: from Conductor's point of view, data goes in and results come out; the language that processes the data is irrelevant.

Each worker has to connect to your Conductor instance and regularly poll for work in the queue. There have long been Java, Go, and Python SDKs to easily connect your apps to Conductor, but for building in other languages, this code had to be created by each development team.
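As a rough sketch of what such an SDK does under the hood, a worker might poll Conductor's task endpoint over HTTP and post results back. The base URL here is an assumption for a local instance, and a real SDK adds batching, retries, and error handling:

```python
import json
import urllib.request

CONDUCTOR = "http://localhost:8080/api"  # assumed local Conductor instance

def execute(task_input):
    # The worker's business logic; any language works, only JSON in/out matters.
    return {"greeting": "hello " + task_input.get("name", "world")}

def poll_once(task_type, worker_id):
    # GET /tasks/poll/{taskType} returns one queued task, if any.
    url = f"{CONDUCTOR}/tasks/poll/{task_type}?workerid={worker_id}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    if not body:
        return  # nothing queued right now
    task = json.loads(body)
    result = {
        "taskId": task["taskId"],
        "workflowInstanceId": task["workflowInstanceId"],
        "status": "COMPLETED",
        "outputData": execute(task.get("inputData", {})),
    }
    # POST /tasks reports the result back to Conductor.
    req = urllib.request.Request(f"{CONDUCTOR}/tasks",
                                 data=json.dumps(result).encode(),
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    urllib.request.urlopen(req)
```

The SDKs wrap this loop (and much more) so your code only has to supply the `execute` function.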

Today, we are announcing major improvements to the Golang and Python SDKs, along with new C# and Clojure SDKs.

Further, all of the (non-Java) SDKs have a new GitHub home: the Conductor SDK repository is your new source for Conductor SDKs:

Coming soon:

· 9 min read
Azeez Lukman

Microservices are a common and popular approach to building modular, scalable software with autonomous services. Large complex products are broken down into individual services responsible for a specific business function, such as user authentication or store checkout.

A microservice-based application might require several services to interact with each other to complete a business scope. The coordination of these interactions is known as a workflow or a saga. There are two models for implementing a workflow: choreography and orchestration. With choreography, you let each part of the system inform the other of its job and let it work out the details, while with orchestration, you rely on a central brain to guide and drive the execution processes.

As orchestrated systems have grown more expansive, the problem of efficiently orchestrating related business logic has become more pronounced. In this article, you will learn about the microservice orchestration workflow and its importance in relation to modern software architecture practices.

What is Microservice Orchestration?

A microservice orchestration pattern involves a central orchestration service (the orchestrator) that typically contains the entire business workflow logic and issues commands to and awaits responses from worker microservices. Think of this as an orchestra where a central conductor is responsible for keeping the orchestra in sync and coordinating the members to produce a cohesive musical piece. Using orchestrators for your application is essential for efficiently managing applications based on microservices.

Before going into the specifics of microservice orchestration, it is helpful to familiarize yourself with the components of microservice-based architecture. For example, in a microservice-based e-commerce application, the following could come into play during the process of purchasing a product:

  • a service for listing all products;

  • a service for adding products to the cart and reserving that product from the inventory;

  • a service for handling the payment; and

  • a service that manages the shipment of the item.

Each of these microservices is autonomous. In other words, microservices can be individually scaled up or down without having to worry about the entire application. However, they are all required to interact with each other to fulfill the purchase. It might be tempting to have the services talk to each other directly as needed. However, as your architecture and the number of services grow, this can quickly get messy and difficult to maintain. This is where orchestration comes into play.

A microservice orchestration workflow is an architectural method of coordinating microservices for software systems and applications, in which loosely coupled services receive commands from a central controller, referred to as the orchestrator. The orchestrator acts as a brain, driving the execution processes; it sends a call to each service and awaits a reply before proceeding. The concept of a microservice orchestration workflow can be best described through a hypothetical use case.

microservice orchestration workflow architecture diagram

The architectural diagram of this hypothetical use case shows the interactions between the various services involved in the process when following an orchestration workflow. Looking at the diagram above:

  1. The orchestrator receives a trigger that initializes the workflow, starting with “Products Service.”

  2. When this service has created an order with the products in the customer's cart, it returns some response to the orchestrator.

  3. The orchestrator then calls the “Inventory Service” to reserve the products in the cart.

  4. Next, the orchestrator calls the “Payment Service” to handle the payment.

  5. After successful payment, the orchestrator moves on to the “Shipping Service,” which clears the products for shipment.
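The call sequence above can be sketched in a few lines of Python. The service functions here are hypothetical in-process stand-ins for real microservice calls (HTTP requests in practice), not Conductor APIs:

```python
# Each function stands in for a remote microservice the orchestrator calls.

def products_service(cart):
    # Create an order from the items in the customer's cart.
    return {"order_id": 1, "items": cart}

def inventory_service(order):
    # Reserve each ordered item in the inventory.
    return {"reserved": order["items"]}

def payment_service(order):
    # Charge the customer for the order.
    return {"paid": True}

def shipping_service(order):
    # Clear the reserved items for shipment.
    return {"shipped": True}

def run_order_workflow(cart):
    """The orchestrator: calls each service in turn and waits for its
    reply before proceeding, failing fast if any step fails."""
    order = products_service(cart)
    inventory_service(order)
    if not payment_service(order)["paid"]:
        raise RuntimeError("payment failed")
    return shipping_service(order)
```

The key point is that only `run_order_workflow` knows the overall flow; the individual services stay unaware of one another.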

In a choreography workflow, on the other hand, the microservices are not managed by a central service; however, they are all aware of the business goals and rely on certain events from other services that determine how they function. Each service publishes the actions it has taken to a message stream such as SQS or Kafka. Other services subscribe and listen for events they are interested in from these streams and take the appropriate actions.

choreography orchestration workflow architecture diagram

In the choreography architecture above:

  1. The “Products Service” creates an order with the items in the customer's cart and publishes an "Order Created" event to a stream on the messaging platform.

  2. The “Inventory Service” and “Payment Service” consume from this message stream. The “Inventory Service” handles reserving the products in the cart, and the “Payment Service” handles the payment and publishes the “Payment Success” event.

  3. On receipt of the “Payment Success” event, the “Shipping Service” goes ahead and clears the products that were reserved for shipment to the customer.
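A minimal sketch of this event-driven style, with an in-memory handler registry standing in for a stream such as SQS or Kafka (the service names follow the hypothetical diagram above):

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event name -> list of handlers
log = []                         # records what each service did

def publish(event, payload):
    # Deliver an event to every service subscribed to it.
    for handler in subscribers[event]:
        handler(payload)

def on(event):
    # Decorator: subscribe a service's handler to an event.
    def register(handler):
        subscribers[event].append(handler)
        return handler
    return register

@on("order_created")
def inventory_service(order):
    log.append("inventory reserved")

@on("order_created")
def payment_service(order):
    log.append("payment taken")
    publish("payment_success", order)  # announce, rather than call anyone

@on("payment_success")
def shipping_service(order):
    log.append("shipped")

# The Products Service kicks everything off by publishing its event:
publish("order_created", {"order_id": 1})
```

Notice that no component knows the whole flow; each service only reacts to the events it cares about.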

Why is Microservice Orchestration Important?

Microservice architecture involves decomposing your application into a set of services to improve agility and allow teams to scale. One of the main purposes of this architectural pattern is to have each service as an independently deployable component with well-defined interfaces; in this way, the scope of implemented changes can be limited to a single service.

However, you must coordinate the execution of multiple microservices to deliver the outcomes that users want, and this is why microservice orchestration is important. Orchestration allows you to put a service in charge of the other services. The service in charge is aware of the entire flow that is required and is responsible for putting the other services to work to achieve those aims.

Microservice orchestration enables you to process flows ranging from simple linear workflows to very complex dynamic workflows that run for multiple days with minimal effort and high visibility into the processes. To properly illustrate the benefits you can obtain with an orchestration workflow when managing your microservices, let’s take a look at a case study from Netflix.

Netflix is one enterprise that has shifted toward orchestration workflows. The streaming service traditionally used the choreography method, which involves tightly coupled peer-to-peer tasks; this became harder to scale as business needs grew and complexities increased, such as determining what remains before a movie setup is complete and updating SLAs.

Later, Netflix switched to an orchestration workflow and eventually built its own workflow orchestration engine—Conductor—which has helped orchestrate over 2.6 million process flows, from simple linear workflows to complex dynamic workflows that run over multiple days.

Why Are So Many Developers Adopting This Architectural Paradigm?

As mentioned, there are two major techniques you can use when you need to execute many services to get your desired result: orchestration, in which a central orchestrator component serves as the coordinator and is in charge of activating each service, and choreography, in which the services perform independently and are only loosely connected.

Developers are increasingly adopting orchestration because it has significant benefits that can make development and management easier for individual microservices without compromising the big picture. However, it should be noted that microservice orchestration is not without its limitations.

Benefits and Limitations of Microservice Orchestration

There are several benefits and challenges associated with the implementation of an orchestration workflow, many of which are related to how microservices interact with one another to achieve a business outcome.

Benefits of Microservice Orchestration

Central observability of process definition, status, and metrics: The orchestration framework can capture detailed information about each executed process instance, which it can make available for analytics. This allows you to answer questions about specific instances (such as, “Where is my order?”), as well as analytical queries (such as, how many products were ordered).

Synchronous processes: These provide a good way to control the process flow—for example, when the products service needs to complete successfully before the inventory service runs.

Scalable orchestration on cloud-native platforms: When you scale up these services, you scale with errors in mind. Microservice orchestration provides you with insights into your processes, helping you coordinate various transactions that involve a large number of independent services.

A single point for failure tracing: An orchestration workflow allows you to easily trace any error that occurs during the process flow, figure out why it failed, and debug it. Writing tests for your microservices remains important to help prevent errors from reaching the live service.

Limitations of Microservice Orchestration

When orchestrating microservices in an enterprise environment, you’ll find that some business functions can require hundreds or even thousands of microservices. Since the orchestration workflow is synchronous, it's possible that such processes will take a long time to finish.

Furthermore, as the orchestrator needs to communicate with each service and get a response before moving to the next, this makes services highly dependent upon each other. Failure at any point could cause the entire process to fail. While for some business processes this is required behavior, others might require the process to complete regardless; for instance, running analytics on an order that’s being processed shouldn’t prevent the checkout flow from being completed.

Who Can Benefit from Microservice Orchestration?

With microservices gradually becoming the default pattern for managing business logic, a strong architecture is needed to coordinate them. Adopting an orchestration workflow can make the interaction between these services more seamless.

Many businesses still implement service-oriented architectures (SOAs) orchestrated by an enterprise service bus (ESB). However, as business needs grow, adding more business logic and microservices to the system can be challenging; the entire flow is not immediately visible, making it harder to alter a service without the risk of disrupting another.

Microservice orchestration offers a solution here, as it helps you visualize the end-to-end processes across your microservices, so you know what services would be affected by your updates, allowing you to easily address your increasing business needs.

More concretely, an orchestration workflow might be ideal for you if one or more of the following are critical for your business:

  • The ability to track and manage workflows from a single point.

  • A user interface to visualize process flows.

  • The ability to synchronously process all tasks.

  • The ability to efficiently scale to a high number of concurrently running process flows.

  • A queuing service abstracted from clients.

  • The requirement to operate services over HTTP or other transport layers such as gRPC.

Conclusion

In this article, you learned about the importance of keeping your microservices autonomous and flexible. You also learned about the use of microservice orchestration to effectively communicate, visualize, identify, and resolve the challenges of managing microservices.

The downside is that building an orchestration system that implements all of the features your business requires is rather complex and time consuming. What you need is a purpose-built framework offering scalable, low-overhead orchestration; Netflix's Conductor is an open source tool that fits this purpose.

Orkes is a platform that offers a fully managed, cloud-hosted version of Conductor with tiered support. Orkes builds on top of Netflix Conductor to abstract out installation, tuning, patching, and managing high-performing Conductor clusters. Learn more about Orkes and get started for free within minutes here.

· 4 min read
Doug Sillars

I have had the opportunity in my career to work on a number of very exciting projects. I am really excited about my role at Orkes and getting to share what I am learning about Netflix Conductor. In this post, I have dipped into my memory banks for a project that could have really benefited from the power of Conductor's workflow orchestration - simplifying and streamlining processes with the power of microservices.

In a previous role (it almost seems a lifetime ago), I worked on a project at AT&T called Video Optimizer. It is an open source tool that is used to test mobile apps and video for issues that can affect phone battery life and data usage. Working with mobile app developers, my team was able to make top mobile applications more efficient and save battery (and saved AT&T $250M in network costs!)

But what does this have to do with Microservices?

Our monolithic problem

After a few years of working with the team, our application (a monolithic Java app) had gone through several team leads and developers, and had become a mess of spaghetti code and patches. No one was really sure how it all worked, and everyone 'touched wood', grabbed a rabbit's foot, or said a silent prayer at build time. To call our code 'fragile' was a kindness.

After a very long refactor, things were better. New features were again being released, and we were moving ahead.

Manual testing

All of our application testing was done manually - by a team of very talented testers - and the analysis was done by my team. We were always the limiting factor in finding new issues in app releases. We longed for an automated analysis that could tell us when a change occurred in a mobile app.

Microservices to the rescue

In mid 2017, there was an internal push for breaking up large apps into microservices, and there was a big funding pot set aside in AT&T to aid teams in migrating applications to microservices in the cloud.

We saw this as an opportunity to achieve several of our team's goals - making the project more structured (as microservices), but also launching a cloud based version with automated testing and basic reporting. So, we set out to re-architect the application into cloud based microservices.

Getting funding

At AT&T, to get access to the big pot of funding, we had to demonstrate that our team had all of the right ideas on how to migrate Video Optimizer into microservices. The team created excellent PowerPoint presentations of how we'd break VO into a set of microservices - and actually did some basic 'orchestration' by drawing arrows indicating the way that data would make its way through the new architecture.

We were successful!

We got the money to build the new version of Video Optimizer! But privately, I was asking the team: how are we going to build the connecting lines between our microservices? How are they going to communicate and make sure the data flows properly between these small apps?

Workflow orchestration - connecting the dots

Of course, the problem at hand was workflow orchestration - getting these fast, modular microservices to work together and produce the results we were used to seeing in our monolith.

The feedback from the dev team was "oh, don't worry - we'll figure something out." (which did not really bode well for the project). What we needed was a robust and off the shelf tool to "wire up" all of our microservices.

I've been at Orkes for two months now, and I am learning about the power of Netflix Conductor's ability to connect microservice workflows. I realize now that Conductor would have served this project well.

So What happened?

I left AT&T soon after we received the funding for the project. I reached out to my old team - the project did get built, but was unfortunately built on a proprietary internal infrastructure - which soon broke, and was never fixed - ending the vision of a cloud based version of Video Optimizer.

I can only imagine that, given the flexibility of a better cloud and the power of Conductor, this project would be going strong today. Using a tool like our Playground would have allowed the team to quickly mock up the connections between the microservices and see that communication was working as expected.

Do you have projects that you've worked on, that in hindsight, would have been more successful with workflow orchestration? Tell us about it in our Discord channel.

· 7 min read

I learned how to code and fell in love with it during freshman year in college. I still remember that the first few classes were around concepts and were mostly lectures. I was still getting an idea of what this was all about, and it wasn’t until I did the first hands-on lab (in C!) that everything just snapped together and I found my passion. Soon after, I switched my major from Physics to Computer Engineering.

This also happens to be how most developers want to learn new things - by trying it out! We see that every day when we talk to customers who might not be deeply familiar with Conductor. Quick recap: Conductor is the microservices and workflow orchestration platform originally created and battle tested at Netflix before gaining wide adoption among developers building applications across the spectrum of scale and use cases.

During such conversations, the excitement and curiosity is clearly higher when someone has actually played with Conductor, whether it's on their own time or together with us on a call. It is usually followed up with a concrete discussion around their questions or deep dives on particular topics. The Conductor documentation is a great resource to show how any developer can quickly install Conductor on their laptop or their cloud environment and get started on playing with it.

But we wanted to make it even easier and frictionless!

Introducing the Conductor Playground

We are excited to introduce the Conductor Playground, a fully managed, browser-based sandbox environment for developers to try out Conductor with no installs or configuration needed. It is completely free and fully featured, so you can focus on exploring how everything is organized, creating different workflows, executing them, seeing the results, and more. The playground shows you all that is possible with Conductor, and you can let your imagination run wild on the things you could build!

The Conductor Playground is built by running Conductor as a multi-tenant cluster. When you use it, a dedicated namespace is created for you so that the tasks and workflows you create, and the executions you invoke against them are visible only to you. We also intend the playground to be a place where we can publish the latest features so that the community gets a chance to try them out early on and give us feedback. And please do give us feedback - we go through each of them in detail and it's absolutely the best way for us to keep making Conductor even better and give back to the community.

We have also included a set of pre-built workflows for everyone so that you have a starting point in your exploration. These show the different ways in which workflows are defined and the various Conductor operators that make it possible for you to describe your business logic in an intuitive way. You can just execute them to see them in action or you can add more logic on top of that to make your own workflow to play with.

It is also worth noting the scope we had in mind when we built the playground, so that you can better plan your journey of learning (and loving!) Conductor. While the playground is a great place to test your workflows and understand how Conductor works, it is not intended for production usage. When you are ready to run your production workloads, we recommend one of the other ways in which Conductor can be run dedicated to your needs. We are here to help you with that, whether you want to run the open source Conductor on your own or use the fully managed service from Orkes Cloud, which can run on your cloud or be hosted by Orkes.

Using the Conductor Playground

Using the playground is easy - just go to https://play.orkes.io/ and you will be presented with a simple login screen which allows us to create a dedicated namespace for you. Once you login, you will be dropped to the Conductor UI that organizes tasks, workflows and their executions for easy navigation. The Conductor documentation site has more details about the various components of the playground.

Running a pre-installed workflow

A great next step would be to explore the pre-installed workflows. You can do that by clicking on the Workflow Definitions link on the left navigation.

Let’s pick the PopulationMaxMin workflow. It first queries the datausa.io API to get the population of the different states in the United States. This is done by the get_population_data task, an HTTP system task that makes an outbound call to an HTTP/S endpoint, gets the result, and can hand it over to other tasks downstream.

Next, the workflow forks two parallel tasks: one to find the state with the minimum population and the other to find the one with the maximum population. These are JQ_TRANSFORM system tasks, which let you process incoming JSON-structured data using JQ (Conductor moves inputs and outputs through a workflow as JSON objects). Finally, the workflow joins the results of both branches and presents them to the user.
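Conductor workflows are defined as JSON documents; this Python dict mirrors roughly how the PopulationMaxMin definition is shaped. Task type names and fields are abbreviated for illustration, so check the actual definition in the playground for the exact schema:

```python
# A heavily simplified sketch of the workflow definition's structure.
population_max_min = {
    "name": "PopulationMaxMin",
    "tasks": [
        {   # HTTP system task: fetch state populations from datausa.io
            "name": "get_population_data",
            "type": "HTTP",
            "inputParameters": {
                "http_request": {
                    "uri": "https://datausa.io/api/data?drilldowns=State&measures=Population",
                    "method": "GET",
                }
            },
        },
        {   # Fork: run the min and max JQ transforms in parallel
            "name": "find_min_max",
            "type": "FORK_JOIN",
            "forkTasks": [
                [{"name": "find_min", "type": "JQ_TRANSFORM"}],
                [{"name": "find_max", "type": "JQ_TRANSFORM"}],
            ],
        },
        {   # Join: wait for both branches and collect their outputs
            "name": "join_results",
            "type": "JOIN",
            "joinOn": ["find_min", "find_max"],
        },
    ],
}
```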

Now that you know what it does, let's test it out by running it! Click on the Run Workflow button on the left navigation and you will be presented with the screen below. Select the PopulationMaxMin workflow from the Workflow Name drop down menu - you can optionally add a unique string to the Correlation ID field if you want to query off of that later. Click Run Workflow and you just invoked an execution of the PopulationMaxMin workflow!

Click on the link below the Run Workflow button and you can see the visual and other details about this execution!
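If you prefer to script it, the same invocation can be made against Conductor's workflow-start REST endpoint. The base URL is an assumption for your own environment, and the request shape here is a simplified sketch:

```python
import json
import urllib.request

CONDUCTOR = "http://localhost:8080/api"  # assumed; point at your own instance

def start_request(name, correlation_id=None, input_data=None):
    # Build the body for starting a workflow: name, optional correlation
    # id (to query executions later), and the workflow's input payload.
    body = {"name": name, "input": input_data or {}}
    if correlation_id:
        body["correlationId"] = correlation_id
    return body

def start_workflow(name, **kwargs):
    # POST the start request; the response body is the new workflow's id.
    req = urllib.request.Request(
        f"{CONDUCTOR}/workflow",
        data=json.dumps(start_request(name, **kwargs)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```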

Build and run your own workflow with external workers

There is so much more you can do with Conductor! Going along with our approach of learning by playing, a good next step would be to learn about how external workers (e.g. a microservice written in your language of choice) can execute tasks defined in a workflow. This complements the system tasks from the earlier example where the execution of tasks happens within Conductor.

You can do that by building your own workflow from scratch as shown in this tutorial. In addition to step by step guidance, you can also find on Github the code for the example shown there.

Keep going!

We hope that these examples with the playground have helped you understand more about Conductor. You can continue this journey on the playground by referring to our documentation, which has various how-to guides and in-depth reference documentation about the different components and operators of Conductor. Below are some popular topics.

Getting help and providing feedback

There are many ways to get help as you are learning more about Conductor.

We want to keep making the Conductor Playground even more useful for developers, and we need your help to do that. If you have any questions or feedback, however simple or complex, we want to hear from you! There is a link on the left navigation bar to provide us your feedback; we would highly appreciate it if you used that, or this link, to let us know.

Happy playing & learning!

· 4 min read
Altaf Alam Ansari

Introduction

As the Introduction documentation shows, Netflix Conductor can be used for a variety of use cases, solving complex workflow problems that plague many companies worldwide.

Let's now try to understand how we can use Conductor to solve a Lending problem that exists in the Banking and Fintech sector.

Use Case / Problem

In the new modern era of Fintech, bank customers are moving from traditional banking to digital banking. With that comes an expectation that processes will be faster and more streamlined. Hence, in order to keep up with customer demands, various banks are trying to automate their banking processes. One common (and complicated) process that many banks are automating for their customers is the loan banking (lending) process.

Lending workflows are very common, and they are exactly the kind of problem that Conductor can solve.

banking meme

· 3 min read

We're excited to share that after months of beta testing with Conductor users, Orkes Cloud is live today and ready to be battle tested with your Conductor deployment. Orkes Cloud delivers Conductor as a hosted service and abstracts away set up, tuning, patching, and managing high performance Conductor clusters. Enterprise features like role-based access control and single sign-on give you peace of mind. You have complete control over where you host your data and compute and can scale seamlessly based on your needs. We’re offering a free plan to get you started, or if you need a more all-encompassing enterprise plan you’ll only pay for what you use.

We hope you'll sign up for Orkes Cloud today and give us your feedback and feature requests. We're committed to supporting the Conductor community by giving back to the open-source project and collaborating with Netflix on the Conductor roadmap. Our ultimate goal in delivering Conductor as a hosted service is to help you delegate the operational complexities to Orkes so you can get back to building the next big thing.

Orkes has also closed $9.3 million in funding, an important milestone for our company as we continue to invest in building out our team, creating an enterprise ecosystem, and investing further in the open-source project and community.

We're honored to be working with some of the most pedigreed investors in enterprise cloud infrastructure. Dharmesh Thakker from Battery Ventures, Sandeep Bhadra of Vertex Ventures US, notable cloud angel investors Mahendra Ramsinghani and Gokul Rajaram, and seasoned executives at Fortune 100 companies have not just invested in Orkes, but were our early champions as we took on this endeavor. They shared our belief that the Conductor community deserves more cloud-native support and enterprise-grade features.

We always knew that what we built with Conductor at Netflix was special and important, but we didn't realize just how much so until we dispersed to other large, fast-growth companies. Everywhere we looked, companies were moving toward a microservices-based, cloud-hosted architecture, which offers unprecedented flexibility but poses unique challenges in orchestrating all of those disparate pieces in a resilient, scalable fashion.

Conductor gets developers 90% of the way there. Because Conductor was built to scale with Netflix's massive growth, it can scale to virtually any level of complexity or bandwidth. However, developers using Conductor OSS also have to divert a significant amount of attention away from building applications to managing hosting, availability, cluster management, and security patching. It's not time well spent; it's a chore, and it's not the fun or strategic part of building applications.

Orkes Founders

Last year, I connected with my former colleagues at Netflix, Viren Baraiya, who originally came up with the idea for Conductor, and Boney Sekh, who was instrumental as we built it out, to talk about what we were seeing. We felt like we had a responsibility to help solve these operational problems for teams adopting Conductor and a cloud-based microservices architecture. When I reached out to my former colleague at Microsoft, Dilip Lukose, who also saw the growth of microservices himself as an early product leader at AWS, he was also excited about the potential to fill a critical hole in the market.

That's why we created Orkes. We can't wait to hear what you think and look forward to growing with you as we evolve Orkes Cloud to better serve your needs.

· 4 min read
Doug Sillars

In early Feb 2022, we had our first meetup on Netflix Conductor: Using Conductor in Production. It was also co-hosted by the Netflix Conductor team.

We had two excellent talks from Maros Marasalek at FRINX and Nick Tomlin at Netflix.

After the two talks, we had 2 roadmap sessions. The first session was by the Netflix Conductor team - where they discussed recent releases, and walked through the Open Source roadmap for the coming months. The second session, by our own Viren Baraiya, introduced Orkes, and our plans for extending Netflix Conductor.

Conductor in FRINX

Maros' presentation showed how the FRINX team has integrated Conductor into their product, and how the FRINX tooling helps to build custom workflows.

Bridging human and system workflows with Conductor

Nick Tomlin works on the Finance team at Netflix, and his team has built a set of Conductor workflows that enable other Netflix teams to quickly build and share workflows.

Netflix Conductor roadmap

Our third talk was from the Netflix Conductor team - where they presented the roadmap for Netflix Conductor for the coming months.

Introducing Orkes

Finally, one of our founders (and the committer of the first line of Conductor code) Viren Baraiya presented Orkes and our roadmap:

Q&A

Throughout the meetup, the attendees asked a number of great questions. They are reproduced here for visibility outside the meetup.

Hello, thanks for organizing such an event. I would like to know if there are any performance metrics for Conductor? We are planning to use it in a system with heavy traffic (multi-million requests, each of which would trigger a workflow) and I would like to know if it will be reliable enough. Thank you :)

Conductor is horizontally scalable and we have known users scaling to handle workloads at the scale you mentioned. Here is a recent discussion on scale on Github, and a post from Netflix talking about the scale.

Conductor was built from the ground up for high reliability and performance. There are several companies running multi-million workflows in their core business flows.

I hear Netflix is also using Temporal.io, which is based on Cadence. https://www.youtube.com/watch?v=JQ6FRTnQWFI How do the two overlap at Netflix, and what is the reason behind the use of Temporal?

Conductor is the default workflow orchestration tool of choice at Netflix. That said, engineers are free to choose the tool that’s best for their needs. The Spinnaker team at Netflix felt that Temporal is best aligned with their needs.

You mention CLI. Any thoughts about SDK as well?

Yes, we are working on that - stay tuned :) (Note: check out the Orkes video above on what we are planning for the next couple of quarters.)

Can AWS Lambda be hooked up as a task in a workflow?

AWS Lambda can be hooked up, but it is not available as a default task. There is an extension available that allows integrating with AWS Lambdas. We can send you details about this.

How often are conductor versions released to the community?

Frequent releases - up to twice per month. You can see the recent releases here: https://github.com/Netflix/conductor/releases

From the previous presenter, what is the expected timeline for all those features to be released?

Most of the features we mentioned today will be released over the next 2-3 quarters in phases.

Any pointers about production level setup instructions of Dynomite (with the consideration of DR) on K8s cluster?

Please connect with us on our community Slack channel or via GitHub discussions. Dynomite is no longer actively maintained and is on its way towards being deprecated. There are alternatives that offer features similar to Dynomite's.

Thank you

Thank you to all the speakers for their incredible presentations, and also to our community for such great questions. This is our first of many meetups, and we hope to see all of you (and everyone else) at our next meetup.

In the meantime - don't forget to Star the Conductor repository.

For the latest updates on Conductor and Orkes, please subscribe to our YouTube channel, and follow us on Twitter.

· 5 min read
Doug Sillars

In our initial image processing workflow using Netflix Conductor, we built a workflow that takes one image, resizes it, and uploads it to S3.
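To make the structure concrete, here is a minimal sketch of what that single-image workflow definition might look like, written as a Python dict mirroring Conductor's JSON workflow format. The task names image_convert_resize and upload_toS3 come from the post; the workflow name and the input/output parameter keys are illustrative assumptions, not the exact definition used in the original post.

```python
# Hypothetical two-step workflow definition: resize an image, then upload
# the result to S3. The second task consumes the first task's output via
# Conductor's ${taskRef.output.*} expression syntax.
image_workflow = {
    "name": "image_convert_resize_workflow",  # assumed name
    "version": 1,
    "tasks": [
        {
            "name": "image_convert_resize",
            "taskReferenceName": "image_convert_resize_ref",
            "type": "SIMPLE",
            "inputParameters": {
                "fileLocation": "${workflow.input.fileLocation}",
                "outputFormat": "${workflow.input.outputFormat}",
                "outputWidth": "${workflow.input.outputWidth}",
                "outputHeight": "${workflow.input.outputHeight}",
            },
        },
        {
            "name": "upload_toS3",
            "taskReferenceName": "upload_toS3_ref",
            "type": "SIMPLE",
            "inputParameters": {
                # Consume the resized file produced by the previous task.
                "fileLocation": "${image_convert_resize_ref.output.fileLocation}",
            },
        },
    ],
    "outputParameters": {
        # The workflow output is the S3 link produced by the upload task.
        "fileLocation": "${upload_toS3_ref.output.fileLocation}",
    },
}
```

Because both tasks are SIMPLE tasks, the actual resizing and uploading happen in external workers that poll Conductor for work, in any language.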

Image processing workflow diagram

In our second post, we utilized a fork to create two images in parallel. When building this workflow, we reused all of the tasks from the first workflow, connecting them in a way that allows two images to be processed at once.

Two tasks are reused in both workflows: image_convert_resize and upload_toS3. This is one great advantage of using microservices - we create the service once and reuse it many times in different ways.

In this post, we'll take that abstraction a step further and replace the tasks in the two forks with a SUB_WORKFLOW. This allows us to simplify the full workflow by abstracting a frequently used set of tasks into a single task.
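As a rough sketch of the idea, each fork branch can be collapsed into a single SUB_WORKFLOW task that invokes the single-image workflow by name. The task and reference names below are illustrative assumptions; subWorkflowParam (with name and version) is how Conductor identifies which workflow to run as the sub-workflow.

```python
# Hypothetical SUB_WORKFLOW task: one fork branch expressed as a single
# task that runs the single-image workflow as a child workflow.
subworkflow_task = {
    "name": "image_convert_resize_subworkflow",      # assumed name
    "taskReferenceName": "image_convert_resize_sub_ref",
    "type": "SUB_WORKFLOW",
    "subWorkflowParam": {
        # Which workflow to invoke as the sub-workflow.
        "name": "image_convert_resize_workflow",     # assumed name
        "version": 1,
    },
    "inputParameters": {
        # Inputs are passed through to the sub-workflow's workflow input.
        "fileLocation": "${workflow.input.fileLocation}",
        "outputFormat": "jpg",
        "outputWidth": 300,
        "outputHeight": 300,
    },
}
```

With this in place, each branch of the fork is one task instead of a chain of tasks, and any future change to the resize-and-upload sequence happens in exactly one workflow definition.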

· 5 min read
Doug Sillars

In recent posts, we have built several image processing workflows with Conductor. In our first post, we created an image processing workflow for one image - where we provide an image along with the desired output dimensions and format. The workflow output is a link on Amazon S3 to the desired file.

In the second example, we used the FORK System task to create multiple images in parallel. The number of images was hardcoded in the workflow, since FORK generates exactly as many paths as are coded into the workflow.

Because the fork paths are hardcoded, the workflow creates exactly two images. When it comes to image generation, there is often a need for more formats (as new formats become popular) or more sizes (as more screens are supported).

Luckily, Conductor supports this flexibility and has a feature to specify the number of tasks to be created at runtime. In this post, we'll demonstrate the use of dynamic forks, where the workflow splitting is done at runtime.
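As a hedged sketch of the mechanism, a dynamic fork replaces the hardcoded FORK with a FORK_JOIN_DYNAMIC task, whose branch list and per-branch inputs are supplied as data at runtime. The field names dynamicForkTasksParam and dynamicForkTasksInputParamName are Conductor's; the workflow input keys (fileTasks, fileTaskInputs) and task names here are illustrative assumptions.

```python
# Hypothetical dynamic fork: the list of branch tasks and their inputs are
# read from workflow input at runtime instead of being coded into the
# workflow definition.
dynamic_fork = {
    "name": "image_multiple_convert_resize_fork",    # assumed name
    "taskReferenceName": "image_multiple_convert_resize_fork_ref",
    "type": "FORK_JOIN_DYNAMIC",
    "inputParameters": {
        # A runtime-supplied list of task definitions, one per branch.
        "dynamicTasks": "${workflow.input.fileTasks}",
        # A runtime-supplied map of inputs, keyed by task reference name.
        "dynamicTasksInput": "${workflow.input.fileTaskInputs}",
    },
    # Tell Conductor which input parameters carry the branches and inputs.
    "dynamicForkTasksParam": "dynamicTasks",
    "dynamicForkTasksInputParamName": "dynamicTasksInput",
}

# A dynamic fork is followed by a JOIN task that waits for every branch
# spawned at runtime to complete before the workflow continues.
join_task = {
    "name": "image_multiple_convert_resize_join",    # assumed name
    "taskReferenceName": "image_multiple_convert_resize_join_ref",
    "type": "JOIN",
}
```

The practical effect: adding a new output format or size means adding an entry to the workflow's input data, not editing and redeploying the workflow definition.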

Learn how to create a dynamic fork workflow in this post!