ENGINEERING

Orchestrating Long-Running APIs | A Step-by-Step Guide to Asynchronous Execution

Harshil
Software Engineer
Last updated: April 4, 2025
7 min read


Handling long-running APIs is a common challenge in modern applications. Whether you’re processing large datasets, generating reports, or interacting with third-party services, some API calls exceed the 30–60 second timeout window allowed by most systems. When an API takes too long to respond, workflows can break—resulting in failed tasks, degraded user experiences, or unnecessary retries.

To address this, we’ll implement a solution using asynchronous orchestration. Instead of waiting for the long-running task to complete, the workflow triggers it asynchronously, stores its progress in an external database, and uses polling to track its status until completion.

In this blog, we’ll walk through a scalable solution using Orkes Conductor—an orchestration platform—along with AWS Lambda, Amazon API Gateway, and Amazon DynamoDB.

Why are long-running APIs difficult?

APIs that take several minutes to complete are not uncommon, especially in systems that rely on compute-heavy workloads or external services. However, traditional synchronous API calls aren’t equipped to handle extended processing delays.

Let’s consider a simple HTTP task in Orkes Conductor or any workflow engine. These tasks typically expect a response within a certain timeout window (e.g., 60 seconds). If the external API takes longer than that, the task fails. This limitation affects:

  • User-facing features–UI elements may hang or fail without feedback.
  • System performance–Workflows fail and retries kick in, increasing the load on compute resources and downstream services.
  • Observability–There’s no clear visibility into the task’s state if the response never arrives.

Stretching timeout windows or blocking workflows isn’t a scalable solution. A better approach is a more resilient workflow architecture pattern—one that embraces asynchronous execution and decouples task initiation from task completion.

Solution: Use asynchronous HTTP invocation with polling

To reliably handle long-running APIs, we’ll implement an asynchronous invocation with a polling pattern using Orkes Conductor and AWS services. This avoids timeouts while maintaining full control and visibility over the process.

At a high level, the pattern involves:

  • Triggering the long-running task asynchronously
  • Tracking its execution state in an external store
  • Polling the task status until it is completed

Architecture for implementing asynchronous HTTP invocation with polling

Implementing asynchronous HTTP invocation with polling

We’ll implement this orchestration pattern using:

  • AWS Lambda–A serverless computing service that simulates the long-running task.
  • Amazon DynamoDB–A NoSQL database to store and track the status of each request.
  • Amazon API Gateway–A REST API layer to expose Lambda endpoints for invocation and polling.
  • Orkes Conductor–The orchestration platform that coordinates the asynchronous flow.

How asynchronous orchestration works

The core idea behind this pattern is:

  1. Trigger the long-running task asynchronously

The workflow initiates the long-running process using an HTTP task configured to perform a non-blocking request (e.g., invoking a Lambda function asynchronously via API Gateway). The task immediately returns a requestId, which is used as a unique reference for tracking progress.

  2. Track task progress externally

The Lambda function logs the execution state (e.g., PROCESSING or COMPLETED) in a NoSQL database like Amazon DynamoDB, which serves as a persistent data store for tracking task progress externally.

  3. Poll the task status until completion

The Conductor workflow periodically queries the status of the request using an HTTP Poll task. The polling continues until the task is marked as completed in the external system.

This model allows the workflow to remain active and aware of the task’s progress without exceeding timeout thresholds or blocking system resources.
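Stripped of the AWS specifics, the trigger-then-poll loop can be sketched in a few lines of Python. This is an illustrative sketch only; `trigger` and `get_status` stand in for the asynchronous Lambda invocation and the DynamoDB status lookup.

```python
import time

def run_long_task(trigger, get_status, request_id, interval_s=10, timeout_s=300):
    """Fire a long-running task without waiting, then poll until it completes.

    trigger(request_id)    -- kicks off the work (non-blocking)
    get_status(request_id) -- returns the current status string
    """
    trigger(request_id)  # step 1: fire and forget
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:  # step 3: poll at fixed intervals
        if get_status(request_id) == "COMPLETED":  # step 2: state lives externally
            return True
        time.sleep(interval_s)
    raise TimeoutError(f"{request_id} did not complete within {timeout_s}s")
```

The same structure is what the Conductor workflow expresses declaratively: the HTTP task plays the role of `trigger`, and the HTTP Poll task plays the role of the loop.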

Now that we understand the pattern conceptually, let’s walk through how to implement this using Orkes Conductor and AWS services.

Building a long-running API workflow

In this section, we’ll implement the long-running API orchestration pattern using Orkes Conductor. To follow along, sign in to the free Developer Edition.

Part 1: Set up AWS resources

Step 1: Create an Amazon DynamoDB table for status tracking

Start by creating a table in DynamoDB named lambda-invocation-status with requestId as the partition key.

Creating a table in Amazon DynamoDB

This table stores the execution status of each request.
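If you prefer creating the table from code rather than the console, a boto3 sketch follows. It assumes AWS credentials are configured; on-demand billing is an assumption here, and provisioned capacity works just as well.

```python
# Table spec mirroring the console setup: requestId as the partition key.
TABLE_SPEC = {
    "TableName": "lambda-invocation-status",
    "KeySchema": [{"AttributeName": "requestId", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "requestId", "AttributeType": "S"}],
    "BillingMode": "PAY_PER_REQUEST",  # assumption: on-demand capacity
}

def create_status_table():
    """Create the status-tracking table (requires AWS credentials and boto3)."""
    import boto3  # imported here so TABLE_SPEC can be inspected without AWS
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.create_table(**TABLE_SPEC)
    table.wait_until_exists()
    return table
```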

Step 2: Set up a long-running task in AWS Lambda

Next, we’ll create a Lambda function that:

  • Accepts a requestId and an action
  • Simulates a long-running task
  • Updates the task’s status in DynamoDB

Create an AWS Lambda function using the Python code:

python
import json
import time
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('lambda-invocation-status')

def lambda_handler(event, context):
    request_id = event['requestId']
    action = event.get("action")

    if action == "invoke":
        table.put_item(Item={'requestId': request_id, 'status': 'PROCESSING'})

        # Simulate long processing time
        time.sleep(2)
        process_long_running_task(request_id)

        return {
            "statusCode": 202,
            "body": json.dumps({"requestId": request_id, "status": "PROCESSING"})
        }

    elif action == "status":
        response = table.get_item(Key={'requestId': request_id})
        item = response.get('Item')
        if item:
            return {"statusCode": 200, "body": json.dumps(item)}
        else:
            return {"statusCode": 404, "body": json.dumps({"status": "NOT_FOUND"})}

def process_long_running_task(request_id):
    time.sleep(25)  # Simulate processing time
    table.update_item(
        Key={'requestId': request_id},
        UpdateExpression='SET #s = :val',
        ExpressionAttributeNames={'#s': 'status'},
        ExpressionAttributeValues={':val': 'COMPLETED'}
    )

Increase the Lambda timeout to 1 minute and attach an IAM role with read/write access to DynamoDB.

Creating an AWS Lambda function
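If you want a least-privilege alternative to broad DynamoDB access, the function only needs the three operations it calls. A sketch of such a policy, with region and account ID as placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem", "dynamodb:GetItem", "dynamodb:UpdateItem"],
      "Resource": "arn:aws:dynamodb:<region>:<account-id>:table/lambda-invocation-status"
    }
  ]
}
```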

Step 3: Set up API Gateway endpoints

Create a REST API in Amazon API Gateway and expose the Lambda function with two endpoints:

  • POST /invoke-lambda–Triggers the task.

API for invoking Lambda function

  • GET /{requestId}–Returns the current task status.

API for getting the long running API task status

Deploy the API and note the endpoint URLs; they'll be used in the workflow definition.

Part 2: Build the Orkes Conductor workflow

With your AWS resources ready, it’s time to orchestrate the flow in Orkes Conductor. Let’s build a workflow that:

  1. Invokes the Lambda function using an HTTP task.
  2. Polls the status endpoint until the task is completed using the HTTP Poll task.

Step 1: Invoke Lambda asynchronously

Use an HTTP task in Conductor to call the Lambda endpoint via API Gateway, passing a requestId in the body. The Lambda will start the task and return a 202 response.

Calling Lambda endpoint using HTTP task

Step 2: Poll for task completion

Use an HTTP Poll task to query the status endpoint at regular intervals. The polling continues until the external system returns a "COMPLETED" status.

This behavior is configured using the following termination condition in the HTTP Poll task:

javascript
(function(){
  return $.output.response.statusCode == 200 &&
         $.output.response.body.body.status == "COMPLETED";
})();

Querying long running task status at regular intervals using HTTP Poll task
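For readability, the same check can be expressed as a Python function. This is illustrative only; Conductor evaluates the JavaScript version above, and the nested `body.body` mirrors how the Lambda's JSON-encoded body arrives via API Gateway.

```python
import json

def is_completed(poll_output: dict) -> bool:
    """Python mirror of the HTTP Poll termination condition: stop once the
    status endpoint returns 200 with a COMPLETED status."""
    response = poll_output.get("response", {})
    body = response.get("body", {}).get("body", {})
    if isinstance(body, str):  # the Lambda json.dumps-es its body
        body = json.loads(body)
    return response.get("statusCode") == 200 and body.get("status") == "COMPLETED"
```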

Creating the Conductor workflow

Create a workflow in Developer Edition by navigating to Definitions > Workflow. In the Code tab, paste the following code:

json
{
 "name": "LongRunningAPIWorkflow",
 "description": "Sample workflow",
 "version": 1,
 "tasks": [
   {
     "name": "InvokeLambdaTask",
     "taskReferenceName": "invokeLambda",
     "inputParameters": {
       "http_request": {
         "uri": "https://<your-api-gateway-id>.execute-api.<your-region>.amazonaws.com/test/invoke-lambda",
         "method": "POST",
         "headers": {
           "Content-Type": "application/json",
           "X-Amz-Invocation-Type": "Event"
         },
         "body": {
           "requestId": "${workflow.input.requestId}"
         },
         "accept": "application/json",
         "contentType": "application/json"
       }
     },
     "type": "HTTP"
   },
   {
     "name": "http_poll",
     "taskReferenceName": "http_poll_ref",
     "inputParameters": {
       "http_request": {
         "uri": "https://<your-api-gateway-id>.execute-api.<your-region>.amazonaws.com/test/${workflow.input.requestId}",
         "method": "GET",
         "accept": "application/json",
         "contentType": "application/json",
          "terminationCondition": "(function() { return $.output.response.statusCode == 200 && $.output.response.body.body.status == \"COMPLETED\"; })();",
         "pollingInterval": "10",
         "pollingStrategy": "FIXED",
         "encode": true
       }
     },
     "type": "HTTP_POLL"
   }
 ],
 "inputParameters": [
   "requestId"
 ],
 "schemaVersion": 2
}

Your workflow will look like this:

Long Running API workflow in Orkes Conductor

Before running the workflow, update the URLs in your task definitions with your actual deployed API Gateway endpoints.

In InvokeLambdaTask (HTTP task), replace the URL with your deployed API endpoint for invoking the Lambda:

text
https://<your-api-gateway-id>.execute-api.<your-region>.amazonaws.com/<stage>/invoke-lambda

In the http_poll task, replace the polling URL with the endpoint for checking Lambda status:

text
https://<your-api-gateway-id>.execute-api.<your-region>.amazonaws.com/<stage>/${workflow.input.requestId}

Once configured, test the workflow with a sample input:

json
{
  "requestId": "12345"
}

The requestId serves as a unique identifier for each task instance. It is passed into the workflow, forwarded to the Lambda function, and used to poll the task’s status from DynamoDB.
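In production, rather than a fixed value like "12345", you'd typically generate a unique requestId per run, for example with Python's standard uuid module:

```python
import uuid

def new_request_id() -> str:
    """Generate a collision-resistant requestId for each workflow run."""
    return uuid.uuid4().hex  # 32 hex characters, e.g. for the workflow input

workflow_input = {"requestId": new_request_id()}
```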

When the workflow is run, the InvokeLambdaTask triggers the Lambda function. The http_poll task continuously checks the status until it is completed.
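As an alternative to running the workflow from the UI, you can start it over Conductor's REST API (POST /api/workflow/{name}). A minimal sketch using only the standard library; the cluster URL and access token below are placeholders, and Orkes clusters expect the token in the X-Authorization header:

```python
import json
import urllib.request

def build_start_request(server_url, workflow_name, workflow_input, token):
    """Build the HTTP request that starts a workflow run via
    POST {server_url}/api/workflow/{workflow_name}."""
    return urllib.request.Request(
        url=f"{server_url}/api/workflow/{workflow_name}",
        data=json.dumps(workflow_input).encode(),
        headers={"Content-Type": "application/json", "X-Authorization": token},
        method="POST",
    )

# To actually run it (needs a live cluster and a valid token):
# req = build_start_request("https://<your-cluster-url>", "LongRunningAPIWorkflow",
#                           {"requestId": "12345"}, "<your-token>")
# workflow_id = urllib.request.urlopen(req).read().decode()
```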

Congratulations—you’ve successfully orchestrated a long-running API using Orkes Conductor and AWS. For a complete implementation walkthrough, refer to the tutorial on orchestrating long-running APIs.

Summary

Long-running APIs can easily break synchronous workflows, but with the right architecture, you can handle them reliably and at scale.

By combining asynchronous invocation, external status tracking, and polling using Orkes Conductor, you can build resilient and timeout-proof workflows. This pattern can be adapted for a wide range of real-world scenarios, from third-party service orchestration to internal background processing.

Ready to build scalable workflows that can handle long-running operations without breaking? Try it out using Orkes Conductor.

–

Orkes Conductor is an enterprise-grade orchestration platform for process automation, API and microservices orchestration, agentic workflows, and more. Check out the full set of features, or try it yourself using our free Developer Edition.