Updating Task Definitions
This API updates an existing task definition.
Input Payload
You can update task definitions directly from the UI or via the API. A task definition includes the following parameters:
Attribute | Description |
---|---|
name | Provide a unique name to identify the task. This field is mandatory. |
description | Include a description that indicates the purpose of the task. This field is optional. |
retryCount | The number of retries to attempt when a task fails. The default value is 3. |
retryDelaySeconds | Indicates the time (in seconds) to wait before each retry occurs. The default value is 60. |
retryLogic | Indicates the mechanism for the retries. It can take any of the following values: FIXED, EXPONENTIAL_BACKOFF, LINEAR_BACKOFF. |
timeOutSeconds | Time (in seconds) after which the task is marked as TIMED_OUT if it is not completed after first transitioning to the IN_PROGRESS status. No timeout occurs if the value is set to 0. |
pollTimeoutSeconds | Time (in seconds), after which the task is marked as TIMED_OUT if not polled by a worker. No timeout occurs if the value is set to 0. |
timeoutPolicy | Indicates the action to take when the task times out. It can take any of the following values: RETRY, TIME_OUT_WF, ALERT_ONLY. |
responseTimeoutSeconds | Time (in seconds) after which the task is rescheduled if the worker does not update its status. The default value is 3600. |
inputKeys | An array of keys for the task’s expected input. |
outputKeys | An array of keys for the task’s expected output. |
inputTemplate | Input templates are defined as part of the task definition and act as the default input to the task when it is added to a workflow. Values provided in the workflow definition override them. Note: Input templates do not appear in the workflow definition JSON code because they belong only to the task definition; clicking the task in the UI displays the supplied input template. |
concurrentExecLimit | Indicates the number of tasks that can be executed at any given time. For example, if you have 1000 task executions waiting in the queue and 1000 workers polling this queue, but concurrentExecLimit is set to 10, only ten tasks are given to workers (leading to starvation of the remaining tasks). Whenever a worker finishes an execution, the next task is taken from the queue, keeping the concurrent execution count at 10. |
backOffScaleFactor | The value to be multiplied with retryDelaySeconds in order to determine the interval for retry. |
rateLimitFrequencyInSeconds, rateLimitPerFrequency | Together these define the rate limit: rateLimitFrequencyInSeconds sets the duration of the frequency window, and rateLimitPerFrequency sets the maximum number of tasks given to workers per window. For example, with rateLimitFrequencyInSeconds=5 and rateLimitPerFrequency=12, the frequency window is 5 seconds long, and in each window Conductor gives at most 12 tasks to workers. So in a given minute, Conductor gives out at most 12*(60/5) = 144 tasks, irrespective of the number of workers polling for the task. Unlike concurrentExecLimit, rate limiting does not consider tasks already in progress or completed: whether the previous tasks finished within a second or take a few days, new tasks are still given to workers at the configured frequency, 144 tasks per minute in this example. |
ownerEmail | This field will be auto-populated with the user's email address. |
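As a rough sketch, the parameters above could be assembled into an update payload like the following. This is an illustrative Python dict, not an official example: the task name, keys, and values are hypothetical, and field names follow the table above.

```python
# Illustrative task-definition payload built from the fields in the table above.
# The name, keys, email, and values are examples, not taken from the source.
task_def = {
    "name": "process_payment",         # mandatory, must be unique
    "description": "Handles payment processing",
    "retryCount": 3,                   # default: 3
    "retryDelaySeconds": 60,           # default: 60
    "responseTimeoutSeconds": 3600,    # default: 3600
    "inputKeys": ["orderId", "amount"],
    "outputKeys": ["status"],
    "concurrentExecLimit": 10,
    "rateLimitFrequencyInSeconds": 5,
    "rateLimitPerFrequency": 12,
    "ownerEmail": "user@example.com",  # auto-populated in practice
}

# Rate-limit arithmetic from the table: tasks handed to workers per minute
# equals rateLimitPerFrequency times the number of windows per minute.
tasks_per_minute = task_def["rateLimitPerFrequency"] * (
    60 // task_def["rateLimitFrequencyInSeconds"]
)
# 12 * (60 / 5) = 144 tasks per minute
```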
API Endpoint
PUT /api/metadata/taskdefs
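A minimal sketch of calling this endpoint over plain HTTP, assuming a Conductor server at a hypothetical base URL. The request is only built here, not sent; against a live server you would pass it to `urllib.request.urlopen` (authentication headers, if your deployment requires them, are omitted).

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumption: adjust for your deployment

def build_update_request(task_def: dict, base_url: str = BASE_URL) -> urllib.request.Request:
    """Build the PUT request for the /api/metadata/taskdefs endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/api/metadata/taskdefs",
        data=json.dumps(task_def).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_update_request({"name": "process_payment", "retryCount": 5})
# urllib.request.urlopen(req)  # uncomment to send against a running server
```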
Client SDK Methods
- Java
- Go
- Python
- C#
- JavaScript
- TypeScript
- Clojure
void OrkesMetadataClient.updateTaskDef(TaskDef taskDef)
func (a *MetadataResourceApiService) UpdateTaskDef(ctx context.Context, body model.TaskDef) (*http.Response, error)
MetadataResourceApi.update_task_def(body, **kwargs)
Object MetadataResourceApi.UpdateTaskDef(TaskDef body)
MetadataResourceService.updateTaskDef(requestBody: TaskDef): CancelablePromise<any>
MetadataResourceService.updateTaskDef(requestBody: TaskDef): CancelablePromise<any>
(metadata/update-task-definition options task-definition)
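To illustrate the call pattern of the Python method above without a server, the sketch below uses minimal stand-in classes; in real use, `MetadataResourceApi` and the `TaskDef` model come from the Conductor Python SDK, and the call issues the PUT request shown earlier.

```python
# Stand-ins for the SDK types so the call pattern runs without the client
# installed; these are illustrative only, not the real SDK classes.
class TaskDef(dict):
    """Minimal stand-in for the SDK's TaskDef model."""

class MetadataResourceApi:
    """Minimal stand-in exposing the documented update_task_def signature."""
    def update_task_def(self, body, **kwargs):
        # The real client sends `body` via PUT /api/metadata/taskdefs.
        return body

api = MetadataResourceApi()
result = api.update_task_def(TaskDef(name="process_payment", retryCount=5))
```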