LLM Text Complete
The LLM Text Complete task is used to generate a natural language response based on the provided context.
An LLM Text Complete task uses a large language model (LLM) to generate a text completion from the input context. Configuring the task involves selecting an LLM provider, specifying the model, and defining the prompt and its variables. Additional LLM parameters control the randomness and length of the generated text.
Prerequisites
- Integrate the required AI model with Orkes Conductor.
- Create the required AI prompt for the task.
Task parameters
Configure these parameters for the LLM Text Complete task.
Parameter | Description | Required/ Optional |
---|---|---|
inputParameters.llmProvider | The integration name of the LLM provider integrated with your Conductor cluster. Note: If you haven’t configured your AI/LLM provider on your Orkes Conductor cluster, go to the Integrations tab and configure your required provider. | Required. |
inputParameters.model | The available language models within the selected LLM provider. For example, if your LLM provider is Azure OpenAI and you've configured text-davinci-003 as the language model, you can select it here. | Required. |
inputParameters.promptName | The AI prompt created in Orkes Conductor. Note: If you haven’t created an AI prompt for your language model, refer to the documentation on creating AI Prompts in Orkes Conductor. | Required. |
inputParameters.promptVariables | For prompts that involve variables, provide the input values for those variables within this field (see the example template after this table). Supported types are string, number, boolean, null, and object/array. | Optional. |
inputParameters.temperature | A parameter to control the randomness of the model’s output. Higher temperatures, such as 1.0, make the output more random and creative. A lower value makes the output more deterministic and focused. Tip: If you're using a text blurb as input and want to categorize it based on its content type, opt for a lower temperature setting. Conversely, if you're providing text inputs and intend to generate content like emails or blogs, it's advisable to use a higher temperature setting. | Optional. |
inputParameters.stopWords | A list of words to be omitted during text generation. Supports string and object/array. In LLMs, stop words may be filtered out or given less importance during text generation so that the generated text stays coherent and contextually relevant. | Optional. |
inputParameters.topP | Another parameter to control the randomness of the model's output. It defines a probability threshold and restricts sampling to the smallest set of tokens whose cumulative probability reaches that threshold. Example: To complete the sentence "She walked into the room and saw a __.", suppose the model's top candidates are "cat" (0.4), "dog" (0.3), "book" (0.2), and "chair" (0.1). With topP set to 0.7, only "cat" and "dog" are considered, because their cumulative probability (0.7) reaches the threshold; the remaining tokens are excluded. | Optional. |
inputParameters.maxTokens | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token is approximately four characters. | Optional. |
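The prompt template referenced by promptName is created separately in Orkes Conductor. For the configuration example later on this page, a template named translation that uses the input and language variables might look like the following (an illustrative sketch, not a template shipped with Conductor):

```
Translate the following text into ${language}: ${input}
```

At runtime, the values supplied in promptVariables are substituted into the ${...} placeholders before the prompt is sent to the model.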
Caching parameters
You can cache the task outputs using the following parameters. Refer to Caching Task Outputs for a full guide.
Parameter | Description | Required/ Optional |
---|---|---|
cacheConfig.ttlInSecond | The time to live in seconds, which is the duration for the output to be cached. | Required if using cacheConfig. |
cacheConfig.key | The cache key is a unique identifier for the cached output and must be constructed exclusively from the task's input parameters. It can be a string concatenation that contains the task's input keys, such as ${uri}-${method} or re_${uri}_${method}. | Required if using cacheConfig. |
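For example, to reuse a completed response instead of calling the LLM again, a cacheConfig block can be added alongside the task's name and type fields in the task configuration. This is a minimal sketch; the TTL and key are illustrative, and the key is built only from the task's input parameters:

```json
"cacheConfig": {
  "ttlInSecond": 3600,
  "key": "${promptName}-${model}"
}
```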
Schema parameters
You can enforce input/output validation for the task using the following parameters. Refer to Schema Validation for a full guide.
Parameter | Description | Required/ Optional |
---|---|---|
taskDefinition.enforceSchema | Whether to enforce schema validation for task inputs/outputs. Set to true to enable validation. | Optional. |
taskDefinition.inputSchema | The name and type of the input schema to be associated with the task. | Required if enforceSchema is set to true. |
taskDefinition.outputSchema | The name and type of the output schema to be associated with the task. | Required if enforceSchema is set to true. |
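These schema settings belong to the task definition rather than the workflow's task configuration. A minimal sketch, assuming hypothetical schemas named llm_input_schema and llm_output_schema have already been registered:

```json
{
  "name": "llm_text_complete_task",
  "enforceSchema": true,
  "inputSchema": { "name": "llm_input_schema", "type": "JSON" },
  "outputSchema": { "name": "llm_output_schema", "type": "JSON" }
}
```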
Other generic parameters
Here are other parameters for configuring the task behavior.
Parameter | Description | Required/ Optional |
---|---|---|
optional | Whether the task is optional. The default is false. If set to true, the workflow continues to the next task even if this task fails or remains incomplete. | Optional. |
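For instance, to let the workflow proceed even if text completion fails, the flag sits at the top level of the task configuration (a sketch with inputParameters omitted for brevity):

```json
{
  "name": "llm_text_complete_task",
  "taskReferenceName": "llm_text_complete_task_ref",
  "type": "LLM_TEXT_COMPLETE",
  "optional": true
}
```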
Task configuration
This is the task configuration for an LLM Text Complete task.
```json
{
  "name": "llm_text_complete_task",
  "taskReferenceName": "llm_text_complete_task_ref",
  "inputParameters": {
    "llmProvider": "azure_openai",
    "model": "text-davinci-003",
    "promptName": "translation",
    "promptVariables": {
      "input": "${workflow.input.input}",
      "language": "${workflow.input.language}"
    },
    "temperature": 1,
    "stopWords": ["a", "and", "the"],
    "topP": 0.8,
    "maxTokens": 150
  },
  "type": "LLM_TEXT_COMPLETE"
}
```
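With this configuration, the prompt variables are read from the workflow input. For instance, starting the workflow with the following input (the values are illustrative) asks the model to translate the text into Spanish:

```json
{
  "input": "Hello, how are you?",
  "language": "Spanish"
}
```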
Task output
The LLM Text Complete task returns the following parameters.
Parameter | Description |
---|---|
result | The completed text by the LLM. |
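For the translation example above, the task output might look like this (the completed text is illustrative):

```json
{
  "result": "Hola, ¿cómo estás?"
}
```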
Adding an LLM Text Complete task in UI
To add an LLM Text Complete task:
- In your workflow, select the (+) icon and add an LLM Text Complete task.
- Choose the LLM provider, Model, and Prompt template.
- (Optional) Click +Add variable to provide the variable path if your prompt template includes variables.
- (Optional) Set the parameters Temperature, Stop words, TopP, and Token Limit.