
LLM Text Complete

The LLM Text Complete task is used to generate a natural language response based on the provided context.

An LLM Text Complete task uses a large language model (LLM) to generate a text completion from the input context. Configuring the task involves selecting an LLM provider, specifying the model, and defining the prompt and its variables. Additional LLM parameters control the randomness and length of the generated text.

Task parameters

Configure these parameters for the LLM Text Complete task.

inputParameters.llmProvider (Required)
The LLM provider to use. You can choose providers for which you have access to at least one model.
Note: If you haven’t configured your AI/LLM provider on your Orkes Conductor cluster, go to the Integrations tab and set it up. Refer to the documentation for integrating LLM providers with Orkes Conductor.

inputParameters.model (Required)
The language model to use, from those offered by the selected LLM provider. You can only choose models to which you have access.
For example, if your LLM provider is Azure OpenAI and you have configured text-davinci-003 as the language model, you can select it here.

inputParameters.promptName (Required)
The AI prompt created in Orkes Conductor. You can only use prompts to which you have access.
Note: If you haven’t created an AI prompt for your language model, refer to the documentation on creating AI prompts in Orkes Conductor.

inputParameters.promptVariables (Optional)
For prompts that contain variables, provide the inputs to those variables in this field. Values can be a string, number, boolean, null, or object/array.

inputParameters.temperature (Optional)
A parameter that controls the randomness of the model’s output. A higher temperature, such as 1.0, makes the output more random and creative, while a lower value makes it more deterministic and focused.
Tip: If you're using a text blurb as input and want to categorize it by content type, opt for a lower temperature setting. Conversely, if you're providing text inputs and intend to generate content such as emails or blogs, use a higher temperature setting.

inputParameters.stopWords (Optional)
A list of words to omit during text generation. Supports string and object/array.
In an LLM, stop words may be filtered out or given less weight during generation so that the output remains coherent and contextually relevant.

inputParameters.topP (Optional)
Another parameter that controls the randomness of the model’s output. It sets a probability threshold: the model samples only from the smallest set of the most likely tokens whose cumulative probability reaches that threshold. (See the sketch after this list.)
Example: Suppose you want to complete the sentence “She walked into the room and saw a __.” The top few tokens the model would consider, by probability, might be:
  • Cat - 35%
  • Dog - 25%
  • Book - 15%
  • Chair - 10%
If you set topP to 0.70, the model accumulates tokens until their cumulative probability reaches or exceeds 70%:
  1. Add "Cat" (35%), bringing the cumulative probability to 35%.
  2. Add "Dog" (25%), bringing the cumulative probability to 60%.
  3. Add "Book" (15%), bringing the cumulative probability to 75%.
At 75%, the cumulative probability exceeds the topP value of 0.70, so the model stops and randomly selects one of "Cat," "Dog," or "Book" to complete the sentence; together these tokens account for roughly 75% of the likelihood.

inputParameters.maxTokens (Optional)
The maximum number of tokens the LLM generates and returns as part of the result. A token is approximately four characters.
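
To make temperature and topP concrete, here is a minimal, self-contained Python sketch. It is purely illustrative and not part of the Conductor API; it reuses the hypothetical token probabilities from the example above.

import math
import random

# Hypothetical next-token probabilities from the example above; the
# remaining 15% of probability mass belongs to other, less likely tokens.
probs = {"Cat": 0.35, "Dog": 0.25, "Book": 0.15, "Chair": 0.10}

def apply_temperature(probs, temperature):
    # Temperature rescales log-probabilities before re-normalizing:
    # values below 1.0 sharpen the distribution, values above flatten it.
    scaled = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    return {t: s / total for t, s in scaled.items()}

def top_p_sample(probs, top_p):
    # Nucleus sampling: take tokens in descending probability until the
    # cumulative probability reaches or exceeds top_p, then sample from
    # that set, weighted by probability.
    nucleus, cumulative = [], 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    tokens, weights = zip(*nucleus)
    return random.choices(tokens, weights=weights)[0]

# With top_p = 0.70, the nucleus is {Cat, Dog, Book} (cumulative 75%),
# matching the walkthrough above.
print(top_p_sample(probs, top_p=0.70))

# A low temperature concentrates probability on "Cat"; a high one
# spreads it more evenly across all four tokens.
print(apply_temperature(probs, temperature=0.5))

In a real model the distribution covers the whole vocabulary and topP is applied to the temperature-adjusted probabilities; the sketch keeps the two steps separate so the numbers match the walkthrough.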

Task configuration

This is the task configuration for an LLM Text Complete task.

{
  "name": "llm_text_complete_task",
  "taskReferenceName": "llm_text_complete_task_ref",
  "inputParameters": {
    "llmProvider": "azure_openai",
    "model": "text-davinci-003",
    "promptName": "translation",
    "promptVariables": {
      "input": "${workflow.input.input}",
      "language": "${workflow.input.language}"
    },
    "temperature": 1,
    "stopWords": ["a", "and", "the"],
    "topP": 0.8,
    "maxTokens": 150
  },
  "type": "LLM_TEXT_COMPLETE"
}
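
For context, this task configuration sits inside a workflow definition's tasks array. Below is a minimal sketch of a hypothetical workflow (the workflow name, description, and output mapping are illustrative) that feeds its input and language inputs into the prompt variables and exposes the task's result as a workflow output:

{
  "name": "translation_workflow",
  "description": "Translate the input text into the requested language",
  "version": 1,
  "schemaVersion": 2,
  "inputParameters": ["input", "language"],
  "tasks": [
    {
      "name": "llm_text_complete_task",
      "taskReferenceName": "llm_text_complete_task_ref",
      "inputParameters": {
        "llmProvider": "azure_openai",
        "model": "text-davinci-003",
        "promptName": "translation",
        "promptVariables": {
          "input": "${workflow.input.input}",
          "language": "${workflow.input.language}"
        },
        "temperature": 1,
        "topP": 0.8,
        "maxTokens": 150
      },
      "type": "LLM_TEXT_COMPLETE"
    }
  ],
  "outputParameters": {
    "translation": "${llm_text_complete_task_ref.output.result}"
  }
}

The outputParameters entry uses the standard Conductor expression syntax to read this task's result output from elsewhere in the workflow.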

Task output

The LLM Text Complete task will return the following parameters.

result
The text completion generated by the LLM.
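
For example, with the translation prompt configured above, the task output might look like this (the completed text itself is illustrative):

{
  "result": "Bonjour, comment allez-vous ?"
}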

Adding an LLM Text Complete task in UI

To add an LLM Text Complete task:

  1. In your workflow, select the (+) icon and add an LLM Text Complete task.
  2. Choose the LLM provider, Model, and Prompt template.
  3. (Optional) Click +Add variable to provide the variable path if your prompt template includes variables.
  4. (Optional) Set the parameters Temperature, Stop words, TopP, and Token Limit.

[Screenshot: LLM Text Complete task in the UI]