LLM Text Complete
A system task that predicts and generates the next words or phrases for a given text, based on the provided context.
Definitions
{
"name": "llm_text_complete_task",
"taskReferenceName": "llm_text_complete_task_ref",
"inputParameters": {
"llmProvider": "azure_openai",
"model": "text-davinci-003",
"promptName": "translation",
"promptVariables": {
"input": "${workflow.input.input}",
"language": "${workflow.input.language}"
},
"temperature": 1,
"stopWords": [
"a",
"and",
"the"
],
"topP": 0.8,
"maxTokens": "150"
},
"type": "LLM_TEXT_COMPLETE",
}
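Conductor resolves expressions such as `${workflow.input.input}` on the server before the prompt is sent to the LLM. The following is a minimal sketch of that substitution idea only; the prompt text is hypothetical (it stands in for the `translation` prompt above) and this is not Conductor's actual resolution engine.

```python
from string import Template

# Hypothetical prompt body for a "translation"-style prompt. Conductor's own
# ${...} expressions are resolved server-side; this only illustrates how
# promptVariables map onto placeholders in the prompt text.
prompt_template = Template("Translate the following text to $language: $input")

# Stand-in for the workflow input that feeds promptVariables.
workflow_input = {"input": "Hello, world", "language": "Spanish"}

prompt = prompt_template.substitute(workflow_input)
print(prompt)  # Translate the following text to Spanish: Hello, world
```

Here `input` and `language` play the same role as the keys under `promptVariables` in the definition above.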
Input Parameters
Parameter | Description |
---|---|
llmProvider | Select the required LLM provider. You can only choose providers for which you have access to at least one model. Note: If you haven't configured your AI/LLM provider on your Orkes console, navigate to the Integrations tab and set it up. Refer to the documentation on integrating LLM providers with the Orkes console and providing access to required groups. |
model | Choose from the available language models provided by the selected LLM provider. You can only choose models for which you have access. For example, if your LLM provider is Azure OpenAI and you have configured text-davinci-003 as the language model, you can select it here. |
promptName | Select the AI prompt created in Orkes Conductor. You can only use prompts for which you have access. Note: If you haven't created an AI prompt for your language model, refer to the documentation on creating AI prompts in Orkes Conductor and providing access to required groups. |
promptVariables | For prompts that include variables, provide the inputs to those variables in this field. Refer to the documentation on the different ways to pass parameters in Conductor. |
temperature | A parameter to control the randomness of the model's output. A higher value, such as 1.0, makes the output more random and creative, while a lower value makes it more deterministic and focused. For example, if you are using a text blurb as input and want to categorize it by content type, opt for a lower temperature setting. Conversely, if you are providing text inputs to generate content such as emails or blogs, a higher temperature setting is advisable. |
stopWords | A list of words to omit during text generation. Supports string and object/array formats. In an LLM, stop words may be filtered out or given less weight during generation to keep the output coherent and contextually relevant. |
topP | Another parameter to control the randomness of the model's output. It defines a probability threshold: at each step, the model samples only from the smallest set of candidate tokens whose cumulative probability exceeds this threshold. For example, to complete the sentence "She walked into the room and saw a __" with topP set to 0.8, the model would consider only the most likely next words whose probabilities add up to at least 0.8 and ignore the rest. |
maxTokens | The maximum number of tokens for the LLM to generate and return as part of the result. A token is approximately four characters. |
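To make the topP (nucleus sampling) parameter concrete, here is a minimal sketch of the filtering step, assuming a hypothetical next-token distribution for the example sentence above. The candidate words and probabilities are invented for illustration; real LLMs operate over much larger vocabularies.

```python
def top_p_filter(probs, top_p=0.8):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize so the kept probabilities sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Hypothetical candidates for: "She walked into the room and saw a __."
candidates = {"cat": 0.4, "dog": 0.3, "ghost": 0.2, "table": 0.1}

# With topP = 0.8, "table" is dropped: cat + dog + ghost already
# reach the 0.8 cumulative threshold.
print(top_p_filter(candidates, top_p=0.8))
```

Temperature and topP interact: temperature reshapes the probabilities themselves, while topP trims the tail of unlikely tokens before sampling.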
Output Parameters
Parameter | Description |
---|---|
result | The completed text generated by the LLM. |
Examples
UI

- Add a task of type LLM Text Complete.
- Choose the LLM provider, model, and prompt template.
- Provide the input parameters.

JSON
{
"name": "llm_text_complete_task",
"taskReferenceName": "llm_text_complete_task_ref",
"inputParameters": {
"llmProvider": "azure_openai",
"model": "text-davinci-003",
"promptName": "translation",
"promptVariables": {
"input": "${workflow.input.input}",
"language": "${workflow.input.language}"
},
"temperature": 1,
"stopWords": [
"a",
"and",
"the"
],
"topP": 0.8,
"maxTokens": "150"
},
"type": "LLM_TEXT_COMPLETE",
}