
LLM Chat Complete

The LLM Chat Complete task is used to complete a chat query based on additional instructions. It can be used to govern the model's behavior to minimize deviation from the intended objective.

The LLM Chat Complete task processes a chat query by taking the user's input and generating a response based on the supplied instructions and parameters. This helps the model to stay focused on the objective and provides control over the model's output behavior.

Task parameters

Configure these parameters for the LLM Chat Complete task.

inputParameters.llmProvider (Required)
The LLM provider. You can choose providers for which you have access to at least one model.

Note: If you haven’t configured your AI/LLM provider on your Orkes Conductor cluster, go to the Integrations tab and set it up. Refer to the documentation for integrating LLM providers with Orkes Conductor.

inputParameters.model (Required)
The language model to use from the selected LLM provider. You can only choose models to which you have access.

For example, if your LLM provider is Azure OpenAI and you’ve configured text-davinci-003 as the language model, you can select it here.

inputParameters.instructions (Required)
The ground rules or instructions for the chat, so the model responds only to specific queries and does not deviate from the objective. You can also save the instructions as an AI prompt and reference it here. Only prompts that you have access to can be used.

Note: If you haven’t created an AI prompt for your language model, refer to the documentation on creating AI prompts in Orkes Conductor.

inputParameters.promptVariables (Optional)
For prompts that include variables, provide the input for those variables in this field. Supported types: string, number, boolean, null, and object/array.

inputParameters.messages (Optional)
The list of messages that make up the chat query (see the example after this parameter list). Each entry contains the following fields:
  • role
  • message

inputParameters.messages.role (Optional)
The role for each message in the chat. Available options are user, assistant, system, and human.
  • The roles “user” and “human” represent the user asking questions or initiating the conversation.
  • The roles “assistant” and “system” refer to the model responding to the user queries.

inputParameters.messages.message (Optional)
The corresponding message content. It can also be passed as a variable.

inputParameters.temperature (Optional)
A parameter to control the randomness of the model’s output. Higher temperatures, such as 1.0, make the output more random and creative, while a lower value makes it more deterministic and focused.

Tip: If you're using a text blurb as input and want to categorize it based on its content type, opt for a lower temperature setting. Conversely, if you're providing text inputs and intend to generate content like emails or blogs, it's advisable to use a higher temperature setting.

inputParameters.stopWords (Optional)
A list of words to be omitted during text generation. Supports string and object/array types.

In LLMs, stop words may be filtered out or given less importance during text generation to ensure that the generated text is coherent and contextually relevant.

inputParameters.topP (Optional)
Another parameter to control the randomness of the model’s output. It defines a probability threshold, and the model considers only the most probable tokens whose cumulative probability reaches that threshold.

Example: Imagine you want to complete the sentence: “She walked into the room and saw a __.” The top few words the LLM would consider, based on the highest probabilities, would be:
  • Cat - 35%
  • Dog - 25%
  • Book - 15%
  • Chair - 10%
If you set the top-p parameter to 0.70, the LLM considers tokens until their cumulative probability reaches or exceeds 70%. Here's how it works:
  1. Add "Cat" (35%) to the cumulative probability.
  2. Add "Dog" (25%), bringing the cumulative probability to 60%.
  3. Add "Book" (15%), bringing the cumulative probability to 75%.
At this point, the cumulative probability is 75%, which exceeds the top-p value of 70%, so no further tokens are added. The LLM then randomly selects one of the tokens "Cat," "Dog," or "Book" to complete the sentence, because these tokens collectively account for approximately 75% of the likelihood.

inputParameters.maxTokens (Optional)
The maximum number of tokens to be generated by the LLM and returned as part of the result. A token is approximately four characters.

inputParameters.jsonOutput (Optional)
Determines whether the LLM’s response is parsed as JSON. When set to true, the model’s response is processed as structured JSON data.
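
To illustrate how these parameters fit together, here is a sketch of the inputParameters for a multi-turn chat. The provider, model, saved prompt name, prompt variable, and message contents are placeholder values; the messages array passes earlier turns of the conversation back to the model, and stopWords is supplied as an array:

"inputParameters": {
  "llmProvider": "openai",
  "model": "gpt-4",
  "instructions": "customer-support-prompt",
  "promptVariables": {
    "language": "English"
  },
  "messages": [
    {
      "role": "user",
      "message": "Where is my order?"
    },
    {
      "role": "assistant",
      "message": "Could you share your order number so I can look it up?"
    },
    {
      "role": "user",
      "message": "${workflow.input.orderNumber}"
    }
  ],
  "temperature": 0,
  "topP": 0.7,
  "maxTokens": 200,
  "stopWords": ["goodbye"],
  "jsonOutput": false
}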

Task configuration

This is the task configuration for an LLM Chat Complete task.

{
  "name": "llm_chat_complete",
  "taskReferenceName": "llm_chat_complete_ref",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4",
    "instructions": "your-prompt-template", // Hardcoded instructions, or select a saved AI prompt here
    "messages": [
      {
        "role": "user",
        "message": "${workflow.input.text}"
      }
    ],
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 4,
    "stopWords": "spam",
    "promptVariables": {
      "text": "${workflow.input.text}",
      "language": "${workflow.input.language}"
    },
    "jsonOutput": true
  },
  "type": "LLM_CHAT_COMPLETE"
}
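
The promptVariables above supply values for variables in the selected prompt template. As an illustration only, and assuming the ${variable} placeholder syntax used by Orkes Conductor AI prompts, the template referenced by your-prompt-template might look something like:

Translate the following text into ${language}: ${text}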

Task output

The LLM Chat Complete task will return the following parameter.

result
The completed chat generated by the LLM.
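
For illustration, the task output might look like the following (the response text is hypothetical). Downstream tasks can reference it with an output expression such as ${llm_chat_complete_ref.output.result}.

{
  "result": "The provided text is a product review written in English."
}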

Adding an LLM Chat Complete task in UI

To add an LLM Chat Complete task:

  1. In your workflow, select the (+) icon and add an LLM Chat Complete task.
  2. Choose the LLM provider and Model.
  3. In the Instructions field, set the ground rules or instructions to ensure the model responds only to specific queries. You can save these instructions as an AI prompt and add them here.
  4. (Optional) Click +Add variable to provide the variable path if your prompt template includes variables.
  5. (Optional) Click +Add message and choose the appropriate role and messages to complete the chat query.
  6. (Optional) Set the parameters Temperature, Stop words, TopP, and Token Limit.
  7. (Optional) Enable JSON output to format the LLM’s response as a structured JSON.

LLM Chat Complete Task - UI