LLM Chat Complete
A system task that completes a chat query against a configured LLM. It can be used to set precise instructions for the model's behavior so that responses stay on the intended objective.
Definitions
```json
{
  "name": "llm_chat_complete",
  "taskReferenceName": "llm_chat_complete_ref",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4",
    "instructions": "your-prompt-template",
    "messages": [
      {
        "role": "user",
        "message": "${workflow.input.text}"
      }
    ],
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 4,
    "stopWords": "and"
  },
  "type": "LLM_CHAT_COMPLETE"
}
```
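
The messages array can also carry a multi-turn conversation history. The sketch below is illustrative only: the `assistant` role value, the example question and answer, and the workflow input named `question` are assumptions, not values from this documentation; adapt them to your own workflow and verify the role names supported by your Conductor version.

```json
{
  "name": "llm_chat_complete",
  "taskReferenceName": "llm_chat_complete_ref",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4",
    "instructions": "your-prompt-template",
    "messages": [
      {
        "role": "user",
        "message": "What is the capital of France?"
      },
      {
        "role": "assistant",
        "message": "The capital of France is Paris."
      },
      {
        "role": "user",
        "message": "${workflow.input.question}"
      }
    ],
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 100
  },
  "type": "LLM_CHAT_COMPLETE"
}
```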
Input Parameters
Parameter | Description |
---|---|
llmProvider | Choose the required LLM provider. You can only choose providers for which you have access to at least one model. Note: If you haven’t configured your AI/LLM provider on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on how to integrate the LLM providers with the Orkes console and provide access to the required groups. |
model | Choose from the available language models for the chosen LLM provider. You can only choose models to which you have access. For example, if your LLM provider is Azure OpenAI and you’ve configured text-davinci-003 as the language model, you can choose it under this field. |
instructions | Set the ground rules/instructions for the chat so the model responds only to specific queries and does not deviate from the objective. Under this field, choose the AI prompt created. You can only use prompts to which you have access. Note: If you haven’t created an AI prompt for your language model, refer to this documentation on how to create AI prompts in Orkes Conductor and provide access to the required groups. |
messages | Choose the role and message for each entry used to complete the chat query, as shown in the multi-turn sketch following the task definition above. |
temperature | A parameter to control the randomness of the model’s output. Higher values, such as 1.0, make the output more random and creative, whereas lower values make it more deterministic and focused. For example, if you're using a text blurb as input and want to categorize it based on its content type, opt for a lower temperature setting. Conversely, if you're providing text inputs and intend to generate content such as emails or blogs, a higher temperature setting is advisable. A sketch contrasting both settings follows this table. |
stopWords | Provide the stop words to be omitted during the text generation process. In LLMs, stop words may be filtered out or given less importance during text generation to ensure that the generated text is coherent and contextually relevant. |
topP | Another parameter to control the randomness of the model’s output. It defines a probability threshold, and the model then samples only from the tokens whose cumulative probability falls within that threshold. For example, to complete the sentence “She walked into the room and saw a __,” the model restricts its choice to the most probable next words whose combined probability meets the topP value, ignoring the long tail of unlikely words. |
maxTokens | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token is approximately 4 characters of English text. |
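
As a rough illustration of the temperature and topP guidance above, the fragment below contrasts settings that might suit a deterministic classification query versus a more creative generation query. The labels `classificationSettings` and `creativeSettings` are not task parameters; they simply group the two hypothetical variants, and the exact values are assumptions rather than recommendations from Orkes.

```json
{
  "classificationSettings": {
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 50
  },
  "creativeSettings": {
    "temperature": 0.9,
    "topP": 0.9,
    "maxTokens": 500
  }
}
```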
Output Parameters
The task output contains the completed chat response generated by the LLM. A sketch of referencing this output from a downstream task follows.
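
A minimal sketch of wiring the completion into a later task, assuming the completed text is exposed under an output key such as `result` (verify the actual output structure in your environment); the `notify_user` task and its `chatResponse` parameter are hypothetical names used only for illustration:

```json
{
  "name": "notify_user",
  "taskReferenceName": "notify_user_ref",
  "type": "SIMPLE",
  "inputParameters": {
    "chatResponse": "${llm_chat_complete_ref.output.result}"
  }
}
```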
Examples
- UI
- JSON Example
- Add task type LLM Chat Complete.
- Choose the LLM provider, model, and prompt template.
- Provide the input parameters.
```json
{
  "name": "llm_chat_complete",
  "taskReferenceName": "llm_chat_complete_ref",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4",
    "instructions": "your-prompt-template",
    "messages": [
      {
        "role": "user",
        "message": "${workflow.input.text}"
      }
    ],
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 4,
    "stopWords": "and"
  },
  "type": "LLM_CHAT_COMPLETE"
}
```
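
For context, the task is typically embedded in the tasks array of a workflow definition. The sketch below wraps the example above in a minimal, hypothetical workflow; the workflow name, description, the `answer` output parameter, and the `result` output key are assumptions for illustration only.

```json
{
  "name": "chat_complete_workflow",
  "description": "Hypothetical workflow that answers a user query with an LLM",
  "version": 1,
  "schemaVersion": 2,
  "inputParameters": ["text"],
  "tasks": [
    {
      "name": "llm_chat_complete",
      "taskReferenceName": "llm_chat_complete_ref",
      "inputParameters": {
        "llmProvider": "openai",
        "model": "gpt-4",
        "instructions": "your-prompt-template",
        "messages": [
          {
            "role": "user",
            "message": "${workflow.input.text}"
          }
        ],
        "temperature": 0.1,
        "topP": 0.2,
        "maxTokens": 100
      },
      "type": "LLM_CHAT_COMPLETE"
    }
  ],
  "outputParameters": {
    "answer": "${llm_chat_complete_ref.output.result}"
  }
}
```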