LLM Index Text
The LLM Index Text task indexes the provided text into a vector space for efficient search, retrieval, and processing at a later stage.
It takes text input, generates embeddings from it using a specified language model, and stores these embeddings in a chosen vector database.
Prerequisites
- Integrate the required AI model with Orkes Conductor.
- Integrate the required vector database with Orkes Conductor.
Task parameters
Configure these parameters for the LLM Index Text task.
Parameter | Description | Required/Optional |
---|---|---|
inputParameters.vectorDB | The vector database to store the data. Note: If you haven’t configured the vector database on your Orkes Conductor cluster, navigate to the Integrations tab and configure your required provider. | Required. |
inputParameters.namespace | Namespaces are separate, isolated environments within the database used to manage and organize vector data. Choose from the available namespaces configured within the chosen vector database. The usage and terminology of the namespace field vary depending on the integration. | Required. |
inputParameters.index | The index in your vector database where the text or data will be stored. The terminology of the index field varies depending on the integration. | Required. |
inputParameters.embeddingModelProvider | The LLM provider for generating the embeddings. Note: If you haven’t configured your AI/LLM provider on your Orkes Conductor cluster, navigate to the Integrations tab and configure your required provider. | Required. |
inputParameters.embeddingModel | The embedding model provided by the selected LLM provider to generate the embeddings. | Required. |
inputParameters.text | The text to be indexed. | Required. |
inputParameters.docId | A unique ID to identify the document where the indexed text will be stored. | Optional. |
inputParameters.dimensions | The size of the embedding vector, that is, the number of elements it contains. | Optional. |
Caching parameters
You can cache the task outputs using the following parameters. Refer to Caching Task Outputs for a full guide.
Parameter | Description | Required/Optional |
---|---|---|
cacheConfig.ttlInSecond | The time to live in seconds, which is the duration for the output to be cached. | Required if using cacheConfig. |
cacheConfig.key | The cache key is a unique identifier for the cached output and must be constructed exclusively from the task’s input parameters. It can be a string concatenation that contains the task’s input keys, such as ${uri}-${method} or re_${uri}_${method}. | Required if using cacheConfig. |
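For example, a minimal cacheConfig block for this task might look like the following sketch; the key shown is an assumed concatenation of this task’s docId and index inputs, and the TTL is an arbitrary sample value:
"cacheConfig": {
  "ttlInSecond": 3600,
  "key": "index_${docId}_${index}"
}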
Schema parameters
You can enforce input/output validation for the task using the following parameters. Refer to Schema Validation for a full guide.
Parameter | Description | Required/Optional |
---|---|---|
taskDefinition.enforceSchema | Whether to enforce schema validation for task inputs/outputs. Set to true to enable validation. | Optional. |
taskDefinition.inputSchema | The name and type of the input schema to be associated with the task. | Required if enforceSchema is set to true. |
taskDefinition.outputSchema | The name and type of the output schema to be associated with the task. | Required if enforceSchema is set to true. |
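For example, a task definition with schema validation enabled might look like the following sketch; the schema names and version are hypothetical placeholders:
"taskDefinition": {
  "enforceSchema": true,
  "inputSchema": {
    "name": "llm_index_text_input",
    "type": "JSON",
    "version": 1
  },
  "outputSchema": {
    "name": "llm_index_text_output",
    "type": "JSON",
    "version": 1
  }
}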
Other generic parameters
Here are other parameters for configuring the task behavior.
Parameter | Description | Required/Optional |
---|---|---|
optional | Whether the task is optional. The default is false. If set to true, the workflow continues to the next task even if this task is in progress or fails. | Optional. |
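For example, the following sketch marks this task as optional so that the workflow proceeds even if indexing fails (inputParameters omitted for brevity):
{
  "name": "llm_index_text_task",
  "taskReferenceName": "llm_index_text_task_ref",
  "type": "LLM_INDEX_TEXT",
  "optional": true
}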
Task configuration
This is the task configuration for an LLM Index Text task.
{
"name": "llm_index_text_task",
"taskReferenceName": "llm_index_text_task_ref",
"inputParameters": {
"vectorDB": "pineconedb",
"namespace": "myNewModel",
"index": "test",
"embeddingModelProvider": "azure_openai",
"embeddingModel": "text-davinci-003",
"text": "${workflow.input.text}",
"docId": "XXXX",
"dimensions": "${workflow.input.dimensions}"
},
"type": "LLM_INDEX_TEXT"
}
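Because text and dimensions in this configuration are taken from workflow inputs, the workflow invocation must supply them. A sample input payload might look like this; the values are illustrative, with 1536 being the vector size produced by text-embedding-ada-002:
{
  "text": "Orkes Conductor is a platform for building distributed applications.",
  "dimensions": 1536
}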
Task output
The LLM Index Text task has no output parameters; it stores the indexed data in the specified vector database.
Adding an LLM Index Text task in UI
To add an LLM Index Text task:
- In your workflow, select the (+) icon and add an LLM Index Text task.
- Choose the Vector database, Namespace, Index, Embedding model provider, and Embedding model.
- Enter the Text to be indexed.
- (Optional) Enter a unique Doc ID to identify the document where the indexed text will be stored.