
LLM Generate Embeddings

The LLM Generate Embeddings task converts input text into a sequence of vectors, also known as embeddings. These embeddings are numerical representations of the input text and can be stored in a vector database for later retrieval. The task uses a language model (LLM) that has already been integrated with your cluster to generate the embeddings.

The LLM Generate Embeddings task passes the input text through the selected language model to produce embeddings. The task evaluates the specified parameters, such as the LLM provider and model, and generates a corresponding sequence of vectors for the provided text. The output is a JSON array containing these vectors, which can be used in subsequent tasks or stored for future use.

Prerequisites

Integrate the required LLM provider with your Orkes Conductor cluster. You can do this from the Integrations tab.

Task parameters

Configure these parameters for the LLM Generate Embeddings task.

| Parameter | Description | Required/Optional |
|-----------|-------------|-------------------|
| inputParameters.llmProvider | The integration name of the LLM provider integrated with your Conductor cluster. **Note:** If you haven't configured your AI/LLM provider on your Orkes Conductor cluster, go to the Integrations tab and configure your required provider. | Required. |
| inputParameters.model | The language model to use from the selected LLM provider. For example, if your LLM provider is Azure OpenAI and you've configured text-davinci-003 as the language model, you can select it here. | Required. |
| inputParameters.text | The text to be converted and stored as a vector. It can also be passed as a variable. | Required. |
| inputParameters.dimensions | The size of the vector, that is, the number of elements it contains. | Optional. |

Caching parameters

You can cache the task outputs using the following parameters. Refer to Caching Task Outputs for a full guide.

| Parameter | Description | Required/Optional |
|-----------|-------------|-------------------|
| cacheConfig.ttlInSecond | The time to live in seconds, which is the duration for the output to be cached. | Required if using cacheConfig. |
| cacheConfig.key | The cache key is a unique identifier for the cached output and must be constructed exclusively from the task's input parameters. It can be a string concatenation that contains the task's input keys, such as ${uri}-${method} or re_${uri}_${method}. | Required if using cacheConfig. |
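As a minimal sketch (not from the original page), cacheConfig sits at the task level in the workflow definition, alongside inputParameters. The TTL value and the key shown here are hypothetical; the key is built from this task's text input parameter:

```json
{
  "name": "llm_generate_embeddings_task",
  "taskReferenceName": "llm_generate_embeddings_task_ref",
  "inputParameters": {
    "llmProvider": "azure_openai",
    "model": "text-davinci-003",
    "text": "${workflow.input.text}"
  },
  "type": "LLM_GENERATE_EMBEDDINGS",
  "cacheConfig": {
    "ttlInSecond": 3600,
    "key": "embed_${text}"
  }
}
```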

Schema parameters

You can enforce input/output validation for the task using the following parameters. Refer to Schema Validation for a full guide.

| Parameter | Description | Required/Optional |
|-----------|-------------|-------------------|
| taskDefinition.enforceSchema | Whether to enforce schema validation for task inputs/outputs. Set to true to enable validation. | Optional. |
| taskDefinition.inputSchema | The name and type of the input schema to be associated with the task. | Required if enforceSchema is set to true. |
| taskDefinition.outputSchema | The name and type of the output schema to be associated with the task. | Required if enforceSchema is set to true. |
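For illustration, a taskDefinition block with schema enforcement might look like the sketch below. The schema names are hypothetical and must refer to schemas already registered on your cluster:

```json
{
  "name": "llm_generate_embeddings_task",
  "taskReferenceName": "llm_generate_embeddings_task_ref",
  "type": "LLM_GENERATE_EMBEDDINGS",
  "taskDefinition": {
    "enforceSchema": true,
    "inputSchema": { "name": "llm_embed_input", "type": "JSON" },
    "outputSchema": { "name": "llm_embed_output", "type": "JSON" }
  }
}
```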

Other generic parameters

Here are other parameters for configuring the task behavior.

| Parameter | Description | Required/Optional |
|-----------|-------------|-------------------|
| optional | Whether the task is optional. The default is false. If set to true, the workflow continues to the next task even if this task is in progress or fails. | Optional. |
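For example, marking the task optional is a one-line addition to the task configuration:

```json
{
  "name": "llm_generate_embeddings_task",
  "taskReferenceName": "llm_generate_embeddings_task_ref",
  "type": "LLM_GENERATE_EMBEDDINGS",
  "optional": true
}
```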

Task configuration

This is the task configuration for an LLM Generate Embeddings task.

```json
{
  "name": "llm_generate_embeddings_task",
  "taskReferenceName": "llm_generate_embeddings_task_ref",
  "inputParameters": {
    "llmProvider": "azure_openai",
    "model": "text-davinci-003",
    "text": "${workflow.input.text}",
    "dimensions": "${workflow.input.dimensions}"
  },
  "type": "LLM_GENERATE_EMBEDDINGS"
}
```
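To show where this configuration sits, here is a minimal workflow definition sketch that wraps the task. The workflow name and its inputParameters list are hypothetical; only the task object follows the documented format:

```json
{
  "name": "generate_embeddings_workflow",
  "version": 1,
  "inputParameters": ["text", "dimensions"],
  "tasks": [
    {
      "name": "llm_generate_embeddings_task",
      "taskReferenceName": "llm_generate_embeddings_task_ref",
      "inputParameters": {
        "llmProvider": "azure_openai",
        "model": "text-davinci-003",
        "text": "${workflow.input.text}",
        "dimensions": "${workflow.input.dimensions}"
      },
      "type": "LLM_GENERATE_EMBEDDINGS"
    }
  ],
  "schemaVersion": 2
}
```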

Task output

The LLM Generate Embeddings task returns the following parameter.

| Parameter | Description |
|-----------|-------------|
| result | A JSON array containing the vectors of the indexed data. |
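As an illustration only (values invented and heavily truncated), the output might look like the sketch below; the actual length of the vector depends on the chosen model and the dimensions parameter, and real embeddings typically contain hundreds or thousands of elements:

```json
{
  "result": [0.0132, -0.0281, 0.0457, 0.0094]
}
```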

Adding an LLM Generate Embeddings task in UI

To add an LLM Generate Embeddings task:

  1. In your workflow, select the (+) icon and add an LLM Generate Embeddings task.
  2. Choose the LLM provider and Model.
  3. In Text, enter the text to be converted and stored as a vector.

(Screenshot: LLM Generate Embeddings task configuration in the UI.)