
LLM Generate Embeddings

A system task that generates embeddings from the provided input text. Embeddings are vector representations of the input, which can be stored in a vector database for later retrieval. You can use a previously integrated model to generate these embeddings.
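For intuition, here is a minimal sketch (not part of Conductor) of how stored embedding vectors can later be compared for retrieval, using cosine similarity. The document names and vectors below are hypothetical and far shorter than real model output:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical stored embeddings (real vectors have hundreds of dimensions).
stored = {
    "doc_a": [0.1, 0.9, 0.0],
    "doc_b": [0.8, 0.1, 0.1],
}
query = [0.2, 0.8, 0.0]

# Rank stored vectors by similarity to the query vector.
best = max(stored, key=lambda k: cosine_similarity(query, stored[k]))
print(best)  # doc_a is closest to the query
```

Real vector databases use the same idea at scale, typically with approximate nearest-neighbor indexes rather than a linear scan.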

Definitions

{
  "tasks": [
    {
      "name": "llm_generate_embeddings_task",
      "taskReferenceName": "llm_generate_embeddings_task_ref",
      "inputParameters": {
        "llmProvider": "azure_openai",
        "model": "text-davinci-003",
        "text": "${workflow.input.text}"
      },
      "type": "LLM_GENERATE_EMBEDDINGS"
    }
  ],
  "inputParameters": [
    "text"
  ]
}

Input Parameters

llmProvider — The LLM provider to use. You can only choose providers for which you have access to at least one model.

Note: If you haven't configured your AI/LLM provider on your Orkes console, navigate to the Integrations tab and configure the required provider. Refer to the documentation on integrating LLM providers with the Orkes console and providing access to the required groups.

model — The language model to use from the chosen LLM provider. You can only choose models to which you have access. For example, if your LLM provider is Azure OpenAI and you have configured text-davinci-003 as the language model, you can choose it in this field.

text — The text to be converted and stored as a vector. The text can also be passed as a parameter to the workflow.
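For example, when text is wired to ${workflow.input.text} as in the definition above, the value can be supplied in the workflow's start payload. The payload below is illustrative, not taken from the original document:

```json
{
  "text": "Conductor is a workflow orchestration platform."
}
```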

Output Parameters

The task output is a JSON array containing the vectors of the indexed data.
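As a purely illustrative sketch (real embeddings typically have hundreds or thousands of dimensions, and the exact output shape depends on the model and provider), a returned vector might look like:

```json
[0.0123, -0.0456, 0.0789, 0.0021]
```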

Examples



  1. Add task type LLM Generate Embeddings.
  2. Choose the LLM provider and language model.
  3. Provide the text to be embedded.
