LLM Store Embeddings
The LLM Store Embeddings task stores embeddings generated by the LLM Generate Embeddings task in a vector database. The stored embeddings serve as a repository of information that the LLM Get Embeddings task can later query for fast retrieval of related data.
The task takes the embeddings produced by a preceding LLM Generate Embeddings task and writes them to the specified vector database. To configure it, you specify the vector database provider, index, namespace, and the details of the embedding model used to generate the embeddings, so the embeddings remain organized and accessible for future retrieval operations.
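For context, a minimal workflow sketch chaining the two tasks might look like the following, with the store task placed immediately after the generate task whose embeddings it stores. The generate task's parameter names and all provider, index, and namespace values here are illustrative assumptions; substitute the integrations configured on your own cluster.
```json
{
  "name": "embed_and_store_workflow",
  "version": 1,
  "tasks": [
    {
      "name": "llm_generate_embeddings",
      "taskReferenceName": "llm_generate_embeddings_ref",
      "inputParameters": {
        "llmProvider": "azure_openai",
        "model": "text-embedding-ada-002",
        "text": "${workflow.input.text}"
      },
      "type": "LLM_GENERATE_EMBEDDINGS"
    },
    {
      "name": "llm_store_embeddings",
      "taskReferenceName": "llm_store_embeddings_ref",
      "inputParameters": {
        "vectorDB": "pineconedb",
        "index": "test",
        "namespace": "myNewModel",
        "embeddingModelProvider": "azure_openai",
        "embeddingModel": "text-embedding-ada-002"
      },
      "type": "LLM_STORE_EMBEDDINGS"
    }
  ]
}
```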
Task parameters
Configure these parameters for the LLM Store Embeddings task.
Parameter | Description | Required/Optional |
---|---|---|
inputParameters.vectorDB | The vector database in which to store the embeddings. Note: If you haven’t configured a vector database on your Orkes Conductor cluster, navigate to the Integrations tab and configure your required provider. Refer to the documentation on how to integrate vector databases with Orkes Conductor. | Required. |
inputParameters.index | The index in your vector database where the embeddings will be stored. The terminology of the index field varies depending on the integration. | Required. |
inputParameters.namespace | Namespaces are isolated environments within the database used to manage and organize vector data. Choose from the namespaces configured within the chosen vector database. The usage and terminology of the namespace field vary depending on the integration. | Required. |
inputParameters.embeddingModelProvider | The LLM provider used to generate the embeddings. Note: If you haven’t configured your AI/LLM provider on your Orkes Conductor cluster, navigate to the Integrations tab and configure your required provider. Refer to the documentation on how to integrate LLM providers with Orkes Conductor. | Required. |
inputParameters.embeddingModel | The embedding model, provided by the selected LLM provider, used to generate the embeddings. | Required. |
inputParameters.id | An arbitrary vector ID to identify the vector in the database. | Optional. |
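Any of these parameters can also be supplied dynamically using Conductor's `${...}` expression syntax rather than hard-coded values. A minimal sketch, assuming the namespace and vector ID arrive as workflow inputs (the input names are hypothetical):
```json
"inputParameters": {
  "vectorDB": "pineconedb",
  "index": "test",
  "namespace": "${workflow.input.namespace}",
  "embeddingModelProvider": "azure_openai",
  "embeddingModel": "text-embedding-ada-002",
  "id": "${workflow.input.vectorId}"
}
```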
Task configuration
This is the task configuration for an LLM Store Embeddings task.
```json
{
  "name": "llm_store_embeddings",
  "taskReferenceName": "llm_store_embeddings_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "index": "test",
    "namespace": "myNewModel",
    "embeddingModelProvider": "azure_openai",
    "embeddingModel": "text-embedding-ada-002",
    "id": "xxxxxx"
  },
  "type": "LLM_STORE_EMBEDDINGS"
}
```
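As with any Conductor task, this configuration goes inside the tasks array of a workflow definition; see the chained sketch above for how it typically follows an LLM Generate Embeddings task.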
Task output
The LLM Store Embeddings task produces no output; it stores the embeddings in the specified vector database.
Adding an LLM Store Embeddings task in the UI
To add an LLM Store Embeddings task:
- In your workflow, select the (+) icon and add an LLM Store Embeddings task.
- Choose the Vector database, Index, and Namespace where the embeddings will be stored.
- Choose the Embedding model provider and Embedding model used to generate the embeddings.
- (Optional) In Vector ID, enter an arbitrary ID to identify the vector in the database.