LLM Store Embeddings
A system task that stores the embeddings produced by the LLM Generate Embeddings task in a vector database. The stored embeddings serve as a repository of information that the LLM Get Embeddings task can later query for quick retrieval of related data.
Definitions
```json
{
  "name": "llm_store_embeddings",
  "taskReferenceName": "llm_store_embeddings_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "index": "test",
    "namespace": "myNewModel",
    "embeddingModelProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "id": "xxxxxx"
  },
  "type": "LLM_STORE_EMBEDDINGS"
}
```
Input Parameters
| Parameter | Description |
|---|---|
| vectorDB | The vector database in which the data is to be stored. Note: If you haven't configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on how to integrate vector databases with the Orkes console. |
| index | The index in your vector database where the text or data is to be stored. Note: For Weaviate integrations, this field refers to the class name; for Pinecone integrations, it refers to the index name. |
| namespace | The namespace, chosen from those configured in the selected vector database. Namespaces are isolated environments within the database for managing and organizing vector data effectively. Note: The namespace field applies only to Pinecone integrations and is not applicable to Weaviate integrations. |
| embeddingModelProvider | The LLM provider used for embedding. Note: If you haven't configured your AI/LLM provider on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on how to integrate LLM providers with the Orkes console. |
| embeddingModel | The embedding model, chosen from those available for the selected LLM provider. |
| id | (Optional) The vector ID under which the embeddings are stored. |
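Input parameters can also be supplied dynamically using Conductor's expression syntax instead of hard-coded values. A minimal sketch, assuming the workflow is started with an input named vectorId (that input name is an assumption, not part of the task definition):

```json
{
  "name": "llm_store_embeddings",
  "taskReferenceName": "llm_store_embeddings_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "index": "test",
    "namespace": "myNewModel",
    "embeddingModelProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "id": "${workflow.input.vectorId}"
  },
  "type": "LLM_STORE_EMBEDDINGS"
}
```

Here `${workflow.input.vectorId}` is resolved at runtime from the workflow's input payload, so the same task definition can store vectors under different IDs across executions.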
Examples
UI
1. Add task type LLM Store Embeddings.
2. Choose the vector database for storing the embeddings.
3. Provide the input parameters.

JSON Example
```json
{
  "name": "llm_store_embeddings",
  "taskReferenceName": "llm_store_embeddings_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "index": "test",
    "namespace": "myNewModel",
    "embeddingModelProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "id": "xxxxxx"
  },
  "type": "LLM_STORE_EMBEDDINGS"
}
```
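In a full workflow, this task typically follows an LLM Generate Embeddings task. The sketch below shows one plausible arrangement of the two tasks; the LLM Generate Embeddings input parameters shown (llmProvider, model, text) and the implicit handoff of the generated vectors between the tasks are assumptions to illustrate the sequencing, not a definitive definition:

```json
{
  "name": "embedding_pipeline",
  "version": 1,
  "tasks": [
    {
      "name": "llm_generate_embeddings",
      "taskReferenceName": "llm_generate_embeddings_ref",
      "inputParameters": {
        "llmProvider": "azure_openai",
        "model": "text-davinci-003",
        "text": "${workflow.input.text}"
      },
      "type": "LLM_GENERATE_EMBEDDINGS"
    },
    {
      "name": "llm_store_embeddings",
      "taskReferenceName": "llm_store_embeddings_ref",
      "inputParameters": {
        "vectorDB": "pineconedb",
        "index": "test",
        "namespace": "myNewModel",
        "embeddingModelProvider": "azure_openai",
        "embeddingModel": "text-davinci-003",
        "id": "xxxxxx"
      },
      "type": "LLM_STORE_EMBEDDINGS"
    }
  ]
}
```

Keeping the embeddingModelProvider and embeddingModel consistent between the generate and store steps ensures the stored vectors match the dimensionality the index expects.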