LLM Index Text

A system task that indexes the provided text into a vector database so it can be efficiently searched, retrieved, and processed later.

Definitions

```json
{
  "name": "llm_index_text_task",
  "taskReferenceName": "llm_index_text_task_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "namespace": "myNewModel",
    "index": "test",
    "embeddingModelProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "text": "${workflow.input.text}",
    "docId": "XXXX"
  },
  "type": "LLM_INDEX_TEXT"
}
```
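The same definition can be assembled programmatically before registering a workflow. The helper below is purely illustrative (it is not part of any Orkes SDK); it simply builds the task definition as a plain dict with the fields shown above:

```python
def llm_index_text_task(vector_db, namespace, index, provider, model, text, doc_id):
    """Build an LLM_INDEX_TEXT task definition as a plain dict.

    Illustrative helper only; the field names mirror the JSON definition
    in this document, but the function itself is not an Orkes API.
    """
    return {
        "name": "llm_index_text_task",
        "taskReferenceName": "llm_index_text_task_ref",
        "inputParameters": {
            "vectorDB": vector_db,
            "namespace": namespace,
            "index": index,
            "embeddingModelProvider": provider,
            "embeddingModel": model,
            "text": text,
            "docId": doc_id,
        },
        "type": "LLM_INDEX_TEXT",
    }

# Reproduces the example definition above.
task = llm_index_text_task(
    "pineconedb", "myNewModel", "test",
    "azure_openai", "text-davinci-003",
    "${workflow.input.text}", "XXXX",
)
```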

Input Parameters

| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database. **Note:** If you haven’t configured a vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on how to integrate vector databases with the Orkes console. |
| namespace | Choose from the available namespaces configured within the chosen vector database. Namespaces are separate, isolated environments within the database for managing and organizing vector data effectively. **Note:** The namespace field applies only to the Pinecone integration and is not applicable to the Weaviate integration. |
| index | Choose the index in your vector database where the indexed text or data is to be stored. **Note:** For the Weaviate integration, this field refers to the class name, while in the Pinecone integration, it denotes the index name itself. |
| embeddingModelProvider | Choose the required LLM provider for embedding. If you haven’t configured an AI/LLM provider on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on how to integrate LLM providers with the Orkes console. |
| embeddingModel | Choose from the available language models for the chosen LLM provider. |
| text | Provide the text to be indexed. |
| docId | Provide the ID of the document where the indexed text is to be stored. |
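The `text` parameter in the example uses Conductor's `${...}` expression syntax to pull a value from the workflow input at run time. A minimal sketch of how such a reference resolves (illustrative only; Conductor's actual expression evaluator is richer):

```python
import re

def resolve(expr, context):
    """Replace ${dotted.path} references with values looked up in context.

    Toy resolver for illustration; Conductor's real evaluator supports
    task outputs and more complex expressions.
    """
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, expr)

# Workflow input as it might arrive at execution time.
context = {"workflow": {"input": {"text": "Indexing is fun."}}}
resolve("${workflow.input.text}", context)
```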

Examples



  1. Add task type LLM Index Text.
  2. Choose the vector database and the LLM provider for embedding the text.
  3. Provide the input parameters.
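Putting the steps together, the task can be embedded in a workflow definition like the sketch below. The workflow name, version, and schema version are illustrative placeholders, not values taken from this document:

```json
{
  "name": "index_text_workflow",
  "version": 1,
  "schemaVersion": 2,
  "inputParameters": ["text"],
  "tasks": [
    {
      "name": "llm_index_text_task",
      "taskReferenceName": "llm_index_text_task_ref",
      "inputParameters": {
        "vectorDB": "pineconedb",
        "namespace": "myNewModel",
        "index": "test",
        "embeddingModelProvider": "azure_openai",
        "embeddingModel": "text-davinci-003",
        "text": "${workflow.input.text}",
        "docId": "XXXX"
      },
      "type": "LLM_INDEX_TEXT"
    }
  ]
}
```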

LLM Index Text Task