LLM Search Index

The LLM Search Index task searches a vector database (a repository of vector embeddings of already processed and indexed documents) for the closest matches to a query. This task is typically used in scenarios where you need to retrieve or manipulate data stored in a database using a natural language query.

The LLM Search Index task takes a query, which can be a question, statement, or request made in natural language. The query is converted into a vector representation, which is then used to search the vector database. The task returns a list of documents whose vectors are similar to the query vector, ranked by degree of similarity.
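The retrieval step above can be sketched in plain Python. This is a conceptual illustration only, not the Conductor implementation: the documents, embeddings, and cosine-similarity ranking below are invented toy values, whereas a real deployment uses the configured embedding model and vector database.

```python
import math

def cosine_similarity(a, b):
    # Degree of likeness between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search_index(query_vector, indexed_docs, top_k=2):
    # Rank indexed documents by similarity to the query vector,
    # mirroring what the vector database does for this task.
    scored = [
        {"docId": doc_id, "score": cosine_similarity(query_vector, vec), "text": text}
        for doc_id, (vec, text) in indexed_docs.items()
    ]
    scored.sort(key=lambda d: d["score"], reverse=True)
    return scored[:top_k]

# Toy index: docId -> (embedding, text). Real embeddings have hundreds of dimensions.
docs = {
    "doc-1": ([0.9, 0.1, 0.0], "LLMs are large language models."),
    "doc-2": ([0.0, 0.2, 0.9], "Vector databases store embeddings."),
}
results = search_index([1.0, 0.0, 0.0], docs)
```

Here the query vector is closest to doc-1, so it comes back first with the highest score.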

Task parameters

Configure these parameters for the LLM Search Index task.

• inputParameters.vectorDB (Required) — The vector database to retrieve the data from.
  Note: If you haven’t configured a vector database on your Orkes Conductor cluster, navigate to the Integrations tab and configure your required provider. Refer to the documentation on how to integrate vector databases with Orkes Conductor.
• inputParameters.index (Required) — The index in your vector database where the text or data is stored.
  The terminology of the index field varies depending on the integration:
    • For Weaviate, the index field indicates the class name.
    • For other integrations, it denotes the index name.
• inputParameters.namespace (Required) — Namespaces are separate, isolated environments within the database used to manage and organize vector data. Choose from the namespaces configured within the chosen vector database.
  The usage and terminology of the namespace field vary depending on the integration:
    • For Pinecone, the namespace field is applicable.
    • For Weaviate, the namespace field is not applicable.
    • For MongoDB, the namespace field is referred to as a collection.
    • For Postgres, the namespace field is referred to as a table.
• inputParameters.embeddingModelProvider (Required) — The LLM provider for generating the embeddings.
  Note: If you haven’t configured your AI/LLM provider on your Orkes Conductor cluster, navigate to the Integrations tab and configure your required provider. Refer to the documentation on how to integrate LLM providers with Orkes Conductor.
• inputParameters.embeddingModel (Required) — The embedding model provided by the selected LLM provider.
• inputParameters.query (Required) — The search query: a question, statement, or request made in natural language that is used to search, retrieve, or manipulate data stored in the vector database.

Task configuration

This is the task configuration for an LLM Search Index task.

```json
{
  "name": "llm_search_index_task",
  "taskReferenceName": "llm_search_index_task_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "namespace": "myNewModel",
    "index": "test",
    "llmProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "query": "What is an LLM model?"
  },
  "type": "LLM_SEARCH_INDEX"
}
```
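In practice this task configuration sits inside a larger workflow definition. The sketch below builds that JSON programmatically; the workflow name and version are invented for illustration and are not part of this reference.

```python
import json

# The LLM Search Index task configuration shown above, as a Python dict.
llm_search_index_task = {
    "name": "llm_search_index_task",
    "taskReferenceName": "llm_search_index_task_ref",
    "inputParameters": {
        "vectorDB": "pineconedb",
        "namespace": "myNewModel",
        "index": "test",
        "llmProvider": "azure_openai",
        "embeddingModel": "text-davinci-003",
        "query": "What is an LLM model?",
    },
    "type": "LLM_SEARCH_INDEX",
}

# Hypothetical workflow wrapper; the name and version here are illustrative only.
workflow = {
    "name": "document_search_workflow",
    "version": 1,
    "tasks": [llm_search_index_task],
}

payload = json.dumps(workflow, indent=2)
```

The serialized payload can then be registered with your Conductor cluster through whatever client or API you normally use to create workflow definitions.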

Task output

The LLM Search Index task returns the following parameters.

• result — A JSON array containing the results of the query.
• score — A value quantifying the degree of likeness between a specific item and the query vector, used to rank and order results. Higher scores denote stronger relevance to the query vector.
• metadata — An object containing additional metadata related to the retrieved document.
• docId — The unique identifier of the queried document.
• parentDocId — An identifier that denotes a parent document in hierarchical or relational data structures.
• text — The actual content retrieved.
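A downstream task would typically read these fields from the output. The sketch below shows one way to pick the best match; the sample output values are invented for illustration, though the field names follow the table above.

```python
# Hypothetical task output, following the fields described above.
task_output = {
    "result": [
        {"score": 0.87, "docId": "doc-42", "parentDocId": "doc-40",
         "metadata": {"source": "faq.md"},
         "text": "An LLM is a large language model."},
        {"score": 0.55, "docId": "doc-7", "parentDocId": None,
         "metadata": {"source": "intro.md"},
         "text": "Vector databases store embeddings."},
    ]
}

# Higher scores denote stronger relevance, so take the highest-scoring entry.
best = max(task_output["result"], key=lambda r: r["score"])
best_text = best["text"]
```

In a Conductor workflow, the same selection could be done with an expression over the task's output reference rather than standalone code.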

Adding an LLM Search Index task in UI

To add an LLM Search Index task:

  1. In your workflow, select the (+) icon and add an LLM Search Index task.
  2. Choose the Vector database, Index, Namespace, Embedding model provider, and Embedding model.
  3. In Query, enter the text to be queried.

[Image: LLM Search Index task in the UI]