
LLM Search Index

A system task that searches a vector database (a repository of embeddings for documents that have already been processed and indexed) and returns the closest matches. The input is a query, typically a question, statement, or request made in natural language, which is used to search and retrieve the stored data.

For example, in a recommendation system, a user might issue a query to find products similar to one they've recently purchased. The query vector would represent the purchased product, and the database would return a list of products with similar vectors, which are likely to be related or recommended to the user.
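Under the hood, this kind of search ranks stored vectors by their closeness to the query vector. A minimal sketch of the idea in plain Python, using cosine similarity (the product names and three-dimensional "embeddings" below are made up for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Score in [-1, 1]; higher means the vectors point in more similar directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings for a purchased product and candidate recommendations.
query_vector = [0.9, 0.1, 0.3]          # the product the user just bought
catalog = {
    "running shoes": [0.88, 0.12, 0.28],
    "trail socks":   [0.70, 0.30, 0.40],
    "coffee maker":  [0.05, 0.95, 0.10],
}

# Rank candidates by similarity to the query vector, best match first.
ranked = sorted(catalog.items(),
                key=lambda item: cosine_similarity(query_vector, item[1]),
                reverse=True)
print([name for name, _ in ranked])
# → ['running shoes', 'trail socks', 'coffee maker']
```

A production vector database performs the same ranking with approximate nearest-neighbor indexes rather than a linear scan, but the score it returns plays the same role.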

Definitions

{
  "name": "llm_search_index_task",
  "taskReferenceName": "llm_search_index_task_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "namespace": "myNewModel",
    "index": "test",
    "llmProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "query": "What is an LLM model?"
  },
  "type": "LLM_SEARCH_INDEX"
}
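The same definition can be assembled programmatically before it is registered as part of a workflow. A minimal sketch in Python; the dict simply mirrors the JSON above, and how you submit it (for example, through the Conductor metadata API or an SDK) depends on your setup:

```python
def llm_search_index_task(vector_db, namespace, index, llm_provider,
                          embedding_model, query,
                          task_ref="llm_search_index_task_ref"):
    """Build an LLM_SEARCH_INDEX task definition as a plain dict."""
    return {
        "name": "llm_search_index_task",
        "taskReferenceName": task_ref,
        "inputParameters": {
            "vectorDB": vector_db,
            "namespace": namespace,
            "index": index,
            "llmProvider": llm_provider,
            "embeddingModel": embedding_model,
            "query": query,
        },
        "type": "LLM_SEARCH_INDEX",
    }

# Reproduces the definition shown above.
task = llm_search_index_task("pineconedb", "myNewModel", "test",
                             "azure_openai", "text-davinci-003",
                             "What is an LLM model?")
```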

Input Parameters

vectorDB
Choose the required vector database.

Note: If you haven't configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to the documentation on how to integrate vector databases with the Orkes console.

namespace
Choose from the available namespaces configured within the chosen vector database.

Namespaces are separate, isolated environments within the database used to manage and organize vector data effectively.

Note: The namespace field has different names and applicability based on the integration:
  • For the Pinecone integration, the namespace field is applicable.
  • For the Weaviate integration, the namespace field is not applicable.
  • For the MongoDB integration, the namespace field is referred to as "Collection" in MongoDB.
  • For the Postgres integration, the namespace field is referred to as "Table" in Postgres.

index
Choose the index in your vector database where the indexed text or data is stored.

Note: For the Weaviate integration, this field refers to the class name, while for other integrations, it denotes the index name.

llmProvider
Choose the required LLM provider for generating the query embedding.

Note: If you haven't configured your AI/LLM provider on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to the documentation on how to integrate LLM providers with the Orkes console.

embeddingModel
Choose from the available embedding models provided by the selected LLM provider.

query
Provide your search query. A query typically refers to a question, statement, or request made in natural language that is used to search and retrieve data stored in the database.

Output Parameters

result
A JSON array containing the results of the query. Each entry includes the fields below.

score
A value that quantifies the similarity between a stored item and the query vector, used to rank and order the results. Higher scores denote a stronger resemblance or relevance of a data point to the query vector.

metadata
An object containing additional metadata related to the retrieved document.

docId
The unique identifier of the retrieved document.

parentDocId
The identifier of a parent document, used when data is organized in a hierarchical or relational structure.

text
The actual content of the retrieved document.
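Downstream tasks typically consume the result array by filtering on the score. A short sketch of that pattern; the sample entries below are invented for illustration, but their fields follow the output table above:

```python
# Illustrative entries shaped like the task's "result" array (values are made up).
results = [
    {"score": 0.92, "metadata": {}, "docId": "doc-1",
     "parentDocId": "root-1", "text": "An LLM is a large language model..."},
    {"score": 0.61, "metadata": {}, "docId": "doc-7",
     "parentDocId": "root-3", "text": "Vector databases store embeddings..."},
    {"score": 0.18, "metadata": {}, "docId": "doc-9",
     "parentDocId": "root-9", "text": "Unrelated release notes."},
]

def top_matches(results, min_score=0.5):
    """Keep matches at or above the threshold, best score first."""
    kept = [r for r in results if r["score"] >= min_score]
    return sorted(kept, key=lambda r: r["score"], reverse=True)

matches = top_matches(results)
print([m["docId"] for m in matches])
# → ['doc-1', 'doc-7']
```

The right threshold depends on the embedding model and data; a cutoff that works for one model's score distribution may discard good matches under another.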

Examples



  1. Add the task type LLM Search Index.
  2. Choose the vector database and LLM provider.
  3. Provide the search query.

LLM Search Index Task