LLM Search Index
A system task that searches a vector database, a repository of vector embeddings of previously processed and indexed documents, for the closest matches to a query. The query is typically a question, statement, or request made in natural language and is used to search and retrieve data stored in the database.
For example, in a recommendation system, a user might issue a query to find products similar to one they've recently purchased. The query vector would represent the purchased product, and the database would return a list of products with similar vectors, which are likely to be related or recommended to the user.
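The nearest-neighbor lookup behind this kind of search can be sketched in a few lines of plain Python. This is only an illustration: the product names and tiny 4-dimensional embeddings below are made up, whereas real embeddings come from the configured embedding model and have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for already-indexed products.
index = {
    "running_shoes": [0.9, 0.1, 0.0, 0.2],
    "trail_shoes": [0.8, 0.2, 0.1, 0.3],
    "coffee_maker": [0.0, 0.9, 0.8, 0.1],
}

def search(query_vec, index, top_k=2):
    """Rank indexed items by similarity to the query vector, keep the top matches."""
    scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in index.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Query vector representing the recently purchased product.
query = [0.85, 0.15, 0.05, 0.25]
results = search(query, index)
```

Here the two shoe products score highest because their vectors point in nearly the same direction as the query vector, which is exactly how the task ranks its results by score.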
Definitions
```json
{
  "name": "llm_search_index_task",
  "taskReferenceName": "llm_search_index_task_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "namespace": "myNewModel",
    "index": "test",
    "llmProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "query": "What is an LLM model?"
  },
  "type": "LLM_SEARCH_INDEX"
}
```
Input Parameters
Parameter | Description |
---|---|
vectorDB | Choose the required vector database. Note: If you haven't configured a vector database in your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to the documentation on integrating vector databases with the Orkes console. |
namespace | Choose from the available namespaces configured within the chosen vector database. Namespaces are separate, isolated environments within the database used to manage and organize vector data effectively. Note: The name and applicability of the namespace field vary by integration. |
index | Choose the index in your vector database where the indexed text or data is stored. Note: For Weaviate integration, this field refers to the class name, while for other integrations, it denotes the index name. |
llmProvider | Choose the required LLM provider for generating the query embedding. Note: If you haven't configured your AI/LLM provider in your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to the documentation on integrating LLM providers with the Orkes console. |
embeddingModel | Choose from the available embedding models provided by the selected LLM provider. |
query | Provide your search query. This is typically a question, statement, or request made in natural language that is used to search and retrieve data stored in the database. |
Output Parameters
Parameter | Description |
---|---|
result | A JSON array containing the results of the query. Each entry includes the fields below. |
score | A value quantifying the similarity between a retrieved item and the query vector, used to rank and order results. Higher scores indicate a stronger resemblance or relevance to the query. |
metadata | An object containing additional metadata related to the retrieved document. |
docId | The unique identifier of the retrieved document. |
parentDocId | The identifier of a parent document, where hierarchical or relational document structures are used. |
text | The actual content of the retrieved document. |
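Putting those fields together, an illustrative output might look like the fragment below. All values are made up for illustration, and the exact shape can vary with the vector database integration.

```json
{
  "result": [
    {
      "score": 0.92,
      "metadata": {},
      "docId": "doc_001",
      "parentDocId": "doc_000",
      "text": "Text content of the best-matching indexed document."
    }
  ]
}
```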
Examples
UI

1. Add task type LLM Search Index.
2. Choose the vector database and the LLM provider.
3. Provide the search query.

JSON
```json
{
  "name": "llm_search_index_task",
  "taskReferenceName": "llm_search_index_task_ref",
  "inputParameters": {
    "vectorDB": "pineconedb",
    "namespace": "myNewModel",
    "index": "test",
    "llmProvider": "azure_openai",
    "embeddingModel": "text-davinci-003",
    "query": "What is an LLM model?"
  },
  "type": "LLM_SEARCH_INDEX"
}
```