Agentic Research Assistant

note

This template is only available on the Developer Playground in Launch Pad.

This agentic research template generates either a literature review or a research gap report based on the user query. Powered by LLMs, the workflow identifies the user’s intent from the query and compiles the latest research findings with search grounding.

An agentic research assistant can speed up the discovery process in academic research. Within minutes, researchers can gain a deeper understanding of a specific field, freeing up time to design their research thesis.

This template serves as a quickstart for building agentic workflows. You can use this as a basis to understand the design patterns involved in an agentic workflow and extend it with your own data sources and control flow.

Conductor features used

This template utilizes the following Conductor features:

  • LLM Text Complete tasks
  • Set Variable tasks
  • Fork/Join and Dynamic Fork tasks
  • Switch tasks
  • Do While and Inline tasks
  • Terminate tasks
  • Custom Worker tasks

How to use the template

  1. Import the template from Launch Pad.
  2. Configure AI/LLM integrations and models.
  3. Configure AI prompts.
  4. Set up a worker for report generation.
  5. Run workflow.
Prerequisites

The agentic research template uses AI models. Ensure that you have API keys for the following model providers:

  • OpenAI
  • Perplexity
  • Anthropic

Step 1: Import the template

The agentic research template is only available on the Developer Playground in Launch Pad.

To import the template:

  1. Log in to Developer Playground.
  2. Go to Launch Pad on the left navigation menu.
  3. Select Agentic Research > View Template.

The agentic_research workflow is now imported and ready for use.

Agentic Research Workflow.

Understand the workflow logic

This section explains the workflow logic and how to execute it.

AI components:

There are five LLM Text Complete tasks in the workflow. Each task serves either as an agentic decision-making node or as a content-generation step:

  • determine_research_task — Determines which task to conduct in the workflow (research-gap, literature-review, both, or none) based on the user’s research question.
  • break_into_subtopics — Identifies research sub-topics based on the user’s research question.
  • lit_review_task (used in a Dynamic Fork) — Conducts literature review research.
  • research_gap_task (used in a Dynamic Fork) — Conducts a research gap analysis.
  • compile_subtopics_response — Synthesizes a compiled report from the lit_review_task and/or research_gap_task outputs.

Workflow inputs:

  • question—the research question (e.g., “What are the latest findings in neutron stars and what is still unknown?”)
  • filename—the file name for the generated .pdf report (e.g., “myReport.pdf”)

Workflow logic:

The workflow begins with a Set Variable task that initializes global workflow variables for convenient retrieval later:

  • answer—the final report content, which is an empty string for now.
  • question—the user’s initial query.
  • lit_reviews—the research from the lit_review_task (an empty array for now).
  • research_gaps—the research from the research_gap_task (an empty array for now).
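
As a sketch, the initial state set by this task can be pictured as a simple map. Variable names match the documentation above; the values shown are illustrative, with the question mirroring the example input:

```python
# Illustrative sketch of the workflow variables initialized by the first
# Set Variable task. Names match the documentation; values are examples.
initial_variables = {
    "answer": "",         # final report content, filled in after synthesis
    "question": "What are the latest findings in neutron stars and what is still unknown?",
    "lit_reviews": [],    # appended to by lit_review_task on each iteration
    "research_gaps": [],  # appended to by research_gap_task on each iteration
}
```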

With the variables declared, the workflow uses separate LLM Text Complete tasks to identify which task(s) to perform next and to break the user’s query into research sub-topics. A Fork/Join runs these tasks in parallel.

Another Set Variable task is used to declare additional global workflow variables based on the LLM evaluation:

  • subtopics—the list of sub-topics that will be used for the research.
  • decision—the selected task(s) that the workflow will carry out (research-gap, literature-review, both, or none).

If the decision is none, meaning the user query is irrelevant, the workflow will terminate with a Terminate task.

Otherwise, the workflow continues to a Switch task, which routes to the relevant Set Variable task that prepares the exact configuration of the task(s) to be carried out later. This routing step prepares the task configuration required as input by the Dynamic Fork task later on.
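
The routing can be sketched in plain Python as a mapping from the decision to the dynamic task list. The task names come from the workflow above; the dictionary keys and reference names here are assumptions for illustration, not the template's exact definition:

```python
# Hypothetical sketch of the Switch routing that prepares the Dynamic Fork
# input: each decision maps to the list of tasks the fork should spawn.
def build_fork_tasks(decision: str) -> list:
    lit = {"name": "lit_review_task", "taskReferenceName": "lit_review_ref"}
    gap = {"name": "research_gap_task", "taskReferenceName": "research_gap_ref"}
    if decision == "literature-review":
        return [lit]
    if decision == "research-gap":
        return [gap]
    if decision == "both":
        return [lit, gap]  # two forks run in parallel
    return []  # "none" never reaches this point; the workflow terminates earlier
```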

Next, a Do While task is used to iterate through each sub-topic and generate its research findings. Inside the loop:

  • The Inline task is used to track the iteration counter and set the relevant sub-topic.
  • The Dynamic Fork will dynamically call the task(s) set during the prior Switch task.
    • The task(s) serve as the core research generation step carried out by LLM Text Complete tasks, which are powered by Perplexity models with web search access.
    • If both a literature review and a research gap report are required, the Dynamic Fork will generate two forks to conduct research for both areas in parallel. Likewise, if only one task is required, the Dynamic Fork will generate one fork.
  • Another Switch task and Set Variable task combination is used to concatenate each sub-topic’s research findings into a single array, updating the previously declared lit_reviews and/or research_gaps variables.
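
The loop’s bookkeeping can be approximated sequentially in Python. Here, run_research is a hypothetical stand-in for the LLM Text Complete tasks spawned by the Dynamic Fork:

```python
# Simplified, sequential sketch of the Do While loop: iterate over the
# sub-topics, run the selected research task(s) for each, and concatenate
# the findings into the lit_reviews / research_gaps arrays.
def collect_findings(subtopics, decision, run_research):
    lit_reviews, research_gaps = [], []
    for subtopic in subtopics:  # the Inline task advances this iteration
        if decision in ("literature-review", "both"):
            lit_reviews.append(run_research("lit_review_task", subtopic))
        if decision in ("research-gap", "both"):
            research_gaps.append(run_research("research_gap_task", subtopic))
    return lit_reviews, research_gaps
```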

Finally, once the research is completed, another LLM Text Complete task synthesizes the sub-topics’ findings into a formatted report that answers the user’s original question. A custom Worker task then generates a .pdf file that is saved to the user’s local machine for convenient access.

Step 2: Configure AI/LLM integrations and models

To use the LLM Text Complete tasks, you must set up the AI/LLM integrations with the relevant model providers. The integrations required for the agentic_research workflow have already been imported during Step 1:

  • openAI—used for determine_research_task and break_into_subtopics
  • perplexity—used for lit_review_task and research_gap_task (both used in a Dynamic Fork)
  • AnthropicClaude—used for compile_subtopics_response

To finish setting up the AI/LLM integrations:

  1. From the agentic_research workflow definition, go to the Dependencies tab.
  2. In the Integrations section, select each integration to provide the API Key, then select Save.

Screenshot of workflow integrations in Conductor UI.

Once completed, you can proceed to add specific models for each integration.

To add a model to your integration:

  1. Go to Integrations on the left navigation menu.

  2. For each integration (openAI, perplexity, AnthropicClaude), select the + icon (Add/Edit models), then select + New model.

    Screenshot of integrations list in Conductor UI.

  3. Add the following Model name for the corresponding integration:

    • openAI—gpt-4o
    • perplexity—sonar
    • AnthropicClaude—claude-3-7-sonnet-20250219
  4. Ensure that the Active toggle is switched on and select Save.

The AI/LLM models are now ready to use.

Step 3: Configure AI prompts

To use the LLM Text Complete tasks, you must set up the AI prompts. The five prompts required for the agentic_research workflow have already been imported during Step 1:

  • query_task_decision
  • break_into_subtopics
  • literature_review_task
  • research_gap_task
  • compile_subtopic_responses

For example, the query_task_decision prompt determines the subsequent task(s) (research-gap, literature-review, both, or none) based on the user's query:


    You are an academic research agent.

    Given the user's request, identify what kind of research task they want to perform:

    Options:
    - literature-review - if they are asking for a summary of existing knowledge
    - research-gap - if they want to identify what is still unknown or under-researched
    - both - if they want both a literature review and to find gaps
    - none - if the query doesn't request any research

    User query: "${user-query}"

    Output only one of: "literature-review", "research-gap", "both", or "none"

    Ex. 1
    Query: What are the latest findings in child development psychology?
    Output: literature-review

    Ex. 2
    Query: What is still unknown about neutron stars?
    Output: research-gap

    Ex. 3
    Query: What do we know and don't know about black holes?
    Output: both

    Ex. 4
    Query: Write me a Haiku!
    Output: none
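
Because the workflow branches on this output, downstream logic depends on the model emitting exactly one of the four labels. A small guard like the following (a hypothetical addition, not part of the template) can normalize stray quotes, whitespace, or casing before the Switch task evaluates the decision:

```python
# Normalize the raw LLM output to one of the four expected decisions,
# falling back to "none" if the model returned anything unexpected.
VALID_DECISIONS = {"literature-review", "research-gap", "both", "none"}

def parse_decision(raw: str) -> str:
    decision = raw.strip().strip('"').strip().lower()
    return decision if decision in VALID_DECISIONS else "none"
```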

To finish setting up the AI prompts:

  1. Go to Definitions > Workflow and select the agentic_research workflow.
  2. Go to the Dependencies tab.
  3. In the Prompts section, select each AI prompt and add the associated Model(s), then select Save:
    • break_into_subtopics—openAI:gpt-4o
    • query_task_decision—openAI:gpt-4o
    • literature_review_task—perplexity:sonar
    • research_gap_task—perplexity:sonar
    • compile_subtopic_responses—AnthropicClaude:claude-3-7-sonnet-20250219

Screenshot of workflow prompts in Conductor UI.

Step 4: Set up custom worker

The final .pdf report is created using a custom Worker task that takes the raw report content and generates a file based on the supplied file name. To use the task, you need to set up a worker locally and connect it to the Conductor server with access credentials.
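
The worker’s contract is simple: it takes the compiled report content and a file name, writes a file, and reports where it landed. The template’s actual worker (in the awesome-conductor-apps repository) renders a PDF; the stand-in below writes plain text and omits the Conductor SDK wiring, purely to illustrate the input/output shape:

```python
from pathlib import Path

# Minimal stand-in for the report worker: receives the report content and
# the target file name, writes the file, and returns its path. The real
# worker uses a PDF library and registers with Conductor via the Python SDK.
def generate_report(report_content: str, filename: str, out_dir: str = ".") -> str:
    path = Path(out_dir) / filename
    path.write_text(report_content, encoding="utf-8")
    return str(path)
```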

To retrieve the access credentials:

  1. Go to Definitions > Workflow and select the agentic_research workflow.

  2. In the Workflow tab on the right-hand panel, select Get Access Keys.

    Screenshot of Conductor UI where you can get the access keys.

  3. Copy the key ID, secret, and server URL and store them securely.

To set up the worker:

  1. Copy the following command into your terminal:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/conductor-oss/awesome-conductor-apps/refs/heads/agent_research_fix_enh/python/agentic_research/workers/install.sh)"
  2. Enter the access key ID, secret, and server URL into the terminal when prompted.

Step 5: Run the workflow

With the workflow fully set up, give it a run.

To run the workflow:

  1. From the agentic_research workflow definition, go to the Run tab.

  2. Enter the Input Params.

    Example:

    {
      "question": "What are the latest updates on cancer research in 2025?",
      "filename": "latest-cancer-research-updates.pdf"
    }
  3. Select Execute.

Run the agentic research workflow in Conductor UI.

Workflow output

Once the workflow is completed, a .pdf file of the research findings is saved to your local machine. You can locate it in /awesome-conductor-apps/python/agentic_research/workers. Open the file to review the research report.

Example PDF file created by the workflow.

Workflow modifications

This template provides a starting point for customizing the workflow to your needs. You can easily swap out the AI models or modify the AI prompts for better results.

note

If you want to switch out the AI models, you must modify the workflow's LLM Text Complete task and its corresponding AI prompt before running the workflow.