Currently submitted to: Journal of Medical Internet Research

Date Submitted: Apr 8, 2026
Open Peer Review Period: Apr 9, 2026 - Jun 4, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Large Language Model–Based Agents for Automated Research Reproducibility: An Exploratory Evaluation Study in Alzheimer’s Disease

  • Nicholas Dobbins; 
  • Christelle Xiong; 
  • Kristine Lan; 
  • Meliha Yetisgen

ABSTRACT

Background:

Reproducibility is a cornerstone of scientific validity, yet many biomedical studies lack sufficient transparency for independent verification. Recent advances in Large Language Models (LLMs) enable the development of autonomous agent systems capable of performing complex research tasks, offering new opportunities to assess and enhance reproducibility at scale.

Objective:

To evaluate the ability of LLM-based autonomous agents to reproduce key findings from published Alzheimer’s disease studies using a shared, publicly available dataset.

Methods:

We used the National Alzheimer’s Coordinating Center Uniform Data Set “Quick Access” dataset. Five eligible studies were identified through citation-based screening and predefined inclusion criteria. We developed a multi-agent system using GPT-4o (Autogen framework), simulating a research team to generate and execute code based on study abstracts, methods, and selected data dictionary variables. Reproducibility was evaluated at the assertion level using extracted abstract findings, with agreement defined by numerical tolerance or directional consistency. We additionally assessed statistical method alignment and overall workflow coherence.
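The assertion-level agreement rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not state the exact numerical tolerance, so the 10% relative tolerance below is an assumed placeholder, and the function names are hypothetical.

```python
def numbers_agree(reported: float, reproduced: float, rel_tol: float = 0.10) -> bool:
    """Numerical agreement: reproduced estimate falls within a relative
    tolerance of the reported value (tolerance value assumed for illustration)."""
    if reported == 0:
        return abs(reproduced) <= rel_tol
    return abs(reproduced - reported) / abs(reported) <= rel_tol

def directions_agree(reported: float, reproduced: float) -> bool:
    """Directional consistency: both effect estimates point the same way."""
    return (reported > 0) == (reproduced > 0)

def finding_reproduced(reported: float, reproduced: float) -> bool:
    """A finding counts as reproduced if either agreement criterion is met."""
    return numbers_agree(reported, reproduced) or directions_agree(reported, reproduced)
```

For example, a reported odds ratio of 1.50 reproduced as 1.44 would agree numerically, while a reproduction of 1.90 would still count as directionally consistent (both above the null in the same direction) under a scheme like this.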

Results:

A total of 35 findings were extracted across 5 studies. LLM agents reproduced a mean of 53.2% of findings, with 3/5 studies achieving majority replication. Agreement was higher for directionality and significance than for numerical estimates. Exact statistical method alignment occurred in 1/5 studies; 8/15 comparisons were partially aligned, mainly for standard methods. Domain-specific methods were often omitted or simplified. Reproduction required iterative correction (mean 35.6 steps); code errors occurred in 47.2% of runs but were resolved autonomously. Failures were primarily due to incomplete reporting and incorrect implementation.

Conclusions:

LLM-based autonomous agents demonstrate moderate capability in reproducing published biomedical findings, particularly for studies with clear, well-specified methods. However, reproducibility is limited by incomplete reporting, challenges in implementing domain-specific methods, and breakdowns in multi-step workflow fidelity. These findings suggest that LLM agents may serve as scalable tools for preliminary reproducibility assessment, while emphasizing the need for improved methodological transparency and validation frameworks in biomedical research.


Citation

Please cite as:

Dobbins N, Xiong C, Lan K, Yetisgen M

Large Language Model–Based Agents for Automated Research Reproducibility: An Exploratory Evaluation Study in Alzheimer’s Disease

JMIR Preprints. 08/04/2026:97652

DOI: 10.2196/preprints.97652

URL: https://preprints.jmir.org/preprint/97652


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.