Accepted for/Published in: JMIR Research Protocols
Date Submitted: Dec 30, 2025
Open Peer Review Period: Dec 30, 2025 - Feb 24, 2026
Date Accepted: Mar 17, 2026
Evaluating the Methodological Quality of Artificial Intelligence–Assisted Systematic Reviews: Protocol for a Mixed Methods Meta-Research Study
ABSTRACT
Background:
Artificial intelligence (AI), including large language models (LLMs), is increasingly integrated into systematic review (SR) workflows. AI tools may accelerate searching, screening, data extraction, and reporting, but their effects on methodological quality, reporting completeness, transparency, and reproducibility remain uncertain. Existing evaluations largely examine isolated tasks, and inconsistent disclosure of AI use limits reproducibility and oversight.
Objective:
This four-phase mixed methods meta-research study will: (1) compare the methodological quality of AI-assisted versus traditional SRs; (2) refine, finalize, and apply a preliminary AI Transparency and Disclosure Index (AITDI); (3) evaluate reproducibility by comparing outputs across repeated runs of the same AI model, across different AI models, and between AI models and human reviewers at multiple SR stages; and (4) explore knowledge users' perspectives on rigor, transparency, and trust in AI-assisted SRs.
Methods:
We will conduct a matched cohort analysis of SRs published from 2023–2025 in biomedical journals. Each AI-assisted SR will be matched 1:2 with traditional SRs by publication year, clinical domain, review type, and meta-analysis status. Two independent reviewers will apply AMSTAR-2 (methodological quality), PRISMA 2020 (reporting completeness), and, when applicable, ROBIS (risk-of-bias rigor). A preliminary AITDI will be refined and then applied to all AI-assisted SRs. Reproducibility will be assessed using SR-derived tasksets to compare outputs across repeated runs of the same model, across different models, and between AI and human reviewers at key SR stages. Semi-structured interviews with authors, editors, clinicians, policymakers, and patient partners will be analyzed using reflexive thematic analysis.
Results:
As of December 2025, the study has been preregistered on OSF (DOI: 10.17605/OSF.IO/Q5JRW), the search strategy has been finalized, and title/abstract screening has begun. Data extraction is planned for March–May 2026, followed by AITDI refinement and reproducibility testing from May–October 2026. Qualitative interviews are anticipated from October 2026–February 2027, with final analyses by April 2027 and dissemination planned for mid-2027.
Conclusions:
This study will provide one of the first empirical comparisons of methodological quality, transparency, and reproducibility of AI-assisted versus traditional SRs in the LLM era. Findings will inform expectations for responsible AI integration and support refinement of reporting and methodological best practices, including future development of AI-specific reporting and appraisal extensions (e.g., PRISMA-LLM, AMSTAR-LLM).
Clinical Trial: N/A
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.