Currently submitted to: JMIR Research Protocols
Date Submitted: Dec 30, 2025
Open Peer Review Period: Dec 30, 2025 - Feb 24, 2026
NOTE: This is an unreviewed preprint.
Warning: This is an author submission that has not been peer-reviewed or edited. Unless marked as accepted, preprints should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Evaluating the Methodological Quality of Artificial Intelligence–Assisted Systematic Reviews: Protocol for a Mixed Methods Meta-Research Study
ABSTRACT
Background:
Artificial intelligence (AI), including large language models (LLMs), is increasingly integrated into systematic review (SR) workflows. AI tools may accelerate searching, screening, data extraction, and reporting, but their effects on methodological quality, reporting completeness, transparency, and reproducibility remain uncertain. Existing evaluations largely examine isolated tasks, and inconsistent disclosure of AI use limits reproducibility and oversight.
Objective:
This four-phase mixed-methods meta-research study will: (1) compare the methodological quality of AI-assisted versus traditional SRs; (2) refine, finalize, and apply a preliminary AI Transparency and Disclosure Index (AITDI); (3) evaluate reproducibility by comparing outputs across repeated runs of the same AI model, across different AI models, and between AI models and human reviewers at multiple SR stages; and (4) explore knowledge user perspectives on rigor, transparency, and trust in AI-assisted SR.
Methods:
We will conduct a matched cohort analysis of SRs published in biomedical journals between 2023 and 2025. Each AI-assisted SR will be matched 1:2 with traditional SRs by publication year, clinical domain, review type, and meta-analysis status. Two independent reviewers will apply AMSTAR-2 (methodological quality), PRISMA 2020 (reporting completeness), and, when applicable, ROBIS (risk of bias). A preliminary AITDI will be refined and then applied to all AI-assisted SRs. Reproducibility will be assessed using SR-derived task sets to compare outputs across repeated runs of the same model, across different models, and between AI and human reviewers at key SR stages. Semi-structured interviews with authors, editors, clinicians, policymakers, and patient partners will be analyzed using reflexive thematic analysis.
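For illustration only, the 1:2 exact matching described above could be operationalized roughly as in the following minimal Python sketch. This is not the registered procedure: the record fields (pub_year, clinical_domain, review_type, has_meta_analysis), the seeded random tie-break, and the handling of sparse strata are assumptions introduced here for clarity.

# Illustrative sketch: exact 1:2 matching of AI-assisted SRs to traditional SRs
# on the four protocol matching variables. Field names, the random tie-break,
# and the sparse-stratum fallback are assumptions, not the study's actual code.
import random
from collections import defaultdict

MATCH_KEYS = ("pub_year", "clinical_domain", "review_type", "has_meta_analysis")

def match_controls(ai_srs, traditional_srs, ratio=2, seed=2025):
    """Return a mapping {AI-assisted SR id: list of matched traditional SR ids}."""
    rng = random.Random(seed)
    # Index candidate traditional SRs by their matching stratum.
    pool = defaultdict(list)
    for sr in traditional_srs:
        pool[tuple(sr[k] for k in MATCH_KEYS)].append(sr["id"])

    matches = {}
    for sr in ai_srs:
        candidates = pool[tuple(sr[k] for k in MATCH_KEYS)]
        if len(candidates) < ratio:
            matches[sr["id"]] = None  # too few exact matches; flag for manual resolution
            continue
        chosen = rng.sample(candidates, ratio)
        for c in chosen:
            candidates.remove(c)  # sample without replacement across cases
        matches[sr["id"]] = chosen
    return matches

In practice the protocol may relax exact matching or resolve sparse strata by hand; the sketch only conveys the intended 1:2 matched structure.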
Results:
As of December 2025, the study has been preregistered on OSF (DOI: 10.17605/OSF.IO/Q5JRW), the search strategy has been finalized, and title/abstract screening has begun. Data extraction is planned for March–May 2026, followed by AITDI refinement and reproducibility testing from May to October 2026. Qualitative interviews are anticipated from October 2026 to February 2027, with final analyses by April 2027 and dissemination planned for mid-2027.
Conclusions:
This study will provide one of the first empirical comparisons of methodological quality, transparency, and reproducibility of AI-assisted versus traditional SRs in the LLM era. Findings will inform expectations for responsible AI integration and support refinement of reporting and methodological best practices, including future development of AI-specific reporting and appraisal extensions (e.g., PRISMA-LLM, AMSTAR-LLM).
Clinical Trial: N/A
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.