Currently submitted to: Journal of Medical Internet Research
Date Submitted: Feb 20, 2026
Open Peer Review Period: Feb 21, 2026 - Apr 18, 2026
NOTE: This is an unreviewed preprint.
Warning: This is an unreviewed preprint. Readers are cautioned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (in which case a note "no longer under consideration" will appear above).
Peer review me: Readers with relevant interest and expertise are encouraged to sign up as a peer reviewer if the paper is within an open peer-review period (in this case, a "Peer Review Me" button to sign up as a reviewer is displayed above). All preprints currently open for review are listed on the JMIR Preprints site. Outside of the formal open peer-review period, we encourage you to share the preprint.
Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).
Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.
Settings: If you are the author, you can log in and change the preprint display settings, but the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.
Submit: To post your own preprint, simply submit to any JMIR journal, and choose the appropriate settings to expose your submitted version as preprint.
Warning: This is an author submission that has not been peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Methodological Considerations for Conducting an AI-Assisted Systematic Literature Review: The Importance of a Human-in-the-Loop Approach to Maintain Scientific Rigor
ABSTRACT
Systematic literature reviews (SLRs) are critical for evidence synthesis and play a central role in supporting research, policy development, and evidence‑based decision‑making across healthcare and related disciplines. However, traditional SLRs are resource-intensive and time-consuming, often requiring months of manual effort to screen thousands of records, extract data, and maintain methodological rigor. As global scientific output continues to grow exponentially, these operational challenges have intensified, contributing to longer completion timelines, greater workforce burden, and a heightened risk that reviews become outdated shortly after publication. In response, there is growing interest in artificial intelligence (AI) approaches to enhance the efficiency and scalability of SLRs. AI tools now offer support for a broad range of review tasks, including assisting with search strategy development, identifying relevant concepts, prioritizing records for screening, and supporting data extraction and risk‑of‑bias assessments. AI can significantly accelerate labor-intensive stages, reduce human error during repetitive tasks, and enable the synthesis of evidence bases that might otherwise be impractical to review manually. However, these efficiencies must be balanced against the risks associated with AI, including bias, lack of transparency, variable outputs, and hallucinations (outputs that appear plausible but are factually incorrect). A human-in-the-loop approach is therefore essential to validate AI outputs and maintain scientific integrity. Human expertise remains critical for defining research questions, validating search strategies, confirming study eligibility, interpreting nuanced data, and making final judgments on quality and risk of bias. Clear methodological guidance is required to support teams in integrating AI tools responsibly, transparently, and reproducibly into SLR workflows. 
Methodological considerations include selecting appropriate tools, defining oversight strategies, and applying performance metrics such as precision and recall. This paper aims to provide methodological guidance on the effective integration of AI into each stage of the SLR process, drawing on both published literature and the authors’ real-world experience. We outline key considerations for selecting and implementing AI tools while maintaining human oversight. We also discuss how to maintain transparency, auditability, and alignment with established standards, including PRISMA‑P, PRISMA‑trAIce, and emerging guidance from regulators and health technology assessment bodies. Finally, we present future directions for responsible AI use in SLRs. AI should complement, not replace, human judgment. When implemented within a human-in-the-loop framework, AI has the potential to accelerate evidence synthesis, enabling faster, scalable, and rigorous reviews while preserving transparency and reproducibility.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.