Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Aug 1, 2025
Date Accepted: Nov 3, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Artificial intelligence tools for automating evidence synthesis: A scoping review
ABSTRACT
Background:
Rapidly and accurately synthesizing large volumes of evidence is a time- and resource-intensive process. Once published, reviews often risk becoming outdated, limiting their usefulness for decision-makers. Recent advancements in artificial intelligence (AI) have enabled researchers to automate various stages of the evidence synthesis process, from literature searching and screening to data extraction.
Objective:
We aimed to map the current landscape of AI tools used to automate evidence synthesis.
Methods:
Following the JBI methodology for scoping reviews, we searched Ovid MEDLINE, Ovid Embase, Scopus, and Web of Science in February 2025, and conducted a grey literature search in April 2025. We included articles published in any language from January 2021 onwards. Two reviewers independently screened citations using Rayyan, and we extracted data based on study design and key AI-related technical features.
Results:
We identified 7,841 unique citations through database searches and 19 additional records through a grey literature search. A total of 222 articles were included in the review. We identified 65 AI tools that automate either specific tasks or the entire evidence synthesis process. More than half of the included studies were published in 2024, reflecting a trend in the use of general-purpose large language models (LLMs) for evidence synthesis. Title and abstract screening, as well as data extraction, were the most studied tasks for automation.
Conclusions:
A broad, evolving suite of AI tools is available to support automation in evidence synthesis, leveraging increasingly complex AI methods. Optimal tool selection will likely depend on the review topic, researcher priorities, and the specific tasks to be automated. While these tools offer potential for reducing manual workload, ongoing evaluation to mitigate AI bias and to ensure the quality and integrity of reviews is essential for safeguarding evidence-based decision-making.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.