Accepted for/Published in: JMIR Formative Research

Date Submitted: Dec 27, 2023
Date Accepted: Sep 6, 2024

The final, peer-reviewed published version of this preprint can be found here:

Sugiura A, Saegusa S, Jin Y, Yoshimoto R, Smith ND, Dohi K, Higuchi T, Kozu T

Evaluation of RMES, an Automated Software Tool Utilizing AI, for Literature Screening with Reference to Published Systematic Reviews as Case-Studies: Development and Usability Study

JMIR Form Res 2024;8:e55827

DOI: 10.2196/55827

PMID: 39652380

PMCID: 11667133

Evaluation of Rapid Medical Evidence Synthesis (RMES), an Automated Software Tool Utilizing Artificial Intelligence, for Literature Screening with Reference to Published Systematic Reviews as Case-Studies

  • Ayaka Sugiura; 
  • Satoshi Saegusa; 
  • Yingzi Jin; 
  • Riki Yoshimoto; 
  • Nicholas D. Smith; 
  • Koji Dohi; 
  • Tadashi Higuchi; 
  • Tomotake Kozu

ABSTRACT

Background:

Systematic reviews and meta-analyses are important to evidence-based medicine (EBM), but information retrieval and literature screening are time-consuming. Rapid Medical Evidence Synthesis (RMES) is a software tool designed to support information retrieval, literature screening, and data extraction for EBM.

Objective:

Our objective was to evaluate the accuracy of RMES for literature screening with reference to published systematic reviews.

Methods:

We used RMES to automatically screen the titles and abstracts of PubMed-indexed articles included in 12 systematic reviews across 6 medical fields, by applying 4 filters: (1) study type; (2) study type + disease; (3) study type + intervention; and (4) study type + disease + intervention. We determined the numbers of articles correctly included by each filter relative to those included by the authors of each systematic review. Only PubMed-indexed articles were assessed.
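
To illustrate the evaluation described above, the following minimal Python sketch (not part of the published study; all names, PMIDs, and data structures are assumptions) computes the percentage of a review's included, PubMed-indexed articles that a screening filter labels as "include", and summarizes one filter across reviews as a median and range:

    # Assumed sketch of the screening-accuracy calculation (not the authors'
    # implementation): for one filter, the proportion of a review's
    # PubMed-indexed included articles that the filter labels as "include".
    from statistics import median

    def filter_accuracy(reference_pmids, labeled_include_pmids):
        """Return the % of reference-included articles correctly labeled."""
        if not reference_pmids:
            return 0.0
        correct = len(set(reference_pmids) & set(labeled_include_pmids))
        return 100.0 * correct / len(reference_pmids)

    # Toy example: one filter evaluated against two reviews (PMIDs are made up).
    per_review = [
        filter_accuracy({"111", "222", "333"}, {"111", "222", "999"}),  # 66.7
        filter_accuracy({"444", "555"}, {"444", "555"}),                # 100.0
    ]
    print(f"median {median(per_review):.1f}% "
          f"(range {min(per_review):.1f}%-{max(per_review):.1f}%)")

In the study, this percentage was computed for each of the 4 filters across the 12 reviews, which is what the medians and ranges reported in the Results refer to.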

Results:

Across the 12 reviews, the number of articles analyzed by RMES ranged from 46 to 5612. The number of PubMed-indexed articles included in the reviews ranged from 4 to 47. The median (range) percentages of articles correctly labeled by RMES using filters 1–4 were 80.9% (57.1%–100.0%), 65.2% (34.1%–81.8%), 70.5% (0.0%–100.0%), and 58.6% (0.0%–81.8%), respectively.

Conclusions:

This study demonstrated the good performance and accuracy of RMES for the initial screening of titles and abstracts of articles for systematic reviews. RMES has the potential to reduce the workload and time required for the initial screening of published studies.


 Citation

Please cite as:

Sugiura A, Saegusa S, Jin Y, Yoshimoto R, Smith ND, Dohi K, Higuchi T, Kozu T

Evaluation of RMES, an Automated Software Tool Utilizing AI, for Literature Screening with Reference to Published Systematic Reviews as Case-Studies: Development and Usability Study

JMIR Form Res 2024;8:e55827

DOI: 10.2196/55827

PMID: 39652380

PMCID: 11667133

© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.