
Currently submitted to: JMIR Medical Informatics

Date Submitted: Nov 10, 2025
Open Peer Review Period: Nov 25, 2025 - Jan 20, 2026
(closed for review)

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint. Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (a note "no longer under consideration" will appear above).

Peer review me: Readers with interest and expertise are encouraged to sign up as peer reviewers if the paper is within an open peer-review period (in this case, a "Peer Review Me" button to sign up as a reviewer is displayed above). All preprints currently open for review are listed here. Outside of the formal open peer-review period, we encourage you to tweet about the preprint.

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings, but the preprint URL/DOI is meant to be stable and citable, so it should not be removed once posted.

Submit: To post your own preprint, simply submit to any JMIR journal and choose the appropriate settings to expose your submitted version as a preprint.

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Improving Radiology Report Error Detection Using a Multi-Pass LLM Framework

  • Songsoo Kim; 
  • Seungtae Lee; 
  • See Young Lee; 
  • Joonho Kim; 
  • Keechan Kan; 
  • Dukyong Yoon

ABSTRACT

Background:

Large language model (LLM) proofreaders for radiology reports generate many false positives (FPs) due to the low prevalence of genuine errors.

Objective:

This study aimed to determine whether an optimized LLM framework could improve both precision and cost-efficiency without compromising error detection capability.

Methods:

In this retrospective study, 1,000 radiology reports (radiography, ultrasonography, CT, and MRI; 250 each) were sampled from the Medical Information Mart for Intensive Care III (MIMIC-III) database. Two public chest radiography corpora (CheXpert and Open-i) served as external test sets. Three LLM frameworks were evaluated: a single-prompt detector (Framework 1); a report extractor plus single-prompt detector (Framework 2); and an extractor, detector, and false-positive verifier (Framework 3). Precision was assessed using positive predictive value (PPV), and detection yield using detected errors per 1,000 reports (DE/1k). Overall efficiency was estimated from model inference computational costs.
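The three-pass design of Framework 3 can be sketched as a simple pipeline: an extractor narrows each report to checkable statements, a detector flags candidate errors, and a verifier discards likely false positives. The code below is a minimal, hypothetical illustration of that data flow; the `run_three_pass` wiring and the `toy_*` stand-ins (which replace the study's actual LLM prompts with deterministic heuristics) are assumptions of this sketch, not the authors' implementation.

```python
from typing import Callable, List

def run_three_pass(
    report: str,
    extract: Callable[[str], List[str]],
    detect: Callable[[List[str]], List[str]],
    verify: Callable[[List[str]], List[str]],
) -> List[str]:
    """Chain the three passes; in the study each stage is a separate LLM call."""
    findings = extract(report)      # pass 1: isolate checkable statements
    candidates = detect(findings)   # pass 2: flag possible errors
    return verify(candidates)       # pass 3: drop false positives

# Toy stand-ins for the three LLM prompts (illustrative only).
def toy_extract(report: str) -> List[str]:
    # Split the report into individual finding sentences.
    return [s.strip() for s in report.split(".") if s.strip()]

def toy_detect(findings: List[str]) -> List[str]:
    # Flag any sentence mentioning laterality, a common report-error type.
    return [f for f in findings if "left" in f.lower() or "right" in f.lower()]

def toy_verify(candidates: List[str]) -> List[str]:
    # Keep only candidates that mention both sides in one sentence,
    # a crude proxy for an internal contradiction.
    return [c for c in candidates
            if "left" in c.lower() and "right" in c.lower()]

report = ("Left lower lobe opacity noted. "
          "Prior right-sided, now described as left pleural effusion. "
          "Heart size normal.")
confirmed = run_three_pass(report, toy_extract, toy_detect, toy_verify)
```

In this toy run, the detector flags two laterality-mentioning sentences but the verifier keeps only the internally contradictory one, mirroring how the third pass trims false positives before human review.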

Results:

PPV increased from 0.063 (95% CI 0.036–0.101) in Framework 1 to 0.079 (95% CI 0.049–0.118) in Framework 2 and 0.159 (95% CI 0.090–0.252) in Framework 3 (P<.001). Despite improved PPV, detected errors remained stable (DE/1k: 12–14). Human review burden decreased from 192 to 88 reports. Framework 3 also reduced costs to $5.58 per 1,000 reports (vs $9.72 and $6.85 for Frameworks 1 and 2; 42.6% and 18.5% reductions). External validation confirmed similar improvements.
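The relationship between PPV and DE/1k in these results can be checked with simple arithmetic. The snippet below defines the two metrics and back-calculates the true-positive counts implied by the reported figures (PPV × flagged reports); the counts are inferred for illustration, not taken from the authors' raw data.

```python
# Back-of-envelope check of how the reported precision and yield relate.

def ppv(true_positives: int, flagged: int) -> float:
    """Positive predictive value: confirmed errors / reports flagged."""
    return true_positives / flagged

def de_per_1k(true_positives: int, n_reports: int) -> float:
    """Detected errors per 1,000 reports processed."""
    return 1000 * true_positives / n_reports

# Framework 1: 192 flagged reports at PPV 0.063 implies ~12 true errors.
# Framework 3: 88 flagged reports at PPV 0.159 implies ~14 true errors.
f3_ppv = ppv(14, 88)            # ~0.159
f3_yield = de_per_1k(14, 1000)  # 14.0
```

The point of the comparison: yield (DE/1k) stays essentially flat while the number of reports a human must review falls from 192 to 88, which is where the precision gain pays off.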

Conclusions:

A three-pass LLM framework more than doubled precision and halved the cost of radiology report error detection without compromising error detection capability, offering sustainable strategies for AI-assisted quality assurance in both radiological practice and research.


 Citation

Please cite as:

Kim S, Lee S, Lee SY, Kim J, Kan K, Yoon D

Improving Radiology Report Error Detection Using a Multi-Pass LLM Framework

JMIR Preprints. 10/11/2025:87368

DOI: 10.2196/preprints.87368

URL: https://preprints.jmir.org/preprint/87368


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.