
Accepted for/Published in: JMIR Medical Education

Date Submitted: Jul 14, 2024
Date Accepted: Dec 3, 2024

The final, peer-reviewed published version of this preprint can be found here:

Performance Evaluation and Implications of Large Language Models in Radiology Board Exams: Prospective Comparative Analysis

Wei B

Performance Evaluation and Implications of Large Language Models in Radiology Board Exams: Prospective Comparative Analysis

JMIR Med Educ 2025;11:e64284

DOI: 10.2196/64284

PMID: 39819381

PMCID: 11756834

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Contrasting the performance of mainstream Large Language Models in Radiology Board Examinations

  • Boxiong Wei

ABSTRACT

Background:

Advances in artificial intelligence have enabled large language models (LLMs) to significantly influence radiology education and diagnostic accuracy.

Objective:

This study evaluates the performance of mainstream large language models, including GPT-4, Claude, Bard, Tongyi Qianwen, and Gemini Pro, on radiology board exams.

Methods:

A comparative analysis of 150 multiple-choice questions from radiology board exams, excluding image-based questions, was conducted. Model accuracy on these text-based questions, categorized by cognitive level and medical specialty, was compared using chi-square tests and ANOVA.
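As a rough illustration of the chi-square comparison described above, the sketch below builds a models-by-outcome contingency table from per-model correct-answer counts and computes the test statistic. Only the totals for GPT-4 (83.3%) and Tongyi Qianwen (70.7%) are reported in this abstract; the counts for the remaining models are hypothetical placeholders, and the calculation is a stdlib-only approximation of what a package such as SciPy's `chi2_contingency` would perform.

```python
# Hedged sketch of the chi-square comparison across models.
# Counts marked "placeholder" are NOT from the study; only the
# GPT-4 and Tongyi Qianwen percentages appear in the abstract.
TOTAL = 150  # text-based questions per model

correct = {
    "GPT-4": 125,           # 83.3% reported
    "Tongyi Qianwen": 106,  # 70.7% reported
    "Claude": 90,           # placeholder
    "Bard": 85,             # placeholder
    "Gemini Pro": 95,       # placeholder
}

# Contingency table: one row per model, columns = (correct, incorrect).
table = [[c, TOTAL - c] for c in correct.values()]

# Expected counts under independence, then the chi-square statistic.
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

chi2 = sum(
    (obs - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i, row in enumerate(table)
    for j, obs in enumerate(row)
)
dof = (len(table) - 1) * (len(table[0]) - 1)
print(f"chi2={chi2:.2f}, dof={dof}")
```

A large statistic relative to the chi-square distribution with 4 degrees of freedom would indicate that accuracy differs significantly across the five models, which is the kind of omnibus result the study reports.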

Results:

GPT-4 achieved the highest accuracy (83.3%), significantly outperforming the other models. Tongyi Qianwen also performed well (70.7%). Performance varied across question types and specialties: GPT-4 excelled in both lower-order and higher-order questions, while Claude and Bard struggled with complex diagnostic questions.

Conclusions:

GPT-4 and Tongyi Qianwen show promise in medical education and training. The study emphasizes the need for domain-specific training datasets to enhance large models' effectiveness in specialized fields like radiology.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.