Currently accepted at: JMIR Neurotechnology
Date Submitted: Dec 5, 2024
Open Peer Review Period: Sep 16, 2025 - Nov 11, 2025
Date Accepted: Jan 21, 2026
This paper has been accepted and is currently in production.
It will appear shortly under DOI 10.2196/69708.
The final accepted version (not copyedited yet) is in this tab.
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Evaluating GPT-4V's Diagnostic Accuracy and Visual Integration in Neuroradiology: A Case-Based Study Using Board-Style Exam Questions
ABSTRACT
Background:
The integration of multimodal capabilities in GPT-4V represents an advance in the application of AI to clinical fields, particularly neuroradiology. Despite preliminary evidence of its capability in medical imaging interpretation, questions remain about its performance in complex scenarios that require integrated analysis of clinical history and imaging findings.
Objective:
To evaluate GPT-4V's diagnostic performance on neuroradiology board-style multiple-choice questions, integrating both clinical data and medical imaging.
Methods:
Twenty-nine neuroradiology cases from the RSNA Case Collection, each including a clinical vignette and CT/MRI images, were presented to GPT-4V. The model evaluated both the imaging studies and the clinical data, selected an answer from the multiple-choice options, and quantified the relative influence of image versus text data on its decision.
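The per-case setup described above can be sketched as a function that assembles a multimodal chat message (clinical vignette, image, and answer options) in the content format accepted by GPT-4V-style chat APIs. This is an illustrative reconstruction, not the authors' actual prompt: the wording of the question and the request for influence percentages are assumptions.

```python
import base64


def build_case_prompt(vignette, image_bytes, options):
    """Assemble one multimodal message for a neuroradiology case.

    Combines the clinical vignette, a CT/MRI image (raw bytes), and the
    multiple-choice options into the text-plus-image content format used
    by GPT-4V-style chat APIs. The prompt wording below is a hypothetical
    reconstruction of the study's setup.
    """
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    question = (
        f"Clinical vignette: {vignette}\n"
        f"Options: {'; '.join(options)}\n"
        "Select the most likely diagnosis and report, as percentages, "
        "how much the image versus the clinical text influenced your answer."
    )
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }]
```

A message list built this way would then be passed to the model endpoint (e.g. a chat-completions call), and the returned answer and self-reported influence percentages recorded per case.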
Results:
GPT-4V achieved 75.86% diagnostic accuracy, with image data contributing an average of 66.9% to the final answers. The model relied more heavily on imaging in incorrectly answered cases (75.0% image contribution) than in correctly answered ones (61.74%).
Conclusions:
The findings suggest potential over-reliance on imaging data in complex cases where clinical context is crucial. Our results highlight the need for improved integration of text and image data in AI models, with future development focusing on refining multimodal decision-making processes to enhance clinical accuracy.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.