Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Comparative Diagnostic Performance of a Multimodal Large Language Model (ChatGPT) versus a Dedicated ECG AI (ECG Buddy) in Detecting Myocardial Infarction from ECG Images
ABSTRACT
Background:
Accurate and timely electrocardiogram (ECG) interpretation is critical for diagnosing myocardial infarction (MI) in emergency settings. Recent advances in multimodal Large Language Models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT), have shown promise in interpreting medical images. However, whether these models analyze waveform patterns or simply rely on text cues remains unclear, underscoring the need for direct comparisons with dedicated ECG artificial intelligence (AI) tools.
Objective:
This study aimed to evaluate the diagnostic performance of ChatGPT, a general-purpose LLM, in detecting MI from ECG images and to compare its performance with that of ECG Buddy™, a dedicated AI-driven ECG analysis tool.
Methods:
This retrospective study evaluated and compared AI models for classifying MI using a publicly available 12-lead ECG dataset from Pakistan, with cases categorized as MI-positive (239 images) or MI-negative (689 images). ChatGPT (GPT-4o, version 2024-11-20) was queried with five MI confidence options, whereas ECG Buddy for Windows analyzed the images using its ST-elevation MI, acute coronary syndrome, and myocardial injury biomarkers.
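The five-option confidence query can be reduced to a binary MI-positive/MI-negative label for computing sensitivity and specificity. The sketch below illustrates one plausible mapping; the exact option wording and decision threshold are assumptions, since the abstract states only that five MI confidence options were offered.

```python
# Hypothetical five-point MI-confidence scale; the actual wording used in
# the study is not given in the abstract.
OPTIONS = [
    "definitely no MI",   # index 0
    "probably no MI",     # index 1
    "uncertain",          # index 2
    "probably MI",        # index 3
    "definitely MI",      # index 4
]

def to_binary(answer: str, threshold: int = 3) -> int:
    """Map a chosen option to 1 (MI-positive) when its index >= threshold,
    else 0 (MI-negative). The threshold of 3 is an assumption."""
    return int(OPTIONS.index(answer) >= threshold)

print(to_binary("probably MI"))   # 1 (counted as MI-positive)
print(to_binary("uncertain"))     # 0 (counted as MI-negative)
```

A different threshold (for example, counting "uncertain" as positive) would trade sensitivity against specificity, which is why the chosen mapping matters when interpreting the reported rates.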
Results:
Among 928 ECG recordings (25.8% MI-positive), ChatGPT achieved an accuracy of 65.95% (95% confidence interval [CI]: 62.80–69.00), area under the curve (AUC) of 57.34% (95% CI: 53.44–61.24), sensitivity of 36.40% (95% CI: 30.30–42.85), and specificity of 76.20% (95% CI: 72.84–79.33). In contrast, ECG Buddy achieved an accuracy of 96.98% (95% CI: 95.67–97.99), AUC of 98.80% (95% CI: 98.30–99.43), sensitivity of 96.65% (95% CI: 93.51–98.54), and specificity of 97.10% (95% CI: 95.55–98.22). DeLong’s test confirmed that ECG Buddy significantly outperformed ChatGPT (all P < .001). In an error analysis of 40 cases, ChatGPT provided clinically plausible explanations in only 7.5% of cases, whereas 35% were partially correct, 40% were completely incorrect, and 17.5% received no meaningful explanation.
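The reported accuracies can be sanity-checked against the sensitivities and specificities. The sketch below back-calculates confusion-matrix counts from the abstract's rates (the counts themselves are inferred by arithmetic, not taken from the paper) and recomputes accuracy from them.

```python
# Dataset sizes from the abstract.
n_pos, n_neg = 239, 689  # MI-positive / MI-negative ECG images

def metrics(tp: int, tn: int, n_pos: int, n_neg: int) -> dict:
    """Compute sensitivity, specificity, and accuracy (in %) from
    true-positive and true-negative counts."""
    total = n_pos + n_neg
    return {
        "sensitivity": 100 * tp / n_pos,
        "specificity": 100 * tn / n_neg,
        "accuracy": 100 * (tp + tn) / total,
    }

# ChatGPT: sensitivity 36.40% of 239 -> TP ~= 87;
#          specificity 76.20% of 689 -> TN ~= 525 (inferred counts).
chatgpt = metrics(87, 525, n_pos, n_neg)

# ECG Buddy: sensitivity 96.65% of 239 -> TP ~= 231;
#            specificity 97.10% of 689 -> TN ~= 669 (inferred counts).
buddy = metrics(231, 669, n_pos, n_neg)

print(round(chatgpt["accuracy"], 2))  # 65.95, matching the reported value
print(round(buddy["accuracy"], 2))    # 96.98, matching the reported value
```

Both recomputed accuracies agree with the reported figures, suggesting the published rates are internally consistent with a 239/689 class split.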
Conclusions:
LLMs such as ChatGPT underperform relative to specialized tools such as ECG Buddy in ECG image-based MI diagnosis. Further training may improve ChatGPT; however, domain-specific AI remains essential for clinical accuracy. The high performance of ECG Buddy underscores the importance of specialized models for achieving reliable and robust diagnostic outcomes.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.