Accepted for/Published in: JMIR Cardio
Date Submitted: Apr 29, 2025
Open Peer Review Period: May 5, 2025 - Jun 30, 2025
Date Accepted: Jan 19, 2026
Large Language Models in Cardiology: A Systematic Review
ABSTRACT
Background:
Large language models (LLMs) are increasingly used in health care, but their role in cardiology has not been systematically evaluated.
Objective:
This review aimed to assess the applications, performance, and limitations of LLMs across diverse cardiology tasks, including chronic and progressive conditions, acute events, education, and diagnostic testing.
Methods:
A systematic search of PubMed and Scopus was conducted for studies published up to April 14, 2024, using keywords related to LLMs and cardiology. Eligible studies were original research evaluating LLM applications in cardiology; reviews, commentaries, and editorials were excluded. Risk of bias was assessed with the QUADAS-2 tool. The review protocol was registered in PROSPERO (CRD42024556397).
Results:
A total of 29 studies were identified for inclusion. Of these, 27 were included in the quantitative synthesis and 2 in the qualitative synthesis. The studies were grouped into five categories: chronic and progressive cardiac conditions (11 studies), acute cardiac events (3 studies), physician education (7 studies), patient education (4 studies), and cardiac diagnostic tests (4 studies). Across chronic conditions, ChatGPT-3.5 answered 43/47 (91%) heart failure questions accurately, though most responses required college-level reading. In acute scenarios, GPT-4 provided appropriate guidance in 15/20 (75%) pediatric life support scenarios but gave unsafe advice in 5/20 (25%). In physician and patient education, LLMs demonstrated potential in exam-style assessments, with GPT-4 significantly outperforming physicians on some tasks (P<.001), while readability studies showed improvements in accessibility (P<.001) but also highlighted language complexity. In diagnostic testing, GPT-4 interpreted 36/40 (90%) ECG cases correctly, significantly better than emergency physicians (31/40, 77%; P<.05), but showed lower accuracy in echocardiography, reflecting the lack of image-processing capability in the models available at the time.
Conclusions:
LLMs demonstrate potential in cardiology, particularly in patient education, ECG interpretation, and exam-style assessments. However, their performance is inconsistent across clinical scenarios, with notable weaknesses in emergency response, specialized diagnostics, and patient accessibility. Evidence is limited by small sample sizes, reliance on expert evaluation rather than patient outcomes, and the predominance of in silico designs, which restrict generalizability.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.