Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jul 6, 2025
Date Accepted: Nov 30, 2025
Performance of ChatGPT-4o, Claude 3, and DeepSeek in BI-RADS Category 4 Classification and Malignancy Prediction from Mammography Reports: A Retrospective Diagnostic Study
ABSTRACT
Background:
Mammography is a key imaging modality for breast cancer screening and diagnosis, with the Breast Imaging Reporting and Data System (BI-RADS) providing standardized risk stratification. However, BI-RADS category 4 lesions pose a diagnostic challenge due to their wide malignancy probability range and significant overlap between benign and malignant findings. Moreover, current interpretations rely heavily on radiologists’ expertise, leading to variability and potential diagnostic errors. Recent advances in large language models (LLMs), such as ChatGPT-4o, Claude 3, and DeepSeek, offer new possibilities for automated medical report interpretation. This study investigates the feasibility of using LLMs to assist in subcategorizing BI-RADS category 4 lesions and predicting malignancy based on free-text mammography reports.
Objective:
This study aims to explore the feasibility of using LLMs to subcategorize BI-RADS category 4 lesions and evaluate whether they are benign or malignant based on free-text mammography reports.
Methods:
This retrospective, single-center study included 307 patients (mean age 47.25 ± 11.39 years) with BI-RADS category 4 mammography reports obtained between May 2021 and March 2024. Three LLMs (ChatGPT-4o, Claude 3-Opus, and DeepSeek), along with junior and senior radiologists, independently assigned subcategories (4A, 4B, 4C) based on the original imaging descriptions. Pathology served as the reference standard, and the reproducibility of the LLMs' predictions was assessed. The diagnostic performance of the radiologists and LLMs was compared, and the internal reasoning behind the LLMs' misclassifications was analyzed.
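To make the classification workflow concrete, the minimal Python sketch below shows how a free-text mammography report could be submitted to one of the models for subcategory assignment. The abstract does not report the exact prompts, model identifiers, or API settings used in the study, so every name and parameter in the snippet is an illustrative assumption rather than the authors' method.

```python
# Illustrative sketch only: prompt wording, model name, and settings are assumptions,
# not the configuration used in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = (
    "You are a breast imaging specialist. Based on the following mammography "
    "report describing a BI-RADS category 4 lesion, assign a subcategory "
    "(4A, 4B, or 4C). Answer with the subcategory only.\n\nReport:\n{report}"
)

def classify_report(report_text: str) -> str:
    """Send one free-text report to the model and return its BI-RADS 4 subcategory."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical stand-in for the ChatGPT-4o configuration
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(report=report_text)}],
        temperature=0,  # low temperature to improve reproducibility across repeated runs
    )
    return response.choices[0].message.content.strip()
```

In a setup like this, repeating the same query for each report and comparing the answers is one straightforward way to quantify the reproducibility reported in the Results.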
Results:
ChatGPT-4o demonstrated higher reproducibility than DeepSeek and Claude 3-Opus (0.850 vs. 0.824 and 0.732, respectively). Although the overall accuracy of the LLMs was lower than that of the radiologists (senior: 74.5%, junior: 72.0%, DeepSeek: 63.5%, ChatGPT-4o: 62.4%, Claude 3-Opus: 60.8%), their sensitivity was higher (senior: 80.7%, junior: 68.0%, DeepSeek: 84.0%, ChatGPT-4o: 84.7%, Claude 3-Opus: 92.7%), while their specificity remained lower (senior: 68.3%, junior: 76.1%, DeepSeek: 43.0%, ChatGPT-4o: 40.1%, Claude 3-Opus: 28.9%). DeepSeek achieved the best predictive performance among the LLMs, with an AUC of 0.64 (95% CI 0.57–0.70), followed by ChatGPT-4o (0.62, 95% CI 0.56–0.69) and Claude 3-Opus (0.61, 95% CI 0.54–0.67). By comparison, the junior and senior radiologists attained higher AUCs of 0.72 (95% CI 0.66–0.78) and 0.75 (95% CI 0.69–0.80), respectively. DeLong testing confirmed that all three LLMs performed significantly worse than both the junior and senior radiologists (all P < .05), whereas no significant difference was observed between the two radiologist groups (P = .550).
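As an illustration of how the reported metrics relate to one another, the short Python sketch below computes accuracy, sensitivity, specificity, and AUC from binary predictions against a pathology reference, and compares two AUCs with a simple bootstrap test. The bootstrap stands in for the DeLong test used in the study, and the input arrays are placeholders rather than the study data.

```python
# Minimal sketch, assuming 1 = pathology-proven malignant and 0 = benign.
# The arrays at the bottom are placeholders, not the study data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary predictions."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def bootstrap_auc_difference(y_true, score_a, score_b, n_boot=2000, seed=0):
    """Approximate two-sided p-value for the AUC difference between two raters/models."""
    rng = np.random.default_rng(seed)
    y_true, score_a, score_b = map(np.asarray, (y_true, score_a, score_b))
    observed = roc_auc_score(y_true, score_a) - roc_auc_score(y_true, score_b)
    diffs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # each resample needs both classes for AUC
            continue
        diffs.append(roc_auc_score(y_true[idx], score_a[idx])
                     - roc_auc_score(y_true[idx], score_b[idx]))
    diffs = np.asarray(diffs)
    centered = diffs - diffs.mean()          # shift to the null of equal AUCs
    p_value = float(np.mean(np.abs(centered) >= abs(observed)))
    return observed, p_value

# Placeholder example data (hypothetical LLM and radiologist calls)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
llm_pred = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
radiologist_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 0])

print(diagnostic_metrics(y_true, llm_pred))
print(bootstrap_auc_difference(y_true, llm_pred, radiologist_pred))
```

With predictions of this form, the high-sensitivity/low-specificity pattern reported for the LLMs corresponds to a confusion matrix with few false negatives but many false positives.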
Conclusions:
LLMs can feasibly distinguish benign from malignant lesions within BI-RADS category 4, showing good stability and high sensitivity but relatively low specificity. They show potential as a screening aid and may help radiologists reduce missed diagnoses.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.