Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jul 6, 2025
Date Accepted: Nov 30, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Using ChatGPT-4o, Claude 3, and DeepSeek for BI-RADS Category 4 Classification and Malignancy Prediction on Mammography: A Retrospective Diagnostic Study
ABSTRACT
Background:
Mammography is a key imaging modality for breast cancer screening and diagnosis, with the Breast Imaging Reporting and Data System (BI-RADS) providing standardized risk stratification. However, BI-RADS category 4 lesions pose a diagnostic challenge due to their wide malignancy probability range and significant overlap between benign and malignant findings. Moreover, current interpretations rely heavily on radiologists’ expertise, leading to variability and potential diagnostic errors. Recent advances in large language models (LLMs), such as ChatGPT-4o, Claude 3, and DeepSeek, offer new possibilities for automated medical report interpretation. This study investigates the feasibility of using LLMs to assist in subcategorizing BI-RADS category 4 lesions and predicting malignancy based on free-text mammography reports.
Objective:
This study aims to explore the feasibility of LLMs in evaluating the benign or malignant subcategories of BI-RADS category 4 lesions based on free-text mammography reports.
Methods:
A retrospective analysis was conducted on BI-RADS category 4 mammography reports written in Chinese between May 2021 and March 2024. Junior and senior radiologists, as well as three LLMs (ChatGPT-4o, Claude 3-Opus, and DeepSeek), were asked to assign specific subcategories (4A, 4B, 4C) based on the original imaging descriptions. With pathology used as the reference standard, the reproducibility of the LLMs' predictions was assessed, the diagnostic performance of the radiologists and LLMs was compared, and the internal reasoning behind LLM misclassifications was analyzed.
Results:
A total of 307 patients (mean age: 47.25 ± 11.39 years) were included in the study. Among the LLMs, ChatGPT-4o showed higher reproducibility than DeepSeek and Claude 3-Opus (0.850 vs. 0.824 and 0.732, respectively; P<.01). Although the prediction accuracy of the LLMs was lower than that of the radiologists (senior: 74.5%, junior: 72.0%, DeepSeek: 63.5%, ChatGPT-4o: 62.4%, Claude 3-Opus: 60.8%), their sensitivity was higher (senior: 80.7%, junior: 68.0%, DeepSeek: 84.0%, ChatGPT-4o: 84.7%, Claude 3-Opus: 92.7%). The AUC of DeepSeek was 0.64 (95% CI 0.57-0.70), slightly higher than that of ChatGPT-4o (0.62, 95% CI 0.56-0.69) and Claude 3-Opus (0.61, 95% CI 0.54-0.67).
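The sensitivity and AUC figures above can be computed directly from pathology-confirmed labels and model outputs. A minimal pure-Python sketch, using illustrative data (not the study's data) and an assumed mapping of 4A/4B/4C subcategories to malignancy scores:

```python
# Hedged sketch: computing sensitivity and a rank-based AUC for a binary
# malignancy classifier. All data below are illustrative placeholders.

def sensitivity(y_true, y_pred):
    """True-positive rate: fraction of malignant cases (1) correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

def auc(y_true, scores):
    """Rank-based AUC (Mann-Whitney U / (n_pos * n_neg)): probability that a
    randomly chosen malignant case scores higher than a benign one, with
    ties counted as 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative cohort: 1 = malignant on pathology; scores = a model's
# malignancy probability (e.g., 4A/4B/4C mapped to increasing scores).
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # assumed 0.5 cutoff

print(sensitivity(y_true, y_pred))  # ≈ 0.667 (2 of 3 malignant cases flagged)
print(auc(y_true, scores))          # ≈ 0.889
```

The same quantities are what standard packages (e.g., scikit-learn's `roc_auc_score`) return; the pure-Python form is shown only to make the definitions explicit.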
Conclusions:
LLMs are feasible for distinguishing between benign and malignant lesions in BI-RADS category 4, with good stability and high sensitivity. They show potential in screening and may assist radiologists in reducing missed diagnoses.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.