Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: May 12, 2025
Date Accepted: Sep 22, 2025
Large language models for rare hematologic disease diagnosis: retrospective performance and prospective impact on physicians
ABSTRACT
Background:
Rare hematologic diseases are frequently underdiagnosed or misdiagnosed due to their clinical complexity. Whether new-generation large language models (LLMs), particularly those employing chain-of-thought (CoT) reasoning, can improve diagnostic accuracy remains unclear.
Objective:
To evaluate the diagnostic performance of new-generation commercial LLMs in rare hematologic diseases and to determine whether LLM output enhances physicians’ diagnostic accuracy.
Methods:
We conducted a two-phase study. In the retrospective phase, we evaluated seven mainstream LLMs on 158 non-public real-world admission records covering nine rare hematologic diseases, assessing diagnostic performance using Top-10 accuracy and mean reciprocal rank (MRR), and evaluating ranking stability via Jaccard similarity and entropy. Spearman’s rank correlation was used to examine the association between physicians’ diagnoses and LLM-generated outputs. In the prospective phase, 28 physicians with varying levels of experience diagnosed five cases each, gaining access to LLM-generated diagnoses across three sequential steps to assess whether LLMs can improve diagnostic accuracy.
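The retrospective-phase metrics named above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function names and the toy data are assumptions, and only the metric definitions (fraction of cases with the true diagnosis in the top-k list, mean of 1/rank, and set overlap of repeated ranked lists) come from the text.

```python
def top_k_accuracy(ranked_lists, truths, k=10):
    """Fraction of cases whose true diagnosis appears in the model's top-k list."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

def mean_reciprocal_rank(ranked_lists, truths):
    """Mean of 1/rank of the true diagnosis; contributes 0 when it is absent."""
    total = 0.0
    for ranked, truth in zip(ranked_lists, truths):
        if truth in ranked:
            total += 1.0 / (ranked.index(truth) + 1)
    return total / len(truths)

def jaccard_similarity(list_a, list_b):
    """Order-insensitive overlap of two ranked lists: |A ∩ B| / |A ∪ B|."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

# Toy example: two hypothetical cases, each with one model-generated differential.
runs = [["POEMS syndrome", "multiple myeloma", "AL amyloidosis"],
        ["Castleman disease", "lymphoma"]]
truths = ["AL amyloidosis", "Erdheim-Chester disease"]
print(top_k_accuracy(runs, truths, k=10))   # 0.5 (truth found only in case 1)
print(mean_reciprocal_rank(runs, truths))   # 1/6 ≈ 0.167 (rank 3 in case 1, miss in case 2)
```

Ranking stability in the study compares the same case's ranked lists across repeated runs, so Jaccard similarity would be applied to pairs of outputs for one case rather than across cases.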
Results:
In the retrospective phase, ChatGPT-o1-preview demonstrated the highest Top-10 accuracy (70.3%) and MRR (0.577), achieving performance comparable to that of human physicians. DeepSeek-R1 ranked second. Diagnostic performance was low for AL amyloidosis, Castleman disease, Erdheim-Chester disease, and POEMS syndrome. Interestingly, higher accuracy often correlated with lower ranking stability across most LLMs. Physician performance correlated strongly with both Top-10 accuracy (ρ = 0.565) and MRR (ρ = 0.650). In the prospective phase, LLMs significantly improved the diagnostic accuracy of less-experienced physicians, raising their performance to specialist levels; no significant benefit was observed for specialists. However, when LLMs generated biased responses, physician performance often failed to improve or even declined.
Conclusions:
Without fine-tuning, new-generation commercial LLMs can identify correct diagnoses for rare hematologic diseases with accuracy comparable to that of physicians and can elevate the diagnostic performance of less-experienced physicians to specialist levels. Nevertheless, biased LLM outputs may mislead clinicians, highlighting the need for critical appraisal and cautious clinical integration. Clinical Trial: Chinese Clinical Trial Registry Identifier: ChiCTR2400089959.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.