Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jul 7, 2025
Open Peer Review Period: Jul 21, 2025 - Sep 15, 2025
Date Accepted: Feb 11, 2026
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
PECertainty: Data-Efficient Language Model for Assessing Pulmonary Embolism Diagnostic Certainty in Radiology Reports
ABSTRACT
Background:
Computed tomography pulmonary angiography (CTPA) is the standard imaging modality for diagnosing pulmonary embolism (PE), but diagnostic uncertainty is common due to technical limitations and vague language, leading to inconsistent interpretation and clinician frustration.
Objective:
This study develops a prompt-free, data-efficient method for assessing PE diagnostic certainty in CTPA reports using small pre-trained language models.
Methods:
This study examined 173 consecutive CTPA reports from UMass Memorial Health, each annotated by three radiologists for PE diagnostic certainty. We developed PECertainty, a lightweight, prompt-free model, and compared it with advanced large language model (LLM)-based methods under limited-supervision settings. Baselines included prompt-free methods (SVM, Random Forest, RoBERTa) and prompt-dependent methods (LLM fine-tuning, in-context learning, ADAPET). Sensitivity analyses assessed performance under varying training data sizes. Model performance was evaluated against radiologist annotations. External validation was conducted on 420 CTPA reports from Baystate Medical Center. We further examined interpretability using integrated gradients and prompt-based explanations for the top performers (PECertainty, GPT-3.5).
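The study reports each model's F1 score with a 95% confidence interval. As an illustrative sketch only (the authors' evaluation code is not shown here, and the exact CI procedure is an assumption), one common way to obtain such an interval is a percentile bootstrap over report-level resamples:

```python
import random

def f1_binary(y_true, y_pred):
    """F1 score for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def bootstrap_f1_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate plus percentile-bootstrap 95% CI, resampling
    reports with replacement (a standard choice; the paper may differ)."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(f1_binary([y_true[i] for i in idx],
                                [y_pred[i] for i in idx]))
    scores.sort()
    lo = scores[int((alpha / 2) * n_boot)]
    hi = scores[min(n_boot - 1, int((1 - alpha / 2) * n_boot))]
    return f1_binary(y_true, y_pred), (lo, hi)
```

With small test sets like the ones here (173 reports), bootstrap intervals are wide, which is consistent with the broad CIs reported in the Results.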
Results:
Among prompt-dependent methods, GPT-3.5 fine-tuning (F1, 0.86; 95% CI: 0.71-1.0) and in-context learning (F1, 0.87; 95% CI: 0.71-1.0) performed best. Among prompt-free methods, PECertainty (F1, 0.92; 95% CI: 0.79-1.0) outperformed the others by a substantial margin while matching the best prompt-dependent methods. RoBERTa fine-tuning lagged (F1, 0.52; 95% CI: 0.35-0.71), and simple models such as SVM and Random Forest underperformed. In few-shot settings (10 examples per category), PECertainty (F1, 0.80; 95% CI: 0.59-0.94) outperformed GPT-3.5 fine-tuning (F1, 0.74; 95% CI: 0.58-0.88) and in-context learning (F1, 0.65; 95% CI: 0.47-0.83). Although a robust performer, PECertainty fell short of fine-tuned GPT-3.5 in model interpretability, based on radiologists' preferences.
Conclusions:
PECertainty is an efficient, open-source alternative to proprietary LLMs for assessing PE diagnostic certainty in low-resource settings, though interpretability remains an area for improvement.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.