Currently submitted to: JMIR Formative Research
Date Submitted: Feb 11, 2026
Open Peer Review Period: Feb 12, 2026 - Apr 9, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Enhancing Clinical Trust in Diabetes Prediction: A Multi-Directional Counterfactual and SHAP-based Decision Support Model
ABSTRACT
Background:
Despite the high accuracy of machine learning models in predicting diabetes, clinical adoption remains limited because of the "black-box" nature of advanced algorithms. In regional healthcare contexts such as Ethiopia, fostering clinician trust is essential to the successful implementation of AI-driven tools.
Objective:
This study aims to develop a trustworthy Clinical Decision Support System (CDSS) for diabetes prediction by operationalizing the Asan et al. (2020) trust framework, which centers on Ability, Integrity, and Benevolence, through the integration of Explainable AI (XAI) techniques.
Methods:
A multinational dataset of clinical biomarkers was used. To ensure model Ability, we employed a robust preprocessing pipeline including standardization and SMOTE (Synthetic Minority Over-sampling Technique) for class balancing. Five architectures were compared: Logistic Regression, Random Forest (RF), Gradient Boosting, XGBoost, and LightGBM. To establish Integrity, we used SHAP (SHapley Additive exPlanations) for global and local transparency. To demonstrate Benevolence, we applied Diverse Counterfactual Explanations (DiCE), grounded in Miller's (2017) theory of contrastive explanation.
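The abstract does not include code, so the following is a minimal sketch of how such a preprocessing and model-comparison pipeline is typically assembled with scikit-learn and imbalanced-learn. The dataset file, column names (e.g. "diabetes" as the binary outcome), and hyperparameters are illustrative assumptions, not the study's actual configuration.

```python
# Hedged sketch: standardization + SMOTE preprocessing and comparison of the
# five architectures named in Methods. File path and column names are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import f1_score, roc_auc_score
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

df = pd.read_csv("diabetes.csv")                      # hypothetical dataset file
X, y = df.drop(columns=["diabetes"]), df["diabetes"]  # assumed outcome column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Standardize features, then rebalance with SMOTE on the training split only,
# so that synthetic samples never leak into the evaluation data.
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train_s, y_train)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "LightGBM": LGBMClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_bal, y_bal)
    proba = model.predict_proba(X_test_s)[:, 1]
    preds = model.predict(X_test_s)
    print(f"{name}: AUC={roc_auc_score(y_test, proba):.3f}, "
          f"macro-F1={f1_score(y_test, preds, average='macro'):.3f}")
```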
Results:
The Random Forest model emerged as the superior architecture, achieving high macro-averaged AUC and F1-scores. SHAP global analysis validated model integrity by identifying HbA1c, Age, and BMI as the primary diagnostic drivers, in line with international clinical guidelines. DiCE generated patient-specific "what-if" scenarios, providing clinicians with actionable targets for lifestyle intervention. Preliminary evaluation suggests that providing both "why" (SHAP) and "how to change" (DiCE) explanations significantly enhances perceived clinician trust compared with standard accuracy-only outputs. The final model was deployed as an interactive Streamlit application.
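The abstract does not specify the exact SHAP or DiCE configuration, so the sketch below shows how the two explanation layers are commonly wired together with the `shap` and `dice-ml` libraries, continuing from the previous snippet. Column names (`hba1c`, `bmi`, `age`, `diabetes`) remain illustrative assumptions; refitting the Random Forest on the raw feature scale is a simplification so that counterfactuals read in clinically meaningful units.

```python
# Hedged sketch of the "why" (SHAP) and "how to change" (DiCE) layers;
# assumes X_train, X_test, y_train, and df from the previous sketch.
import shap
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Refit on raw (unscaled) features so counterfactual values are interpretable.
rf = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# "Why": SHAP attributions showing which biomarkers drove the predictions.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)
if isinstance(shap_values, list):        # older shap: one array per class
    shap_values = shap_values[1]
elif shap_values.ndim == 3:              # newer shap: (samples, features, classes)
    shap_values = shap_values[:, :, 1]
shap.summary_plot(shap_values, X_test)   # global beeswarm view

# "How to change": DiCE counterfactuals for a single patient record.
data = dice_ml.Data(
    dataframe=df,
    continuous_features=["hba1c", "bmi", "age"],  # assumed column names
    outcome_name="diabetes",
)
wrapped = dice_ml.Model(model=rf, backend="sklearn")
dice = dice_ml.Dice(data, wrapped, method="random")
query = X_test.iloc[[0]]                 # one patient's feature vector
cfs = dice.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)  # actionable "what-if" targets
```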
Conclusions:
Integrating SHAP and counterfactual analysis transforms predictive AI into a prescriptive clinical partner. By providing actionable insights that clinicians can relate to their medical knowledge, this integration fosters a culture of XAI scrutiny among clinicians, paving the way for more transparent and patient-centered digital health interventions.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.