Currently submitted to: JMIR Preprints

Date Submitted: May 8, 2026
Open Peer Review Period: May 8, 2026 - Apr 23, 2027
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Decision-Calibrated Explainable AI for Reliability-Aware Clinical Predictions: A Stability-Based Framework

  • Adeyinka Moyinoluwa Adejumobi
  • Fatimo Adenike Adeniya

ABSTRACT

Background:

Reliable deployment of machine learning systems in healthcare requires mechanisms for determining whether individual predictions can be trusted. Conventional confidence-based approaches often fail to capture underlying uncertainty, particularly in high-capacity models where predictions may remain highly confident despite unstable reasoning.

Objective:

This study proposes a Decision-Calibrated Explainable AI (DC-XAI) framework for evaluating prediction reliability using stability-based signals derived from both model outputs and feature attribution explanations.

Methods:

The proposed DC-XAI framework integrates two complementary reliability signals: prediction stability under stochastic perturbations and explanation stability measured by feature-attribution consistency. These signals are combined into a three-tier decision system consisting of ACCEPT, ACCEPT WITH CAUTION, and DEFER categories to support reliability-aware clinical decision-making. The framework was evaluated using the MIMIC-IV critical care dataset for in-hospital mortality prediction. Evaluation was conducted using a two-level strategy, comprising a global performance assessment on the full test set (n = 13,074) and a perturbation-based stability analysis on a representative subset (n = 1,000). Logistic Regression, XGBoost, and Multi-Layer Perceptron (MLP) architectures were compared.
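The two reliability signals and the three-tier triage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy logistic model, the gradient-style attribution, the noise scale, and the thresholds `t_hi` and `t_lo` are all assumptions introduced here for clarity; the paper's actual stability metrics and cut-offs may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(x, w):
    # Toy logistic model standing in for a trained clinical classifier.
    return 1.0 / (1.0 + np.exp(-x @ w))

def attribution(x, w):
    # Simple gradient-style attribution: per-feature contribution to the logit.
    return x * w

def stability_signals(x, w, n_perturb=30, noise=0.05):
    """Estimate prediction and explanation stability under input perturbations."""
    probs, attrs = [], []
    for _ in range(n_perturb):
        x_p = x + rng.normal(0.0, noise, size=x.shape)
        probs.append(predict_proba(x_p, w))
        attrs.append(attribution(x_p, w))
    labels = (np.array(probs) >= 0.5).astype(int)
    # Prediction stability: fraction of perturbed runs agreeing on the majority label.
    pred_stab = max(labels.mean(), 1 - labels.mean())
    # Explanation stability: mean pairwise cosine similarity of attribution vectors.
    A = np.array(attrs)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    sims = A @ A.T
    expl_stab = sims[np.triu_indices(n_perturb, k=1)].mean()
    return pred_stab, expl_stab

def triage(pred_stab, expl_stab, t_hi=0.95, t_lo=0.80):
    # Three-tier decision rule combining both stability signals.
    if pred_stab >= t_hi and expl_stab >= t_hi:
        return "ACCEPT"
    if pred_stab >= t_lo:
        return "ACCEPT WITH CAUTION"
    return "DEFER"

x = np.array([0.4, -1.2, 0.9, 0.3])   # one patient's feature vector (illustrative)
w = np.array([1.5, -0.8, 0.6, 2.0])   # fitted weights (illustrative)
print(triage(*stability_signals(x, w)))
```

In a realistic pipeline, `attribution` would be replaced by a proper feature-attribution method (e.g. SHAP values), and the thresholds would be calibrated per architecture, which is precisely the architecture-dependence the abstract emphasizes.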

Results:

The results revealed a significant Stability–Accuracy Gap across model architectures, demonstrating that predictive performance alone does not reliably reflect prediction trustworthiness. Logistic Regression exhibited a strong monotonic relationship between stability and accuracy, whereas XGBoost demonstrated brittle stability, maintaining stable predictions despite incorrect outputs. The MLP exhibited non-monotonic stability behaviour, where instability in feature attribution did not consistently correspond to prediction failure. These findings indicate that the relationship between stability and reliability is architecture-dependent and that explanation stability alone is insufficient as a universal trust signal.

Conclusions:

The proposed DC-XAI framework provides a practical mechanism for reliability-aware clinical AI deployment by integrating prediction stability and explanation consistency into a triage-based decision process. The findings challenge the assumption that stability is a universal proxy for reliability and highlight the need for architecture-aware trust calibration in safety-critical healthcare AI systems.


Citation

Please cite as:

Adejumobi AM, Adeniya FA

Decision-Calibrated Explainable AI for Reliability-Aware Clinical Predictions: A Stability-Based Framework

JMIR Preprints. 08/05/2026:100751

DOI: 10.2196/preprints.100751

URL: https://preprints.jmir.org/preprint/100751


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.