
Accepted for/Published in: JMIR Human Factors

Date Submitted: Jul 30, 2025
Open Peer Review Period: Aug 1, 2025 - Sep 26, 2025
Date Accepted: Feb 19, 2026

The final, peer-reviewed published version of this preprint can be found here:

The Role of Explanations in AI-Generated Alerts: Qualitative Study of Clinical Views on Explainable AI in Predictive Tools

Rahman J, Delaforce A, Bradford D, Li J, Magrabi F, Cook D, Brankovic A

JMIR Hum Factors 2026;13:e81460

DOI: 10.2196/81460

PMID: 42066251

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

When Explanations Differ: A Qualitative Study of Clinical Views on Explainable AI (XAI) Methods in Healthcare

  • Jessica Rahman
  • Alana Delaforce
  • Dana Bradford
  • Jane Li
  • Farah Magrabi
  • David Cook
  • Aida Brankovic

ABSTRACT

Background:

AI-driven clinical decision support (CDS) tools offer promising solutions for healthcare delivery by optimising resource allocation, detecting deterioration, and enabling early interventions. However, adoption remains limited due to insufficient validation and a lack of transparency. eXplainable AI (XAI) provides users with insights into AI-driven recommendations, but discrepancies between the explanations produced by different XAI methods, known as the "Disagreement Problem", can undermine trust and, at worst, lead to poor clinical decisions.
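
To make the "Disagreement Problem" concrete, the sketch below (ours, not part of the study; it assumes scikit-learn and uses synthetic data, with all names and parameters chosen purely for illustration) fits a single classifier and derives two standard explanations for it, whose feature rankings can differ even though the underlying model and its predictions are identical:

    # Minimal sketch of the "Disagreement Problem": two common explanation
    # methods applied to the same model can rank features differently.
    # Illustrative only: synthetic data, not the study's tooling.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic task with redundant (correlated) features, a setting in
    # which explanation methods are known to diverge.
    X, y = make_classification(n_samples=500, n_features=6,
                               n_informative=3, n_redundant=2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Explanation 1: impurity-based importances (from the training process).
    imp_impurity = model.feature_importances_
    # Explanation 2: permutation importances (measured on held-out data).
    imp_perm = permutation_importance(model, X_te, y_te,
                                      random_state=0).importances_mean

    # The two rankings need not agree for the same model and data.
    print("impurity ranking:   ", np.argsort(imp_impurity)[::-1])
    print("permutation ranking:", np.argsort(imp_perm)[::-1])

Which ranking a user sees thus depends on the XAI method chosen, not on any change in the model's behaviour; this is the kind of discrepancy clinicians in the study were asked to reason about.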

Objective:

This study explores the perspectives of clinicians from Australian critical care settings on XAI and the impact of discrepancies in AI-generated explanations on decision-making.

Methods:

Qualitative data were collected through semi-structured interviews with 14 clinical experts, which incorporated scenario-based exercises, and were analysed using inductive thematic analysis.

Results:

Key factors influencing trust in XAI were identified, and the role of explainability in AI-driven tools was highlighted. Explainability was considered valuable, especially in unfamiliar situations or complex decisions, if the explanations were clear, plausible, and actionable. Discrepancies in explanations generated by different XAI methods were not the primary concern for clinicians, provided the AI's prediction was accurate and the explanations offered actionable insights that aligned with their mental model.

Conclusions:

This study identified design recommendations and implementation strategies for developing trustworthy, user-centric XAI-supported CDS tools. It also highlights that discrepancies between explanations are not inherently problematic, provided the explanations are consistent with clinicians' reasoning. The recommendations underscore the importance of aligning the design and implementation of AI tools with clinicians' needs to enhance trust, mitigate risks, and promote successful adoption for improved patient outcomes.


Citation

Please cite as:

Rahman J, Delaforce A, Bradford D, Li J, Magrabi F, Cook D, Brankovic A

The Role of Explanations in AI-Generated Alerts: Qualitative Study of Clinical Views on Explainable AI in Predictive Tools

JMIR Hum Factors 2026;13:e81460

DOI: 10.2196/81460

PMID: 42066251


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.