
Accepted for/Published in: JMIR AI

Date Submitted: Sep 29, 2023
Date Accepted: Sep 17, 2024

The final, peer-reviewed published version of this preprint can be found here:

How Explainable Artificial Intelligence Can Increase or Decrease Clinicians’ Trust in AI Applications in Health Care: Systematic Review

Rosenbacke R, Melhus Å, McKee M, Stuckler D

How Explainable Artificial Intelligence Can Increase or Decrease Clinicians’ Trust in AI Applications in Health Care: Systematic Review

JMIR AI 2024;3:e53207

DOI: 10.2196/53207

PMID: 39476365

PMCID: 11561425

How Explainable Artificial Intelligence Can Increase or Decrease Clinicians’ Trust in AI Applications in Healthcare: a Systematic Review

  • Rikard Rosenbacke; 
  • Åsa Melhus; 
  • Martin McKee; 
  • David Stuckler

ABSTRACT

Background:

Artificial intelligence (AI) offers tremendous potential for clinical use, but because it operates, in effect, as a “black box,” clinicians may not trust it. This has spurred the development of explainable AI (XAI), in which AI predictions are accompanied by explanations of how the algorithms reached their conclusions. However, it is unclear whether XAI actually improves trust in, and intention to use, AI among physicians.

Objective:

We performed a systematic review of the empirical evidence on the impact of XAI on clinicians’ trust.

Methods:

We searched the PubMed and Web of Science databases following PRISMA guidelines. Studies were included if they empirically measured the impact of XAI on clinicians’ trust using either cognition- or affect-based measures. Of 778 articles screened, 10 met these criteria.

Results:

All 10 papers drew on a cognition-based definition of trust; two additionally included an affect-based definition. Five articles showed that XAI increased clinicians’ trust compared with AI alone. In three studies, no impact was observed, whereas two studies reported that XAI could both increase and reduce trust, that is, optimise the trust level, through different designs, explanation techniques, or abstraction levels, showing that trust can be effectively modulated.

Conclusions:

Seventy percent of the studies (7/10) showed that XAI increased clinicians’ trust and intention to use AI, and in two of those, explanations could also decrease trust. The remaining studies found no effect. When explanations aligned with clinicians’ judgment, this could lead to overreliance on AI, whereas complex or contradictory explanations appeared to hinder use by eroding trust. Future research is needed to better evaluate affect-based trust measures and to identify how best to avoid blind trust or blind distrust so as to optimise physicians’ trust.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.