How Explainable Artificial Intelligence Can Increase or Decrease Clinicians’ Trust in AI Applications in Healthcare: A Systematic Review
ABSTRACT
Background:
Artificial intelligence (AI) offers tremendous potential for clinical use, but because it often operates, in effect, as a “black box”, it may not be trusted by clinicians. This has spurred the development of explainable AI (XAI), in which AI predictions are accompanied by explanations of how the algorithms reached their conclusions. However, it remains unclear whether XAI improves trust and intention to use among clinicians.
Objective:
We performed a systematic review of the empirical evidence on the impact of XAI on clinicians’ trust.
Methods:
We searched the PubMed and Web of Science databases following PRISMA guidelines. Studies were included if they empirically measured the impact of XAI on clinicians’ trust, using either cognition- or affect-based measures. Of 778 articles screened, 10 met these criteria.
Results:
All 10 papers drew on a cognition-based definition of trust, and two of them additionally included an affect-based definition. Five articles showed that XAI increased clinicians’ trust compared with AI alone. Three studies observed no impact, whereas two reported that XAI could both increase and decrease trust, that is, optimise the trust level, depending on the design, explanation technique, or level of abstraction used, showing that trust can be deliberately modified.
Conclusions:
Seven of the 10 studies (70%) showed that XAI increased clinicians’ trust in and intention to use AI, and two of those also showed that explanations could decrease trust. The remaining studies found no effect. When explanations aligned with clinicians’ own judgment, this could lead to overreliance on AI, whereas complex or contradictory explanations appeared to erode trust and thereby hinder adoption. Future research is needed to better evaluate affect-based trust measures, as well as to identify how best to avoid blind trust or blind distrust so as to optimise clinicians’ trust.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.