Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Apr 6, 2021
Date Accepted: Jul 27, 2021
The Impact of Explanations on Layperson Trust in AI-Driven Symptom Checker Applications: An Experimental Study
ABSTRACT
Background:
AI-driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver healthcare more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind ‘black box’ decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can take many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople.
Objective:
To ascertain whether explanations provided by a symptom checker affect layperson explanatory trust, and whether this trust is influenced by existing knowledge of the disease.
Methods:
A cross-sectional survey of 750 healthy participants was conducted. Participants were shown a video of a chatbot simulation that resulted in a diagnosis of either migraine or temporal arteritis, diseases chosen for their differing epidemiological prevalence. These diagnoses were accompanied by one of four types of explanation, each selected either because it is currently used in symptom checkers or because it is informed by theories of contrastive explanation. Exploratory factor analysis of participants’ responses, followed by comparison-of-means tests, was used to evaluate group differences in trust.
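For readers unfamiliar with this analysis pipeline, the following is a minimal sketch in Python of an exploratory factor analysis followed by a comparison-of-means test. The item names, group labels, and synthetic Likert-scale data are hypothetical placeholders for illustration only; they are not the study’s actual instrument or results.

```python
# Sketch: exploratory factor analysis (EFA) over Likert-scale trust items,
# then a comparison-of-means (one-way ANOVA) across explanation-type groups.
# All data below are randomly generated placeholders, not study data.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey: 750 participants, 6 Likert items (1-7), 4 explanation groups.
n = 750
groups = rng.choice(
    ["InputInfluence", "SocialProof", "Counterfactual", "NoExplanation"], size=n
)
items = pd.DataFrame(
    rng.integers(1, 8, size=(n, 6)),
    columns=[f"trust_item_{i}" for i in range(1, 7)],
)

# Exploratory factor analysis: extract two latent factors from the items.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(items)  # per-participant factor scores
loadings = pd.DataFrame(
    fa.components_.T, index=items.columns, columns=["factor_1", "factor_2"]
)
print(loadings.round(2))

# Comparison of means: one-way ANOVA on the first factor score across groups.
factor_1 = pd.Series(scores[:, 0], name="factor_1")
samples = [factor_1[groups == g] for g in np.unique(groups)]
f_stat, p_value = stats.f_oneway(*samples)
print(f"ANOVA on factor 1: F={f_stat:.2f}, P={p_value:.3f}")
```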
Results:
Two to three variables were generated, depending on the treatment group, reflecting the prior knowledge and subsequent mental models participants held. When explanation type was varied within each disease, differences in trust were nonsignificant for migraine (P=.65) and marginal for temporal arteritis (P=.086). When disease was varied within each explanation type, significant differences in trust were found for Input Influence (P=.001), Social Proof (P=.049), and No Explanation (P=.006), with Counterfactual marginal (P=.053). The results suggest that trust in explanations is significantly impacted by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that, to be successful, symptom checkers need to tailor explanations to each user’s specific question and discount other diseases the user may be aware of.
Conclusions:
System builders developing explanations for symptom checking applications should consider the recipient’s knowledge of a disease and tailor explanations to that recipient’s specific needs. Effort should be directed toward generating explanations individualized to each user of a symptom checker, fully discounting diseases the user may already be aware of and closing their information gap.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.