Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Dec 18, 2020
Date Accepted: Nov 11, 2021
Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: A Choice-Based Conjoint Survey
ABSTRACT
Background:
Certain types of Artificial Intelligence (AI), such as deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Various stakeholders have emphasized the importance of the transparency and explainability of AI decision-making, yet transparency and explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interest in high performance against the societal interest in transparency and explainability. Such a policy should take into account the wider public's interests in these features of AI.
Objective:
To elicit the population's preferences for the performance and explainability of AI decision-making in health care, and to determine whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI.
Methods:
We conducted a choice-based conjoint survey of population preferences for attributes of AI decision-making in health care. Initial focus group interviews yielded six attributes that shaped respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. One hundred unique choice sets were developed using a fractional factorial design; an illustrative sketch of such a design follows below. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to three different scenarios.
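For readers unfamiliar with conjoint designs, the following is a minimal, hypothetical Python sketch of how choice sets might be assembled from attribute levels. The attribute names and levels are assumptions for demonstration and do not reproduce the study's instrument; real fractional factorial designs typically use orthogonal or statistically efficient arrays rather than the random sampling used here for brevity.

```python
import itertools
import random

# Hypothetical attribute levels, for illustration only; the study's
# exact levels are not reproduced here.
attributes = {
    "decision_type": ["diagnosis", "treatment planning"],
    "explanation_level": ["none", "partial", "full"],
    "accuracy": ["equal to doctors", "better than doctors"],
    "final_responsibility": ["doctor", "AI system"],
    "discrimination_tested": ["yes", "no"],
    "disease_severity": ["mild", "moderate", "severe"],
}

# Full factorial: every combination of levels (2*3*2*2*2*3 = 144 profiles).
full_factorial = list(itertools.product(*attributes.values()))

# Fractional design: present only a subset of unique profiles
# (random sampling stands in for an efficient design algorithm).
random.seed(42)
profiles = random.sample(full_factorial, k=100)

# Bundle profiles into choice sets of three alternatives each, roughly
# mirroring a task in which respondents compare three scenarios.
choice_sets = [profiles[i:i + 3] for i in range(0, len(profiles) - 2, 3)]
print(f"{len(profiles)} profiles -> {len(choice_sets)} choice sets")
```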
Results:
Of the 1678 potential respondents, 61.2% participated. Respondents consider it most important that the doctor has the final responsibility for treatment decisions; this attribute carried 46.8% of the total weight of attributes, followed by the explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Although gender, age, level of education, whether respondents live rurally or in towns, trust in health and technology, and fears and hopes regarding AI do influence the importance allocated to the different attributes, they do not play a significant role in the majority of cases.
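For context, attribute importance percentages such as those reported above are conventionally derived in conjoint analysis from the range of each attribute's estimated part-worth utilities. A minimal sketch of that arithmetic follows; the utility values are invented for illustration and do not reproduce the study's estimates.

```python
# Conventional conjoint computation: an attribute's relative importance is
# the range of its part-worth utilities divided by the sum of all ranges.
# Utilities below are invented for illustration only.
part_worths = {
    "final_responsibility": [0.0, 1.9],
    "explanation_level": [0.0, 0.5, 1.1],
    "discrimination_tested": [0.0, 0.6],
}

ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
total = sum(ranges.values())
for attr, r in ranges.items():
    print(f"{attr}: {100 * r / total:.1f}%")
```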
Conclusions:
If the performance of AI systems in health care is on a par with that of doctors, it is of greater importance to the public that doctors retain the final responsibility for diagnostics and treatment planning, that the AI decision support is explainable, and that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to such AI system use and ensure that patients are informed accordingly. Clinical Trial: N/A
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license upon publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.