Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jul 28, 2021
Open Peer Review Period: Jul 16, 2021 - Aug 5, 2021
Date Accepted: Nov 20, 2021
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Commercial Voice Assistants’ Answers to Health-related Questions in Noncommunicable Disease Management: Factorial Experiment Assessing Response Rate and Source of Information
ABSTRACT
Background:
Complications related to noncommunicable diseases are among the main causes of mortality. Fostering patients’ access to health-related information through efficient and accessible channels, such as commercial voice assistants (CVAs) like Amazon Alexa, Apple Siri, or Google Assistant, may support patients’ ability to make health-related decisions and manage their chronic conditions.
Objective:
This study aims to evaluate the ability of CVAs to provide expertise-based voice responses to questions related to noncommunicable disease management.
Methods:
We collected health-related frequently asked questions from health organization, government, medical nonprofit, and popular websites about Alzheimer’s disease (AD), lung cancer (LC), chronic obstructive pulmonary disease (COPD), diabetes mellitus (DM), cardiovascular disease (CVD), kidney disease (KD), and cerebrovascular disease (CD). The questions were validated with practicing medical specialists, and the most frequent ones were selected, yielding a pool of 144 questions. We submitted the selected questions to the CVAs in a 3×3 fractional factorial experiment crossing three developers (ie, Amazon, Apple, and Google) with three modalities (ie, voice-only, voice-and-display, and display-only). The Google display-only condition was operationalized with Google Search (our gold standard of information lookup). We assessed whether each CVA provided a voice response (ie, response rate) and what type of web source it used (ie, Expert, Commercial, Crowdsourced, or Not stated).
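As an illustrative sketch of the design, the experimental cells can be enumerated programmatically. This assumes (the abstract does not state it explicitly) that the fractional design kept both voice modalities for every developer and operationalized display-only only through Google Search; the variable names are illustrative, not from the study.

```python
from itertools import product

developers = ["Amazon", "Apple", "Google"]
modalities = ["voice-only", "voice-and-display", "display-only"]

# Assumption: the fraction drops the display-only cells for Amazon and
# Apple, since display-only was operationalized via Google Search alone.
conditions = [
    (dev, mod)
    for dev, mod in product(developers, modalities)
    if mod != "display-only" or dev == "Google"
]

n_questions = 144  # pool of validated frequently asked questions
print(len(conditions))                 # experimental cells retained: 7
print(len(conditions) * n_questions)   # total question submissions: 1008
```

Under this assumption, 7 of the 9 cells of the full 3×3 design are retained, giving 1008 question submissions overall.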
Results:
Amazon and Google showed slightly higher voice response rates in voice-only (76.4% and 97.2%, respectively) than in voice-and-display (74% and 92.4%, respectively), whereas Apple showed the opposite pattern (16% voice-only, 16.7% voice-and-display). Source type was mostly Expert for Amazon (77.3% voice-only, 76.6% voice-and-display) and Google (70.7% voice-only, 73.7% voice-and-display). Apple mostly used Commercial (30.4% voice-only, 29.2% voice-and-display) or Crowdsourced (21.7% voice-only, 33.3% voice-and-display) sources, or stated no source (39.1% voice-only, 29.2% voice-and-display). Moreover, Amazon showed the highest response rate for LC (88%), Apple for COPD (20%), and Google for AD (100%). Amazon and Google always used Expert sources for AD, whereas Apple never did; Apple used Expert sources most often for CD (50%).
Conclusions:
None of the tested CVAs was unequivocally the best at answering questions about noncommunicable disease management, and their performance appears to vary with the disease in question. We urge health organizations to collaborate with Google, Amazon, and Apple to enable their CVAs to consistently provide reliable answers to health-related questions on noncommunicable disease management.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.