Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Apr 1, 2020
Date Accepted: Jul 2, 2020

The final, peer-reviewed published version of this preprint can be found here:

Evaluating Smart Assistant Responses for Accuracy and Misinformation Regarding Human Papillomavirus Vaccination: Content Analysis Study

Ferrand J, Hockensmith R, Houghton RF, Walsh-Buhi E

Evaluating Smart Assistant Responses for Accuracy and Misinformation Regarding Human Papillomavirus Vaccination: Content Analysis Study

J Med Internet Res 2020;22(8):e19018

DOI: 10.2196/19018

PMID: 32744508

PMCID: 7432152

“Alexa, does the HPV vaccine cause autism?”: Evaluating smart assistant responses for accuracy and misinformation

  • John Ferrand; 
  • Ryli Hockensmith; 
  • Rebecca Fagen Houghton; 
  • Eric Walsh-Buhi

ABSTRACT

Background:

Almost half (46%) of Americans have used a smart assistant (SA) of some kind (e.g., Apple’s Siri) and 25% have used a stand-alone SA (e.g., Amazon Echo). This positions SAs as potentially useful modalities for retrieving health-related information; however, the accuracy of SA responses lacks rigorous evaluation.

Objective:

We evaluated the levels of accuracy, misinformation, and sentiment provided by SAs in response to human papillomavirus (HPV) vaccination-related questions.

Methods:

We systematically examined responses to questions about the HPV vaccine from the 4 most popular SAs: Apple’s Siri, Google Assistant, Amazon’s Alexa, and Microsoft’s Cortana. One team member posed 10 questions to each SA and recorded all queries and responses. Two raters independently coded all responses (kappa=0.85). We then assessed differences among the 4 SAs on measures of response accuracy, presence of misinformation, and sentiment regarding the HPV vaccine.
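The inter-rater agreement statistic reported above (kappa=0.85) is Cohen's kappa. A minimal sketch of how it is computed, using invented rater codes for illustration (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical accuracy codes for 10 responses (1 = accurate, 0 = not)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 2))
```

Values above roughly 0.8, as in this study, are conventionally read as near-perfect agreement.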

Results:

A total of 103 responses resulted from the 10 questions posed across the 4 SAs. Over half (62%) of SA responses were accurate. We found statistically significant differences across the SAs, χ2 (2, N = 103) = 7.807, p < .05, with Cortana yielding the greatest proportion of misinformation. Siri returned the greatest proportion of accurate responses (72%), whereas Cortana yielded the lowest (54%). Most response sentiments across SAs were positive (63%) or neutral (18%), but Cortana’s responses yielded the largest proportion of negative sentiments (12%).
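The chi-square comparison reported above is a Pearson test of independence on a contingency table of response classifications per SA. A sketch of the statistic, using invented accurate/inaccurate counts for illustration (not the study's data):

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under independence of SA and response category
            exp = row_tot[i] * col_tot[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical [accurate, inaccurate] counts for Siri, Google Assistant,
# Alexa, and Cortana, summing to 103 responses as in the study
table = [[18, 7], [16, 10], [17, 9], [13, 13]]
print(round(chi2_statistic(table), 2))
```

The resulting statistic would be compared against the chi-square distribution with the appropriate degrees of freedom to obtain the p value.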

Conclusions:

SAs appear to be average-quality sources of HPV vaccination information, with Siri responding most reliably. Cortana returned the largest proportion of inaccurate responses, the most misinformation, and the greatest proportion of results with negative sentiments. More collaboration between technology companies and public health entities is necessary to improve the retrieval of accurate health information via SAs.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.