Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: May 26, 2021
Date Accepted: Sep 18, 2021

The final, peer-reviewed published version of this preprint can be found here:

Bickmore T, Ólafsson S, O'Leary T

Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment

J Med Internet Res 2021;23(11):e30704

DOI: 10.2196/30704

PMID: 34751661

PMCID: 8663571

Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment

  • Timothy Bickmore; 
  • Stefán Ólafsson; 
  • Teresa O'Leary

ABSTRACT

Background:

Prior studies have demonstrated the safety risks when patients and consumers use conversational assistants, such as Apple’s Siri and Amazon’s Alexa, for medical information.

Objective:

The aim of this study is to evaluate two approaches to reducing the likelihood of patients or consumers acting on potentially harmful medical information they receive from conversational assistants.

Methods:

Participants were given medical problems to pose to conversational assistants that had previously been shown to elicit potentially harmful recommendations. Each conversational assistant’s response was randomly varied along two dimensions: it included either a correct or an incorrect paraphrase of the query, and it either did or did not include a disclaimer telling participants that they should not act on the advice without first talking to a doctor. Participants were then asked what actions they would take based on the interaction, along with the likelihood of taking those actions. Reported actions were recorded and analyzed, and participants were interviewed at the end of each interaction.

Results:

Thirty-two subjects completed the study, each interacting with four conversational assistants. Subjects were on average 42.44±14.08 years old; 53% were female and 66% were college educated. Participants who heard a correct paraphrase of their query were significantly more likely to state that they would follow the medical advice from the conversational assistant (χ2(1)=3.1, p<0.05). Participants who heard a disclaimer message were significantly more likely to say they would contact a doctor or health professional before acting on the medical advice received (χ2(1)=43.5, p<0.05).

Conclusions:

Designers of conversational systems should consider incorporating both disclaimers and feedback on query understanding in response to user queries for medical advice. Unconstrained natural language input should not be used in systems designed specifically to provide medical advice.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.