Accepted for/Published in: JMIR Medical Education

Date Submitted: Mar 2, 2023
Open Peer Review Period: Mar 31, 2023 - May 31, 2023
Date Accepted: Jun 14, 2023

The final, peer-reviewed published version of this preprint can be found here:

Nov O, Singh N, Mann D

Putting ChatGPT’s Medical Advice to the (Turing) Test: Survey Study

JMIR Med Educ 2023;9:e46939

DOI: 10.2196/46939

PMID: 37428540

PMCID: 10366957

Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless they show as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Putting ChatGPT’s Medical Advice to the (Turing) Test

  • Oded Nov
  • Nina Singh
  • Devin Mann

ABSTRACT

Background:

Chatbots could play a role in answering patient questions, but patients’ ability to distinguish between provider and chatbot responses, and their trust in chatbots’ functions, are not well established.

Objective:

To assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication.

Methods:

A US-representative sample of 430 study participants aged 18 years and older was recruited on Prolific, a crowdsourcing platform for academic studies; 426 participants completed the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. Of the respondents analyzed, 53.2% were women, and their average age was 47.1 years. Ten representative, nonadministrative patient-provider interactions were extracted from the electronic health record (EHR). Patients’ questions were entered into ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider’s response. In the survey, each patient’s question was followed by a provider- or ChatGPT-generated response. Participants were informed that 5 responses were provider generated and 5 were chatbot generated. Participants were asked, and financially incentivized, to correctly identify the source of each response. Participants were also asked about their trust in chatbots’ functions in patient-provider communication, using a Likert scale of 1-5.

Results:

The correct classification of responses ranged from 49.0% to 85.7% across questions. On average, chatbot responses were correctly identified 65.5% of the time, and provider responses 65.1% of the time. Patients’ trust in chatbots’ functions was, on average, weakly positive (mean Likert score: 3.4), with lower trust as the health-related complexity of the task in question increased.
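The headline metrics above (per-source classification accuracy and the mean Likert trust score) are simple aggregates over participant responses. As an illustration only, the sketch below computes them from a small set of hypothetical records; the study's actual per-item data are not included in this abstract, and the record layout here is an assumption.

```python
from statistics import mean

# Hypothetical records, one per (participant, question) pair:
# (question_id, true_source, participant_guess, trust_rating_1_to_5)
responses = [
    ("q1", "chatbot", "chatbot", 4),
    ("q1", "chatbot", "provider", 3),
    ("q2", "provider", "provider", 5),
    ("q2", "provider", "chatbot", 2),
    ("q3", "chatbot", "chatbot", 3),
]

def accuracy(records, source=None):
    """Fraction of guesses matching the true source, optionally
    restricted to items whose true source is `source`."""
    subset = [r for r in records if source is None or r[1] == source]
    return sum(r[1] == r[2] for r in subset) / len(subset)

overall = accuracy(responses)                  # all items
chatbot_acc = accuracy(responses, "chatbot")   # chatbot-authored items only
provider_acc = accuracy(responses, "provider") # provider-authored items only
mean_trust = mean(r[3] for r in responses)     # average 1-5 Likert rating
```

With real data, `accuracy` would also be computed per question to obtain the per-question range reported above.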

Conclusions:

ChatGPT responses to patient questions were only weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in health care.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.