Accepted for/Published in: JMIR Medical Education
Date Submitted: Oct 25, 2023
Open Peer Review Period: Oct 25, 2023 - Nov 8, 2023
Date Accepted: Dec 14, 2023
(closed for review but you can still tweet)
A GPT-powered chatbot as simulated patient to practice history taking: A prospective, mixed-methods study
ABSTRACT
Background:
Communication is a core competency of medical professionals and of utmost importance for patient safety. While medical curricula emphasise communication training, traditional formats such as real or simulated patient interactions can present psychological stress and are limited in repetition. The recent emergence of large language models such as GPT offer an opportunity to overcome these restrictions.
Objective:
The aim of this study was to explore the feasibility of a GPT-driven chatbot to practice history-taking, one of the core competencies of communication.
Methods:
We developed an interactive chatbot interface using GPT 3.5 and a specific prompt including a chatbot-optimised illness script and a behavioural component. Employing a mixed-methods approach, we invited medical students to voluntarily practice history-taking. To determine whether GPT provides suitable answers as a simulated patient, the conversations were recorded and analysed using quantitative and qualitative approaches (Brown & Clark). We analysed the extent to which the questions and answers aligned with the provided script as well as the medical plausibility of the answers. Finally, the students filled out the Chatbot Usability Questionnaire (CUQ).
Results:
A total of 28 students practiced with our chatbot (23.4 ± 2.9 years of age). We recorded a total of 826 question–answer pairs (QAPs), with a median of 27.5 QAPs per conversation and 94.7% pertaining to history taking. When questions were explicitly covered by the script (60.3%), the GPT-provided answers were mostly based on explicit script information (94.4%). For questions not covered by the script (23.4%), the GPT answers used 56.4% fictitious information. Regarding plausibility, 97.9% (n = 842) QAPs were rated as plausible. Of the 14 implausible answers, GPT provided answers rated as ‘socially desirable’, ‘leaving role identity’, ‘ignoring script information’, ‘illogical reasoning’ and ‘calculation error’. Despite these results, however, the CUQ revealed an overall positive user experience (77/100 points).
Conclusions:
Our data show LLMs such as GPT can provide a simulated patient experience and yield a good user experience and a majority of plausible answers. Our analysis revealed the GPT-provided answers used either explicit script information or answers based on the available information, which can be understood as abductive reasoning. Although rare, the GPT-based chatbot provided implausible information in some instances, with the major tendency being ‘socially desirable’ instead of ‘medically plausible’ information.
Citation
Request queued. Please wait while the file is being generated. It may take some time.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on it's website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a cc-by license on publication, at this stage authors and publisher expressively prohibit redistribution of this draft paper other than for review purposes.