Accepted for/Published in: JMIR Medical Education
Date Submitted: Jun 20, 2024
Open Peer Review Period: Jun 21, 2024 - Aug 16, 2024
Date Accepted: Mar 18, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Digital Shift: Assessing ChatGPT’s Capability as a New Age Standardized Patient
ABSTRACT
Background:
Standardized patients (SPs) have been crucial in medical education, offering realistic patient interactions to students. Despite their benefits, SP training is resource-intensive, and access can be limited. Advances in artificial intelligence, particularly with large language models like ChatGPT, present new opportunities for virtual SPs, potentially addressing these limitations.
Objective:
To assess medical students' perceptions and experiences of using ChatGPT as a standardized patient (SP) and to evaluate ChatGPT’s effectiveness in performing as a virtual SP in a medical school setting.
Methods:
This qualitative study, approved by the American University of Antigua (AUA) Institutional Review Board (IRB), involved eleven medical student volunteers (5 female, 6 male, aged 20-32) from the AUA College of Medicine. Students were observed during a live role-play in which they interacted with ChatGPT as an SP using a predetermined prompt. A structured 15-question survey was administered before and after the interaction. Responses were transcribed, coded, and analyzed thematically using inductive category formation.
Results:
Thematic analysis identified key pre-interaction themes: technology limitations (e.g., prompt engineering difficulties), learning efficacy (e.g., potential for personalized learning, reduced interview stress), verisimilitude (e.g., absence of visual cues), and trust (e.g., concerns about AI accuracy). Post-interaction, students noted improvements in prompt engineering, some alignment issues (e.g., limited responses on sensitive topics), sustained learning efficacy (e.g., convenience, repetition), and continued verisimilitude challenges (e.g., lack of empathy and non-verbal cues). No significant trust issues were reported post-interaction. Despite these limitations, students found ChatGPT a valuable supplement to traditional SPs, enhancing practice flexibility and diagnostic skills.
Conclusions:
ChatGPT can effectively augment traditional SPs in medical education, offering accessible, flexible practice opportunities. However, it cannot fully replace human SPs due to limitations in verisimilitude and prompt engineering challenges. Integrating prompt engineering into medical curricula and continuous advancements in artificial intelligence (AI) are recommended to enhance the utility of virtual SPs.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.