Currently submitted to: JMIR Formative Research
Date Submitted: Mar 13, 2026
Open Peer Review Period: Mar 27, 2026 - May 22, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Use of Artificial Intelligence and Human Standardized Patients to Enhance Customer Discovery Interview Skills in Medical Students: An Observational Cohort Study
ABSTRACT
Background:
There is an ongoing need for medical students to build skills beyond traditional clinical areas in order to best shape the evolving healthcare system and fill a variety of professional roles after graduation. With the emergence of artificial intelligence (AI), new learning methods are available for the delivery of medical education.
Objective:
To develop and evaluate the use of traditional (human) standardized patient encounters and generative AI-based standardized patient encounters as a low-stakes way for students to practice customer discovery interviews.
Methods:
An interactive classroom experience was created in the interprofessional Center for Experiential Learning and Simulation (iCELS) to simulate two different customer discovery interview scenarios with human standardized patients and AI-generated standardized patients. The customer discovery interview simulation with human standardized patients was conducted with first-year medical students in the Entrepreneurship, Biodesign, and Innovation Pathway starting in 2023. In 2026, AI chatbots were added as an additional component of the simulation experience. Sample questions and answers were developed to generate a common interview experience. Student feedback was collected via a Qualtrics survey immediately after class. Students were asked to rate statements on a 4- or 5-point Likert scale and were also allowed to provide open-ended comments.
Results:
The students gave the simulation with human standardized patients high scores, with almost all agreeing or strongly agreeing that the exercise met the learning objectives. Responses to the AI chatbot session had a bimodal distribution, with approximately two-thirds of students giving the simulation high scores and one-third giving it low scores.
Conclusions:
The first-generation chatbot was able to replicate realistic customer discovery interviews in two different scenarios. In the future, we will create transcripts of both the AI chatbot and human standardized patient interviews and use independent raters to score interview quality based on our scoring rubric. Finally, we plan to enhance the chatbots to provide a more immersive and realistic experience for students. Further enhancement of these chatbots will give students more opportunities to practice their customer discovery skills.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.