Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: May 29, 2023
Open Peer Review Period: May 26, 2023 - Jun 14, 2023
Date Accepted: Sep 29, 2023
Use of Rapid Internet Surveys to Assess Healthcare Trainees’ and Professionals’ Perceptions of Internet Generative Pre-trained Large Language Model, ChatGPT, in Improving Medical Knowledge Training
ABSTRACT
Background:
ChatGPT is a powerful large language model. It has demonstrated both potential and raised concerns in education. The underlying impacts of ChatGPT on education remain unclear.
Objective:
We aimed to investigate healthcare students' perceptions of ChatGPT-assisted learning in a biomedical informatics class.
Methods:
We used purposeful sampling to include all undergraduate and graduate students (n=195) in the School of Public Health at the National Defense Medical Center in Taiwan. Subjects were asked to watch a two-minute video introducing the ChatGPT-assisted class in biomedical informatics and to answer a self-designed e-questionnaire based on the Kirkpatrick Model, which included 12 questions across four constructs: "perceived Knowledge Acquisition (KA)," "perceived Learning Motivation (LM)," "perceived Learning Satisfaction (LS)," and "perceived Learning Effectiveness (LE)." The data were analyzed using structural equation modeling (SEM) and thematic analysis.
Results:
The e-questionnaire response rate was 78%, yielding 152 students for the analysis; 58% were undergraduates and 59% were women. Ages ranged from 18 to 53 years (mean 23.3, SD 6.0). There was no difference in perceived learning evaluation between men and women, while graduate students scored significantly higher than undergraduate students on all questions. The majority of healthcare students were enthusiastic about the ChatGPT-assisted biomedical informatics class. Nevertheless, some students expressed concerns about the potential for using ChatGPT to cheat on exams. The average scores of KA, LM, LS, and LE were 3.84±0.80, 3.76±0.93, 3.75±0.87, and 3.72±0.91, respectively (5-point Likert scale, 1 = strongly disagree to 5 = strongly agree). KA received the highest score and LE the lowest. In the SEM results, KA had a direct effect on LE, LS, and LM, with β coefficients of 0.80, 0.87, and 0.97, respectively (all p values <.001). LM and LE were correlated with each other (β=0.74, p<.001). LS had no significant effect on LE in this study.
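The construct scores reported above are means and standard deviations of 5-point Likert items grouped by construct. A minimal sketch of that aggregation step, with a hypothetical response matrix (the values and the per-construct item groupings here are illustrative only, not the study's data):

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from three respondents, grouped by construct.
# The real instrument had 12 questions across these four constructs.
responses = {
    "KA": [4, 5, 3],
    "LM": [4, 4, 3],
    "LS": [5, 3, 3],
    "LE": [4, 3, 4],
}

def summarize(scores):
    """Return (mean, sample standard deviation) for a list of Likert scores."""
    return mean(scores), stdev(scores)

for construct, scores in responses.items():
    m, sd = summarize(scores)
    print(f"{construct}: {m:.2f} ± {sd:.2f}")
```

The subsequent path analysis (the β coefficients among KA, LM, LS, and LE) would be estimated with dedicated SEM software, which is beyond this sketch.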
Conclusions:
The majority of healthcare students are enthusiastic about taking the ChatGPT-assisted biomedical informatics class. However, the physical presence of actual teachers is still required so that students can seek guidance and engage in two-way discussion to improve learning effectiveness. Clinical Trial: NA
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.