Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Feb 2, 2024
Date Accepted: Jun 27, 2024
Integrating ChatGPT in Orthopedic Education for Medical Undergraduates: A Randomized Trial
ABSTRACT
Background:
ChatGPT is a natural language processing model developed by OpenAI that can be iteratively updated and optimized to accommodate the changing and complex requirements of human verbal communication.
Objective:
The purpose of this study was to determine the accuracy of ChatGPT in answering multiple-choice questions (MCQs) related to orthopaedics and, through a randomised controlled study, whether ChatGPT as an auxiliary learning aid can enhance short-term learning of orthopaedics. In addition, results of the disciplines' final examinations were used to evaluate the long-term impact of ChatGPT on students' performance in other subjects.
Methods:
We first evaluated ChatGPT's accuracy on MCQs pertaining to orthopaedics across various question formats. Then, 129 undergraduate medical students participated in a randomised controlled study in which the ChatGPT group used ChatGPT as a learning tool while the control group was prohibited from using AI software to support learning. Following a two-week intervention, both groups' understanding of orthopaedics was assessed with an orthopaedic test, and differences between the two groups' performance in other disciplines were recorded through a follow-up at the end of the semester.
Results:
ChatGPT 4.0 correctly answered 742 of 1051 orthopaedics-related MCQs, a 70.60% accuracy rate, including 71.8% accuracy on A1 MCQs (237/330), 73.7% on A2 MCQs (330/448), 70.2% on A3/4 MCQs (92/131), and 58.5% on case analysis MCQs (83/142). As of April 7, 2023, a total of 129 individuals had enrolled in the experiment; however, 19 withdrew at various phases, so by July 1, 2023, 110 individuals had completed the trial and all follow-up work. After the short-term intervention in students' learning style, the ChatGPT group answered more questions correctly than the control group on the orthopaedic test (10.40±4.98, P=.04), particularly on A1 (4.40±1.75, P=.01), A2 (3.93±1.95, P=.047), and A3/4 MCQs (3.11±0.96, P=.002). At the end of the semester, participants in the ChatGPT group performed better than the control group on final examinations in surgery (4.00±1.72, P=.02) and obstetrics and gynaecology (3.45±1.68, P=.04).
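As a quick sanity check on the reported figures, the per-format and overall accuracy rates follow directly from the correct/total counts given above. A minimal sketch (counts taken verbatim from the abstract; no other data assumed):

```python
# Recompute the accuracy rates reported in the Results section
# from the per-format correct/total counts.
counts = {
    "A1": (237, 330),
    "A2": (330, 448),
    "A3/4": (92, 131),
    "case analysis": (83, 142),
}

for fmt, (correct, total) in counts.items():
    print(f"{fmt}: {correct}/{total} = {correct / total:.1%}")

# The overall rate pools all four question formats.
correct_all = sum(c for c, _ in counts.values())
total_all = sum(t for _, t in counts.values())
print(f"overall: {correct_all}/{total_all} = {correct_all / total_all:.2%}")
# → overall: 742/1051 = 70.60%
```

Each printed rate matches the corresponding percentage in the text, confirming the counts and rates are internally consistent.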
Conclusions:
ChatGPT answers orthopaedics MCQs with a high degree of accuracy, and students in the ChatGPT group performed better at both short-term and long-term follow-up. Our findings offer compelling evidence in favour of using ChatGPT in medical education, opening up new possibilities for more effective contemporary instruction. Clinical Trial: Chinese Clinical Trial Registry ChiCTR2300071774; https://www.chictr.org.cn/hvshowproject.html?id=225740&v=1.0
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.