Accepted for/Published in: JMIR Medical Education
Date Submitted: Jul 28, 2023
Open Peer Review Period: Jul 28, 2023 - Sep 22, 2023
Date Accepted: Dec 11, 2023
Date Submitted to PubMed: Dec 19, 2023
A Mixed-Methods Evaluation of ChatGPT's Real-Life Implementation in Undergraduate Dental Education
ABSTRACT
Background:
The recently introduced artificial intelligence (AI) tool ChatGPT appears to offer a range of benefits in academic education, while also raising concerns. The relevant literature revolves around issues of plagiarism and academic dishonesty, as well as pedagogy and educational affordances; yet, to our knowledge, no real-life implementation of ChatGPT in the educational process has been reported so far.
Objective:
The aim of this mixed-methods study was to evaluate ChatGPT’s implementation in the educational process, both quantitatively and qualitatively.
Methods:
In March 2023, seventy-seven 2nd-year dental students of the European University Cyprus were divided into two groups and asked to compose a learning assignment on ‘Radiation biology and Radiation protection in the dental office’, working collaboratively in small subgroups, as part of the semester program of the Dentomaxillofacial Radiology module. Designing the research process was challenging for the authors, as this was an early attempt to implement ChatGPT in the teaching-learning process, and potential challenges had to be identified and resolved in advance so that the results would not be compromised. One group searched the internet for scientific resources to perform the task, while the other group used ChatGPT for this purpose. Both groups developed a PowerPoint presentation based on their research and presented it in class. The students in the ChatGPT group additionally recorded all of their interactions with the language model during the prompting process and evaluated the final outcome; they also answered an open-ended evaluation questionnaire, including questions on their learning experience. Finally, all students took a multiple-choice (MCQ) knowledge exam on the topic; the grades of the two groups were compared statistically, and the questionnaires were thematically analyzed.
Results:
Of the 77 students, 39 were assigned to the ChatGPT group and 38 to the literature research group. Seventy students took the MCQ knowledge exam; grades ranged from 5 to 10 on the 0-10 grading scale. The Mann-Whitney test showed that students in the ChatGPT group performed significantly better (p=0.045) than students in the literature research group. The evaluation questionnaires revealed the benefits (human-like interface, immediate response, wide knowledge base), the limitations (need to rephrase prompts to obtain a relevant answer, overly general content, false citations, inability to provide images or videos), and the prospects (in education, clinical practice, continuing education, and research) of ChatGPT. Students enjoyed working with the new tool and were creative and exploratory in their approaches.
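The between-group comparison reported above can be sketched in code. The sketch below uses SciPy's Mann-Whitney U implementation on two hypothetical grade arrays; the numbers are illustrative placeholders, not the study's data.

```python
# Illustrative sketch of the between-group exam-grade comparison.
# The grade lists are hypothetical placeholders, NOT the study's data.
from scipy.stats import mannwhitneyu

chatgpt_grades = [8, 9, 7, 10, 8, 9, 7, 8, 10, 9]      # hypothetical ChatGPT group
literature_grades = [7, 8, 6, 7, 9, 6, 7, 8, 6, 7]     # hypothetical literature group

# Two-sided Mann-Whitney U test on the ordinal 0-10 exam grades;
# a nonparametric test is appropriate because grades are ordinal
# and not assumed to be normally distributed.
stat, p_value = mannwhitneyu(chatgpt_grades, literature_grades,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```

A two-sided alternative tests for any difference between the groups; a one-sided alternative (`alternative="greater"`) would instead test the directional hypothesis that the ChatGPT group scored higher.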
Conclusions:
Students who used ChatGPT for their learning assignment performed significantly better on the knowledge exam than their fellow students who used the literature research methodology. Students adapted quickly to the technological environment of the language model, recognized its opportunities and limitations, and used it effectively.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.