Accepted for/Published in: JMIR Medical Education
Date Submitted: Jul 31, 2023
Date Accepted: Sep 27, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Exploring the possible use of AI Chatbots in Public Health Education: A Feasibility Study
ABSTRACT
Background:
Artificial intelligence (AI) is a rapidly developing field with the potential to transform various aspects of health care and public health, including medical training. During the 'Hygiene and Public Health' course for fifth-year medical students, a practical training session on vaccination was conducted using AI chatbots as an educational support tool. Before receiving specific training on vaccination, the students took an online test whose questions were extracted from the Italian National Medical Residency Test (SSM). After completing the test, a critical review of each question was performed with the assistance of AI chatbots.
Objective:
The main aim was to determine whether AI chatbots can serve as educational support tools for training in public health. The secondary objective was to assess the performance of different AI chatbots on complex multiple-choice medical questions posed in Italian.
Methods:
A test composed of 15 multiple-choice questions on vaccination was extracted from the SSM using targeted keywords and administered to medical students via Google Forms and to different AI chatbot models. The test was corrected in the classroom, focusing on a critical evaluation of the explanations provided by the chatbots. A Mann-Whitney U test was conducted to compare the performance of the medical students and the AI chatbots. Student feedback was collected anonymously at the end of the training experience.
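The group comparison described above can be sketched in a few lines with SciPy. The score arrays below are synthetic placeholders for illustration only, not the study data:

```python
# Minimal sketch of the Methods' statistical comparison: a two-sided
# Mann-Whitney U test between two independent groups of test scores.
# The scores below are hypothetical values (out of 15), not the study data.
from scipy.stats import mannwhitneyu

student_scores = [6, 7, 8, 8, 9, 10, 7, 9, 11, 8]  # hypothetical student scores
chatbot_scores = [11, 12, 13, 14, 12]              # hypothetical chatbot scores

u_stat, p_value = mannwhitneyu(student_scores, chatbot_scores,
                               alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.4f}")
```

With groups this well separated, the U statistic is near its minimum and the P value falls below conventional significance thresholds.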
Results:
A total of 36 medical students and 9 AI chatbot models completed the test. The students achieved an average score of 8.22/15 (SD 2.65), while the AI chatbots scored an average of 12.22/15 (SD 2.77). The results indicated a statistically significant difference in performance between the two groups (U = 49.5, P < .001), with a large effect size (r = 0.69). When the questions were divided by type ('Direct', 'Scenario-Based', 'Negative'), significant differences were observed for 'Direct' (P < .0001) and 'Scenario-Based' (P < .0001) questions, but not for 'Negative' questions. The students reported a high level of satisfaction with the educational experience (7.9/10) and expressed a strong desire to repeat it (7.6/10).
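As a point of reference, one common effect-size convention for the Mann-Whitney U test is the rank-biserial correlation, r = 1 - 2U/(n1·n2). This is a hedged illustration of that formula, not the authors' stated computation; it happens to yield 0.69 for the reported U = 49.5 with group sizes 36 and 9:

```python
# Illustrative sketch: rank-biserial correlation as an effect size
# for the Mann-Whitney U test. Assumes U is the statistic for one
# group; this is one common convention, not necessarily the exact
# method used in the study.
def rank_biserial(u, n1, n2):
    """Rank-biserial correlation: r = 1 - 2U / (n1 * n2)."""
    return 1 - 2 * u / (n1 * n2)

print(round(rank_biserial(49.5, 36, 9), 2))  # -> 0.69
```

Values of r above roughly 0.5 are conventionally read as a large effect, consistent with the interpretation given in the Results.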
Conclusions:
AI chatbots demonstrated their efficacy in answering complex medical questions related to vaccination and in providing valuable educational support. Their performance significantly surpassed that of the medical students on 'Direct' and 'Scenario-Based' questions. The responsible and critical use of AI chatbots can enhance medical education, making their integration into the educational system an essential consideration.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.