Accepted for/Published in: JMIR Medical Education
Date Submitted: Apr 19, 2023
Open Peer Review Period: Apr 19, 2023 - Jun 14, 2023
Date Accepted: May 17, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
ABSTRACT
The integration of large language models (LLMs), such as Generative Pre-trained Transformers (GPT), into medical education has the potential to transform learning experiences for students and elevate their knowledge, skills, and competence. Promising applications of LLMs include curriculum development, augmenting teaching methodologies, crafting personalized study plans and learning materials, designing comprehensive assessment plans, improving the evaluation process, interpreting unstructured medical data, facilitating medical research, and implementing programmatic enhancements for medical education programs. However, the use of LLMs in medical education raises several challenges related to algorithmic bias, overreliance, plagiarism, misinformation, inequity, privacy, and copyright concerns. As the educational paradigm shifts from information-driven to AI-driven practices, it is crucial to explore the full potential of generative LLM technologies while addressing the concerns and challenges that arise in medical education, in order to better understand how to use such tools effectively and appropriately. The objective of this paper is to explore the opportunities and challenges of using LLMs in medical education. The insights gleaned from this analysis will serve as a foundation for future recommendations and best practices in the field, fostering the responsible and effective use of AI technologies in medical education.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.