Accepted for/Published in: JMIR Medical Education
Date Submitted: Jan 11, 2024
Open Peer Review Period: Jan 12, 2024 - Mar 8, 2024
Date Accepted: Aug 15, 2024
Assessment of ChatGPT-4 in Family Medicine Board Examinations: An Observational Study Using Advanced AI Learning and Analytical Methods
ABSTRACT
Background:
This research explores the capabilities of ChatGPT-4 in passing the American Board of Family Medicine (ABFM) Certification Examination. Addressing a gap in the existing literature, in which earlier artificial intelligence (AI) models showed limitations on medical board examinations, this study evaluates the enhanced features and potential of ChatGPT-4, particularly in document analysis and information synthesis.
Objective:
The primary goal is to assess whether ChatGPT-4, when provided with extensive preparation resources and using sophisticated data analysis, can achieve a score equal to or above the passing threshold for the Family Medicine Board Examinations.
Methods:
In this study, ChatGPT-4 was embedded in a specialized subenvironment, "AI Family Medicine Board Exam Taker," designed to closely mimic the conditions of the ABFM Certification Examination. This subenvironment enabled the AI to access and analyze a range of relevant study materials, including a primary medical textbook and supplementary online resources. The AI was presented with a series of past ABFM exam questions, reflecting the breadth and complexity typical of the exam. Emphasis was placed on assessing the AI's ability to interpret and respond to these questions accurately, leveraging its advanced data processing and analysis capabilities within this controlled subenvironment.
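As a point of reference for how board-style questions might be posed to a GPT-4 model programmatically, a minimal Python sketch using the OpenAI chat completions API is shown below. This is illustrative only: the study itself used a Custom GPT ("AI Family Medicine Board Exam Taker") inside the ChatGPT interface rather than the API, and the model identifier, system prompt, and sample question here are assumptions, not the study's materials.

```python
# Illustrative sketch only: the study used a Custom GPT subenvironment in the
# ChatGPT interface, not the API call shown here. Model name, prompt wording,
# and the sample question are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are answering questions in the style of the ABFM Certification "
    "Examination. Choose the single best option and give a brief justification."
)

def answer_exam_question(question: str, options: list[str]) -> str:
    """Send one multiple-choice question to the model and return its reply."""
    formatted = question + "\n" + "\n".join(
        f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",   # assumed model identifier
        temperature=0,         # deterministic output for scoring
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": formatted},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage (not an item from the study's question bank):
# print(answer_exam_question(
#     "Which vaccine is routinely recommended at age 11-12 years?",
#     ["Hepatitis A", "HPV", "Zoster", "PPSV23"],
# ))
```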
Results:
ChatGPT-4's performance was quantitatively assessed on 300 practice ABFM examination questions. The model achieved a correct response rate of 88.67% (95% CI 85.08% to 92.25%) for the Custom Robot version and 87.33% (95% CI 83.57% to 91.10%) for the Regular version. Statistical analysis with the McNemar test (P=0.4533) indicated no significant difference in accuracy between the two versions, and a chi-square test of the error-type distribution (P=0.3163) revealed no significant variation in the pattern of errors across versions. These results highlight ChatGPT-4's capacity for high-level performance and consistency in responding to complex medical examination questions under controlled conditions.
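For readers who wish to sanity-check these figures, the reported confidence intervals follow from a normal-approximation (Wald) interval around the correct-answer counts implied by the percentages (266/300 and 262/300). A minimal Python sketch is given below; note that the 2x2 paired-response table required for the McNemar test is not reported in this abstract, so the discordant-pair counts in the sketch are hypothetical placeholders.

```python
# Sketch of how the reported statistics could be reproduced.
# Correct-answer counts (266/300 and 262/300) follow from the reported
# percentages; the McNemar 2x2 cell counts below are hypothetical placeholders,
# so the resulting P value will not exactly match the reported 0.4533.
import math
from statsmodels.stats.contingency_tables import mcnemar

N = 300

def wald_ci(correct: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% CI for a proportion, returned in percent."""
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 100 * (p - half), 100 * (p + half)

print(wald_ci(266, N))  # ~ (85.08, 92.25): Custom Robot version
print(wald_ci(262, N))  # ~ (83.57, 91.10): Regular version

# McNemar test on paired responses: rows = Custom Robot (correct, incorrect),
# columns = Regular version (correct, incorrect). Placeholder cell counts only.
table = [[255, 11],
         [7, 27]]
print(mcnemar(table, exact=True).pvalue)
```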
Conclusions:
The study demonstrates that ChatGPT-4, particularly when equipped with specialized preparation and operating in a tailored subenvironment, shows promising potential in handling the intricacies of medical board examinations. Its performance is comparable to the expected passing standard for the ABFM Certification Examination, and further advances in AI technology and tailored training methods could extend these capabilities. These findings open avenues for integrating AI tools such as ChatGPT-4 into medical education and assessment and underscore the importance of continuous advancement and specialized training in AI applications in health care.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.