Accepted for/Published in: JMIR Medical Education
Date Submitted: Mar 29, 2024
Open Peer Review Period: Apr 1, 2024 - May 27, 2024
Date Accepted: Dec 4, 2024
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Factors Associated With the Accuracy of Large Language Model Artificial Intelligence in Basic Medical Science Examinations: Cross-Sectional Study
ABSTRACT
Background:
Artificial intelligence (AI) is widely applied across several industries, including medical education. The validity of AI-generated content and its answers depends on the training datasets and the optimization of each model. The accuracy of large language model (LLM) AI in basic medical science examinations, and the factors related to that accuracy, have yet to be fully explored.
Objective:
This study aimed to evaluate factors associated with the accuracy of large language models (ChatGPT, GPT-4, Google Bard, and Microsoft Bing) in answering multiple-choice questions from basic medical science examinations.
Methods:
We employed questions that were closely aligned with the content and topic distribution of Thailand's Step 1 National Medical Licensing Examination. Variables such as the difficulty index, discrimination index, and question characteristics were collected. These questions were then simultaneously input into ChatGPT, GPT-4, Microsoft Bing, and Google Bard, and their responses were recorded. The accuracy of these LLMs and their associated factors were analyzed using multivariable logistic regression. This analysis aimed to assess the effect of various factors on model accuracy, with results reported as odds ratios (ORs).
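The item metrics and effect measure named above are standard psychometric quantities. As an illustrative aside, the sketch below shows how they are conventionally computed; the function names and the 27% high/low group split are illustrative assumptions, not details taken from this study.

```python
# Illustrative sketch of classical item-analysis metrics (assumed
# conventional definitions, not the study's own code).

def difficulty_index(correct_flags):
    """Proportion of examinees answering the item correctly (p).
    Higher p means an easier item."""
    return sum(correct_flags) / len(correct_flags)

def discrimination_index(correct_flags, total_scores, group_frac=0.27):
    """Difference in item p between high- and low-scoring examinee groups.
    Examinees are ranked by total score; the top and bottom fractions
    (conventionally 27%) are compared. Ranges from -1 to 1."""
    n = max(1, int(len(total_scores) * group_frac))
    ranked = sorted(zip(total_scores, correct_flags), key=lambda t: t[0])
    low = [flag for _, flag in ranked[:n]]    # lowest-scoring group
    high = [flag for _, flag in ranked[-n:]]  # highest-scoring group
    return sum(high) / n - sum(low) / n

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a correct / b incorrect in group 1,
    c correct / d incorrect in group 2."""
    return (a / b) / (c / d)
```

In the study itself the ORs come from a multivariable logistic regression rather than raw 2x2 tables, but the interpretation is the same: an OR above 1 for the difficulty index means the model is more likely to answer easier items correctly.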
Results:
The study revealed GPT-4 as the top-performing model, with an overall accuracy of 89.07% (95% CI 84.76 - 92.41), significantly outperforming the others (p < 0.001). Microsoft Bing followed with an accuracy of 83.69% (95% CI 78.85 - 87.80), ChatGPT at 67.02% (95% CI 61.20 - 72.48), and Google Bard at 63.83% (95% CI 57.92 - 69.44). The multivariable logistic regression showed a correlation between question difficulty and model performance, with GPT-4 demonstrating the strongest association. Interestingly, no significant correlation was found between model accuracy and question length, negative wording, clinical scenarios, or the discrimination index for most models, except for Google Bard, which showed varying correlations.
Conclusions:
The GPT-4 and Microsoft Bing models demonstrated equal and superior accuracy compared to ChatGPT and Google Bard in the domain of basic medical science. The accuracy of these models is significantly influenced by the item's difficulty index (p), indicating that the LLMs are more accurate on easier questions. This suggests that the more accurate models, such as GPT-4 and Bing, can be valuable tools for understanding and learning basic medical science concepts.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.