Accepted for/Published in: JMIR AI
Date Submitted: Sep 19, 2024
Open Peer Review Period: Sep 19, 2024 - Nov 14, 2024
Date Accepted: Apr 6, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Comparative Performance of Medical Students, ChatGPT-3.5 and ChatGPT-4.0 in Answering Questions from a Brazilian National Medical Exam: An Observational Study
ABSTRACT
Background:
Artificial intelligence (AI) has advanced significantly across many fields, including medicine, where tools such as ChatGPT (GPT) have demonstrated remarkable capabilities in interpreting and synthesizing complex medical data. Since its launch in 2019, GPT has evolved, with version 4.0 offering greater processing power, image interpretation, and more accurate responses. In medicine, GPT has been applied to diagnosis, research, and education, reaching milestones such as passing the United States Medical Licensing Examination (USMLE). Recent studies show that GPT 4.0 outperforms both its earlier versions and medical students on medical examinations.
Objective:
This study aimed to evaluate and compare the performance of GPT versions 3.5 and 4.0 on Brazilian Progress Tests (PT) from 2021 to 2023, analyzing their accuracy compared to medical students.
Methods:
A cross-sectional observational study was conducted on 333 multiple-choice questions from the PT, excluding questions containing images and those that were nullified or repeated. All questions were presented sequentially, without modification to their structure. The performance of the two GPT versions was compared using statistical methods, and medical students' scores were included for context.
Results:
GPT 4.0 showed a significant improvement in accuracy, achieving an 87.2% accuracy rate compared with 68.4% for GPT 3.5 (p-value = 0.028), a relative improvement of 27.4%. Notably, both GPT 3.5 and GPT 4.0 scored higher than students from all years of medical school.
Conclusions:
GPT 4.0 demonstrated superior accuracy compared with its predecessor in answering medical questions from the PT. These results are consistent with those of other studies, suggesting that medicine is approaching a new technological revolution.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.