Accepted for/Published in: JMIR Medical Education
Date Submitted: Aug 2, 2023
Date Accepted: Oct 30, 2023
Evaluating AI Models for the National Pre-Medical Exam in India: A Head-to-Head Analysis of GPT-3.5, GPT-4, and Bard
ABSTRACT
Background:
Large language models (LLMs) have revolutionized natural language processing (NLP) with their ability to generate human-like text through training on large datasets. These models, including GPT-3.5, GPT-4, and Bard, have found applications beyond NLP and attracted interest from both academia and industry. Students are actively leveraging LLMs to enhance their learning and to prepare for high-stakes exams, such as the National Eligibility cum Entrance Test (NEET) in India.
Objective:
This comparative analysis aims to evaluate the performance of GPT-3.5, GPT-4, and Bard in answering NEET-2023 questions.
Methods:
We evaluated three mainstream LLMs, namely GPT-3.5, GPT-4, and Google Bard, on questions from the NEET 2023 exam. The NEET questions were provided to each model, and the responses were recorded and compared against the correct answers from the official answer key. Accuracy, the proportion of questions answered correctly, was used to evaluate the performance of all three models.
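The scoring step described above can be illustrated with a short sketch. The following Python snippet is a minimal illustration under our own assumptions, not the authors' actual pipeline: it compares recorded model responses with the official answer key and reports overall and per-subject accuracy. All names (model_answers, answer_key, subject_of) and the sample data are hypothetical.

# Minimal sketch of the evaluation step: compare each model's recorded
# responses with the official answer key and compute accuracy.
# All identifiers and sample data below are hypothetical.
from collections import defaultdict

def accuracy(model_answers: dict[str, str], answer_key: dict[str, str]) -> float:
    """Overall accuracy: fraction of questions answered correctly."""
    correct = sum(1 for q, ans in answer_key.items() if model_answers.get(q) == ans)
    return correct / len(answer_key)

def accuracy_by_subject(model_answers: dict[str, str],
                        answer_key: dict[str, str],
                        subject_of: dict[str, str]) -> dict[str, float]:
    """Per-subject accuracy, e.g. Physics / Chemistry / Biology."""
    totals, hits = defaultdict(int), defaultdict(int)
    for q, ans in answer_key.items():
        subject = subject_of[q]
        totals[subject] += 1
        hits[subject] += int(model_answers.get(q) == ans)
    return {s: hits[s] / totals[s] for s in totals}

# Toy example: two Physics questions and one Chemistry question.
key = {"Q1": "B", "Q2": "D", "Q3": "A"}
subjects = {"Q1": "Physics", "Q2": "Physics", "Q3": "Chemistry"}
model = {"Q1": "B", "Q2": "C", "Q3": "A"}  # one wrong answer (Q2)
print(accuracy(model, key))                        # 0.666...
print(accuracy_by_subject(model, key, subjects))   # {'Physics': 0.5, 'Chemistry': 1.0}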
Results:
GPT-4 passed the entrance test with a score of 43%, the strongest performance of the three models. GPT-3.5 also qualified, but with a considerably lower score (21%), whereas Bard (16%) failed to meet the qualifying criteria and did not pass the test. GPT-4 outperformed both Bard and GPT-3.5 in all three subjects, achieving accuracy rates of 72.5% in Physics, 44.44% in Chemistry, and 50.5% in Biology. By comparison, GPT-3.5 attained accuracy rates of 45% in Physics, 33.33% in Chemistry, and 34.34% in Biology.
Conclusions:
The study's findings provide valuable insights into the performance of GPT-3.5, GPT-4, and Bard in answering NEET-2023 questions. GPT-4 emerged as the most accurate model, highlighting its potential for educational applications. The results underscore the suitability of LLMs for high-stakes exams and their positive impact on education. Additionally, the study establishes a benchmark for evaluating and enhancing LLMs' performance in educational tasks, promoting responsible and informed use of these models in diverse learning environments.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.