
Accepted for/Published in: JMIR Formative Research

Date Submitted: Jun 14, 2023
Date Accepted: Nov 22, 2023

The final, peer-reviewed published version of this preprint can be found here:

Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study

Gandhi AP, Padhi BK, Joesph FK, Aparnavi P, Katkuri S, Dayama S, Satapathy P, Khatib MN, Gaidhane S, Zahiruddin QS, Barboza JJ

JMIR Form Res 2024;8:e49964

DOI: 10.2196/49964

PMID: 38526538

PMCID: 11002731

Effectiveness of ChatGPT in Answering Undergraduate Community Medicine Subject: A Cross-Sectional Study From India

  • Aravind P Gandhi
  • Bijaya Kumar Padhi
  • Felista Karen Joesph
  • P Aparnavi
  • Sushma Katkuri
  • Sonal Dayama
  • Prakasini Satapathy
  • Mahalaqua Nazli Khatib
  • Shilpa Gaidhane
  • Quazi Syed Zahiruddin
  • Joshuan J. Barboza

ABSTRACT

Background:

Medical students may increasingly use large language models (LLMs) in their learning. ChatGPT is an LLM at the forefront of this new development in medical education, with the capacity to respond to questions across multiple disciplines.

Objective:

This study evaluated the ability of ChatGPT 3.5 to answer questions from an Indian undergraduate medical examination in the subject of community medicine and compared ChatGPT's scores with those obtained by the students.

Methods:

The study was conducted at a publicly funded medical college in Hyderabad, India. It was based on the internal assessment examination conducted in January 2023 for students in the final year (Part I) of the MBBS program, which included 40 questions from the community medicine syllabus. The same questions were administered as prompts to ChatGPT 3.5, and the responses were recorded. In addition to scoring the responses, two independent evaluators rated each response on three subdomains to further assess its quality: relevance, coherence, and completeness.
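
The study administered the questions through the ChatGPT 3.5 interface; for readers who wish to replicate the workflow programmatically, a rough equivalent might look like the sketch below. This is a minimal illustration, not the authors' code: it assumes the OpenAI Python client, and the model name, sample questions, and output handling are placeholder assumptions.

    # Minimal sketch of the prompting workflow described above (not the
    # authors' code). Assumes the OpenAI Python client (openai >= 1.0)
    # and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Placeholder items standing in for the 40 exam questions.
    questions = [
        "Define herd immunity and list its determinants.",
        "Describe the levels of prevention with examples.",
    ]

    responses = []
    for q in questions:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # GPT-3.5, matching the version used in the study
            messages=[{"role": "user", "content": q}],  # question given verbatim as the prompt
        )
        responses.append(reply.choices[0].message.content)

    # Recorded responses would then be scored, and two independent
    # evaluators would rate each one for relevance, coherence, and
    # completeness, as described above.
    for q, r in zip(questions, responses):
        print(f"Q: {q}\nA: {r}\n")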

Results:

ChatGPT 3.5 scored 72.3% on Paper I and 61% on Paper II, whereas the mean score of the 94 students was 43% on Paper I and 45% on Paper II. The responses of ChatGPT 3.5 were also rated as satisfactorily relevant, coherent, and complete for most of the questions (>80%).

Conclusions:

ChatGPT 3.5 appears to have substantial knowledge with which to understand and answer questions in the Indian medical undergraduate subject of community medicine. ChatGPT may be introduced to students in a pilot mode, under faculty oversight, to enable self-directed learning of community medicine, as the tool is still at an early stage and its potential and the reliability of its medical content in the Indian context still need to be satisfactorily explored.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.