
Accepted for/Published in: JMIR Medical Education

Date Submitted: May 16, 2023
Date Accepted: Jul 25, 2023

The final, peer-reviewed published version of this preprint can be found here:

Performance of ChatGPT on the Situational Judgement Test—A Professional Dilemmas–Based Examination for Doctors in the United Kingdom

Borchert RJ, Hickman CR, Pepys J, Sadler TJ

Performance of ChatGPT on the Situational Judgement Test—A Professional Dilemmas–Based Examination for Doctors in the United Kingdom

JMIR Med Educ 2023;9:e48978

DOI: 10.2196/48978

PMID: 37548997

PMCID: 10442724

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Performance of ChatGPT on the Situational Judgement Test—A Professional Dilemmas–Based Examination for Doctors in the United Kingdom

  • Robin Jacob Borchert; 
  • Charlotte Rachel Hickman; 
  • Jack Pepys; 
  • Timothy J Sadler

ABSTRACT

ChatGPT is a language model that has performed well on professional examinations in medicine, law, and business. We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT), a national examination taken by all final-year medical students in the United Kingdom (UK). The examination is designed to assess attributes such as communication, teamwork, patient safety, prioritisation skills, professionalism, and ethics. It differs from other medical examinations, such as the United States Medical Licensing Examination (USMLE), in that it relies less on memorisation and factual recall. Overall, ChatGPT performed impressively, scoring 76% on the SJT, but it achieved full marks on only a minority of questions (9%), which may reflect flaws in ChatGPT’s situational judgement, inconsistencies in the reasoning across questions in the examination itself, or both. ChatGPT performed consistently across the four domains outlined in Good Medical Practice for Doctors. Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education, particularly for standardising questions and providing consistent rationales in examinations involving professionalism and ethics.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.