Accepted for/Published in: JMIR Medical Education
Date Submitted: Mar 14, 2024
Date Accepted: Nov 23, 2024
Performance of Plug-in-Augmented ChatGPT and Its Ability to Quantify Uncertainty: A Simulation Study on the German Medical Board Examination
ABSTRACT
Background:
The Generative Pre-trained Transformer 4 (GPT-4) is a large language model (LLM) trained and fine-tuned on an extensive dataset. Since the public release of its predecessor in November 2022, interest in LLMs has risen sharply, and a multitude of potential use cases have been proposed. In parallel, however, important limitations have been outlined. In particular, current LLMs encounter limitations in symbolic reasoning and in accessing contemporary data. The recent version, GPT-4, alongside newly released plugin features, has been introduced to mitigate some of these limitations.
Objective:
Against this background, this work aims to investigate the performance of GPT-3.5, GPT-4, GPT-4 with plugins, and GPT-4 with plugins using pre-translated English text on the German medical board examination. Recognizing the critical importance of quantifying uncertainty for LLM applications in medicine, we furthermore assess this ability and develop a new metric, termed "confidence accuracy," to evaluate it.
Methods:
We employed GPT-3.5, GPT-4, GPT-4 with plugins, and GPT-4 with plugins and translation to answer questions from the German medical board examination. Additionally, we conducted a thorough analysis to assess how the models justify their answers, the accuracy of their responses, and the error structure of their answers. Bootstrapping and confidence intervals were utilized to evaluate the statistical significance of our findings.
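The bootstrapping approach described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: it assumes per-question correctness is recorded as binary scores, and all function and variable names are hypothetical.

```python
import random

def bootstrap_ci(scores, n_boot=10000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for mean accuracy.

    scores: list of 0/1 values (1 = question answered correctly).
    Returns the observed accuracy and a (lower, upper) CI bound.
    """
    rng = random.Random(seed)
    n = len(scores)
    # Resample with replacement n_boot times and record each resample's mean
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lower = means[int((alpha / 2) * n_boot)]
    upper = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(scores) / n, (lower, upper)

# Hypothetical example: 85 of 100 exam questions answered correctly
scores = [1] * 85 + [0] * 15
acc, (lo, hi) = bootstrap_ci(scores)
```

Non-overlapping confidence intervals between two models would then indicate a statistically meaningful performance difference under this resampling scheme.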
Results:
This study demonstrated that the available GPT models, as examples of LLMs, exceeded the minimum competency threshold established by the German medical board for medical students to obtain board certification to practice medicine. Moreover, the models could assess the uncertainty in their responses, albeit with overconfidence. Additionally, this work identified characteristic justification and reasoning structures that emerge when GPT generates answers.
Conclusions:
The high performance of GPT models in answering medical questions positions them well for applications in academia and, potentially, clinical practice. Their capability to quantify uncertainty in answers suggests they could serve as valuable AI agents within the clinical decision-making loop. Nevertheless, significant challenges must be addressed before AI agents can be robustly and safely implemented in the medical domain.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.