
Accepted for/Published in: JMIR Medical Education

Date Submitted: Jul 26, 2023
Date Accepted: Apr 19, 2024

The final, peer-reviewed published version of this preprint can be found here:

Assessing GPT-4’s Performance in Delivering Medical Advice: Comparative Analysis With Human Experts

Jo E, Song S, Kim JH, Kim YM, Joo HJ

Assessing GPT-4’s Performance in Delivering Medical Advice: Comparative Analysis With Human Experts

JMIR Med Educ 2024;10:e51282

DOI: 10.2196/51282

PMID: 38989848

PMCID: 11250047

Assessing GPT-4's Performance in Delivering Medical Advice: A Comparative Analysis with Human Experts

  • Eunbeen Jo; 
  • Sanghoun Song; 
  • Jong-Ho Kim; 
  • Young-Min Kim; 
  • Hyung Joon Joo

ABSTRACT

Background:

Large language models (LLMs) such as OpenAI's GPT-4 are increasingly used in health care for automated medical consultation, yet there has been limited investigation of how their performance compares with that of human experts.

Objective:

To compare the medical accuracy of GPT-4 with that of human experts in providing medical advice, using real-world user-generated queries.

Methods:

A dataset of 251 cardiology-specific question-answer pairs was collected from an internet portal, and a panel of three independent cardiologists evaluated the responses from both human experts and GPT-4. The evaluation focused on the medical accuracy of the responses. Additionally, a linguistic evaluation compared the length and vocabulary diversity of the answers provided by GPT-4 and by human experts.
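The linguistic comparison described above can be sketched in a few lines. Note that the abstract does not specify how vocabulary diversity was measured; the type-token ratio (unique tokens divided by total tokens) used below is an assumption for illustration, and the whitespace tokenizer and sample answers are likewise hypothetical.

```python
def tokenize(text: str) -> list[str]:
    """Naive whitespace tokenizer (an assumption; the study's actual
    preprocessing is not described in the abstract)."""
    return text.lower().split()

def answer_length(text: str) -> int:
    """Answer length measured in tokens."""
    return len(tokenize(text))

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity as unique tokens / total tokens.
    Lower values indicate more repeated (less diverse) vocabulary."""
    tokens = tokenize(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Hypothetical answers, for illustration only
gpt4_answer = "Chest pain can have many causes. Chest pain should be evaluated."
expert_answer = "Possible angina; consult a cardiologist promptly."

print(answer_length(gpt4_answer), round(type_token_ratio(gpt4_answer), 2))
print(answer_length(expert_answer), round(type_token_ratio(expert_answer), 2))
```

A longer answer with a lower type-token ratio matches the pattern the Results section reports for GPT-4: more words, but drawn from a smaller, more repetitive vocabulary.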

Results:

GPT-4 and human experts demonstrated similar levels of medical accuracy; in fact, the proportion of low-accuracy answers was higher among human experts (0.4% for GPT-4 vs 4.6% for human experts). GPT-4's responses were generally longer and used less diverse vocabulary, potentially enhancing their comprehensibility for general users. Nevertheless, human experts outperformed GPT-4 in specific question categories, notably those related to drug/medication information and preliminary diagnoses. These findings highlight the limitations of GPT-4 in providing advice that draws on clinical experience.

Conclusions:

GPT-4 has shown promising potential in automated medical consultation, with medical accuracy comparable to that of human experts. However, challenges remain, particularly in the realm of nuanced clinical judgment. Future improvements in LLMs may require the integration of specific clinical reasoning pathways and regulatory oversight for safe use. Further research is needed to understand the full potential of LLMs across various medical specialties and conditions.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.