
Accepted for/Published in: JMIR Formative Research

Date Submitted: Apr 2, 2023
Open Peer Review Period: Apr 2, 2023 - May 28, 2023
Date Accepted: Aug 13, 2024

The final, peer-reviewed published version of this preprint can be found here:

Fine-Tuned Bidirectional Encoder Representations From Transformers Versus ChatGPT for Text-Based Outpatient Department Recommendation: Comparative Study

Jo E, Yoo H, Kim JH, Kim YM, Song S, Joo HJ

JMIR Form Res 2024;8:e47814

DOI: 10.2196/47814

PMID: 39423004

PMCID: 11530716

Fine-Tuned BERT vs ChatGPT for Text-Based Outpatient Department Recommendation: Comparative Study

  • Eunbeen Jo; 
  • Hakje Yoo; 
  • Jong-Ho Kim; 
  • Young-Min Kim; 
  • Sanghoun Song; 
  • Hyung Joon Joo

ABSTRACT

Background:

Patients often struggle with determining which outpatient specialist to consult based on their symptoms. Natural language processing models in healthcare offer the potential to assist patients in making these decisions before visiting a hospital.

Objective:

This study aims to evaluate the performance of ChatGPT in recommending medical specialties for medical questions.

Methods:

We used a dataset of 31,482 medical questions, each answered by doctors and labeled with the appropriate medical specialty, drawn from the health consultation board of NAVER, a major Korean web portal. The dataset includes 27 distinct medical specialty labels. We compared the performance of ChatGPT with that of KM-BERT, a BERT model pretrained on Korean medical text and fine-tuned for this task, by analyzing each model's ability to recommend the correct medical specialty. Responses from ChatGPT were categorized into those matching one of the 27 predefined specialties and those that did not. Both models were evaluated using accuracy, precision, recall, and F1-score.
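As a minimal sketch of the evaluation described above (not the authors' code; the function name and specialty labels below are illustrative), multi-class accuracy and macro-averaged precision, recall, and F1-score over a fixed set of specialty labels can be computed in plain Python:

```python
def macro_metrics(y_true, y_pred, labels):
    """Compute accuracy plus macro-averaged precision, recall, and F1
    over a fixed label set, treating each specialty as one class."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for label in labels:
        # Per-class counts: true positives, false positives, false negatives
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)  # macro average: unweighted mean over classes
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy example with two hypothetical specialty labels
acc, p, r, f1 = macro_metrics(
    ["cardiology", "dermatology", "cardiology"],
    ["cardiology", "cardiology", "cardiology"],
    ["cardiology", "dermatology"],
)
```

Macro averaging weights each of the 27 specialties equally regardless of how many questions it has, which explains how a model can show high overall accuracy but a much lower macro F1-score when rare specialties are misclassified.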

Results:

ChatGPT demonstrated an answer avoidance rate of 6.2% but provided accurate medical specialty recommendations with explanations that elucidated the underlying pathophysiology of the patient’s symptoms. It achieved an accuracy of 0.939, precision of 0.219, recall of 0.168, and an F1-score of 0.134. In contrast, the KM-BERT model, fine-tuned for the same task, outperformed ChatGPT with an accuracy of 0.977, precision of 0.570, recall of 0.652, and an F1-score of 0.587.

Conclusions:

Although ChatGPT did not surpass the fine-tuned KM-BERT model in recommending the correct medical specialties, it showcased notable advantages as a conversational AI model. By providing detailed, contextually appropriate explanations, ChatGPT has the potential to significantly enhance patient comprehension of medical information, thereby improving the medical referral process.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.