
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jun 3, 2024
Date Accepted: Jan 19, 2025

The final, peer-reviewed published version of this preprint can be found here:

Ability of ChatGPT to Replace Doctors in Patient Education: Cross-Sectional Comparative Analysis of Inflammatory Bowel Disease

Yan Z, Liu J, Lu S, Xu D, Yang Y, Wang H, Mao J, Tseng HC, Chang TH, Chen Y, Fan Y

Ability of ChatGPT to Replace Doctors in Patient Education: Cross-Sectional Comparative Analysis of Inflammatory Bowel Disease

J Med Internet Res 2025;27:e62857

DOI: 10.2196/62857

PMID: 40163853

PMCID: 11997527

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Large Language Models (LLMs) vs. Specialist Doctors: A Comparative Study on Health Information in Specific Medical Domains

  • Zelin Yan; 
  • Jingwen Liu; 
  • Shiyuan Lu; 
  • Dingting Xu; 
  • Yun Yang; 
  • Honggang Wang; 
  • Jie Mao; 
  • Hou-Chiang Tseng; 
  • Tao-Hsing Chang; 
  • Yan Chen; 
  • Yihong Fan

ABSTRACT

Background:

Although Large Language Models (LLMs) such as ChatGPT show promise in providing specialized information, the quality of their output requires further evaluation, especially considering that these models are trained on internet text and the quality of health-related information available online varies widely.

Objective:

The aim of this study was to evaluate the performance of ChatGPT in the context of patient education for individuals with chronic diseases, comparing it with that of industry experts to elucidate its strengths and limitations.

Methods:

This evaluation analyzed the responses of ChatGPT and specialist doctors to questions posed by patients with Inflammatory Bowel Disease (IBD), comparing their performance on subjective measures of accuracy, empathy, completeness, and overall quality, supplemented by readability as an objective measure.
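The abstract reports readability in US school grade levels but does not name the formula used. Purely as an illustration, the Python sketch below assumes the Flesch-Kincaid grade level as implemented in the third-party textstat package; the sample replies are invented for demonstration and are not from the study data.

    # Hedged illustration: Flesch-Kincaid is an assumption, not necessarily
    # the readability formula the authors used; sample texts are made up.
    import textstat  # pip install textstat

    doctor_reply = "Your symptoms suggest a flare. Please book a colonoscopy soon."
    chatgpt_reply = ("Flare-ups of inflammatory bowel disease often cause abdominal "
                     "pain and diarrhea; contacting your gastroenterologist promptly "
                     "is advisable so your treatment can be adjusted.")

    for label, text in [("doctor", doctor_reply), ("ChatGPT", chatgpt_reply)]:
        # flesch_kincaid_grade() returns an approximate US school grade level
        print(label, textstat.flesch_kincaid_grade(text))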

Results:

In a series of 1578 binary choice assessments, ChatGPT was preferred in 48.4% (95% CI 45.9%-50.9%) of instances. ChatGPT's responses were unanimously preferred by all evaluators in 12 instances, compared with 17 instances for specialist doctors. In terms of overall quality, there was no significant difference between the responses of ChatGPT (3.98; 95% CI 3.93-4.02) and those of specialist doctors (3.95; 95% CI 3.90-4.00) (t=0.95, p=0.34), with both rated "good". Although differences in accuracy (t=0.48, p=0.63) and empathy (t=2.19, p=0.03) lacked statistical significance, completeness of the textual output (t=9.27, p<0.001) was a distinct advantage of the Large Language Model (ChatGPT). In the portion of the question set answered jointly by patients and doctors (Q223-Q242), ChatGPT demonstrated superior performance (p=0.006). Regarding readability, no statistically significant difference was found between the responses of specialist doctors (median: 7th grade; Q1: 4th grade; Q3: 8th grade) and those of ChatGPT (median: 7th grade; Q1: 7th grade; Q3: 8th grade) on the Mann-Whitney U test (p=0.09). The overall quality of ChatGPT's output correlated strongly with the other subdimensions (empathy: r=0.842; accuracy: r=0.839; completeness: r=0.795), and the accuracy and completeness subdimensions were also highly correlated (r=0.762).
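As a quick arithmetic check, the reported 95% CI for the 48.4% preference rate can be reproduced from the figures above with a normal-approximation (Wald) interval over the 1578 binary assessments; the authors' exact interval method is not stated, but this sketch matches the reported bounds.

    import math

    # Figures from the Results above: ChatGPT preferred in 48.4% of 1578
    # binary choice assessments, reported 95% CI 45.9%-50.9%.
    n = 1578
    p = 0.484

    se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
    lo, hi = p - 1.96 * se, p + 1.96 * se  # Wald 95% confidence bounds

    print(f"95% CI: {lo:.1%} to {hi:.1%}")  # -> 95% CI: 45.9% to 50.9%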

Conclusions:

ChatGPT demonstrated more stable performance across the evaluated dimensions. Its health information output is more structurally consistent, mitigating the variability found among individual specialist doctors. This performance highlights ChatGPT's potential as an auxiliary tool for health information, despite limitations such as AI hallucinations. Involving patients in the creation and evaluation of health information is recommended to enhance its quality and relevance.


Citation

Please cite as:

Yan Z, Liu J, Lu S, Xu D, Yang Y, Wang H, Mao J, Tseng HC, Chang TH, Chen Y, Fan Y

Ability of ChatGPT to Replace Doctors in Patient Education: Cross-Sectional Comparative Analysis of Inflammatory Bowel Disease

J Med Internet Res 2025;27:e62857

DOI: 10.2196/62857

PMID: 40163853

PMCID: 11997527


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.