Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Aug 17, 2023
Open Peer Review Period: Aug 17, 2023 - Oct 12, 2023
Date Accepted: Nov 30, 2023

The final, peer-reviewed published version of this preprint can be found here:

Uncovering Language Disparity of ChatGPT on Retinal Vascular Disease Classification: Cross-Sectional Study

Liu X, Wu J, Shao A, Shen W, Ye P, Wang Y, Ye J, Jin K, Yang J

J Med Internet Res 2024;26:e51926

DOI: 10.2196/51926

PMID: 38252483

PMCID: 10845019

Uncovering Language Disparity of ChatGPT on Retinal Vascular Disease Classification: Cross-Sectional Study

  • Xiaocong Liu; 
  • Jiageng Wu; 
  • An Shao; 
  • Wenyue Shen; 
  • Panpan Ye; 
  • Yao Wang; 
  • Juan Ye; 
  • Kai Jin; 
  • Jie Yang

ABSTRACT

Background:

Benefiting from exceptional text understanding and rich knowledge, large language models (LLMs) such as ChatGPT have shown great potential in English clinical environments. However, ChatGPT's performance in non-English clinical settings, as well as its reasoning, has not been explored in depth.

Objective:

To evaluate ChatGPT’s diagnostic performance and inference abilities for retinal vascular diseases in a non-English clinical environment.

Methods:

In this cross-sectional study, we collected 1226 fundus fluorescein angiography (FFA) reports and their corresponding diagnoses, all written in Chinese, and tested ChatGPT with four prompting strategies: direct diagnosis or diagnosis with explanation, each in Chinese or English.
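The four prompting strategies form a 2×2 design (prompt language × output style). A minimal sketch of how such conditions could be enumerated is shown below; the template wording and function names are illustrative assumptions, not the study's actual prompts.

```python
# Hypothetical sketch of the 2x2 prompting design: prompt language
# (Chinese vs. English) crossed with output style (direct diagnosis
# vs. diagnosis with explanation). Template text is illustrative only.
TEMPLATES = {
    ("en", "direct"): "Read the FFA report below and output the diagnosis only:\n{report}",
    ("en", "explain"): "Read the FFA report below, give the diagnosis, and explain your reasoning:\n{report}",
    ("zh", "direct"): "Read the FFA report below and output the diagnosis only (respond in Chinese):\n{report}",
    ("zh", "explain"): "Read the FFA report below, give the diagnosis, and explain (respond in Chinese):\n{report}",
}

def build_prompts(report: str) -> dict:
    """Return one prompt per (language, style) condition for a single report."""
    return {cond: tpl.format(report=report) for cond, tpl in TEMPLATES.items()}

prompts = build_prompts("Example FFA report text ...")
assert len(prompts) == 4  # four strategies per report
```

Each of the 1226 reports would then be queried once per condition, yielding four diagnoses to score against the ophthalmologist-written ground truth.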

Results:

ChatGPT achieved its best performance with the English prompt for direct diagnosis, reaching an F1-score of 80.05%, which was inferior to that of ophthalmologists (89.35%) but close to that of ophthalmologist interns (82.69%). Although ChatGPT can produce a reasoning process with a low error rate, mistakes such as misinformation (1.96%) and hallucination (0.59%) still occur.
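The F1-scores above summarize multi-class diagnostic accuracy. The abstract does not state which averaging scheme was used, so the sketch below assumes a support-weighted average of per-class F1 (a common choice for imbalanced disease classes); the toy labels are illustrative only.

```python
from collections import Counter

def f1_per_class(y_true, y_pred):
    """Per-class F1 from paired label lists (pure-Python sketch)."""
    classes = set(y_true) | set(y_pred)
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def weighted_f1(y_true, y_pred):
    """Support-weighted mean of per-class F1 (assumed averaging scheme)."""
    scores = f1_per_class(y_true, y_pred)
    support = Counter(y_true)
    n = len(y_true)
    return sum(scores[c] * support[c] / n for c in support)

# Toy example with retinal-vascular-disease-style labels (illustrative only)
truth = ["BRVO", "CRVO", "BRVO", "DR"]
pred = ["BRVO", "BRVO", "BRVO", "DR"]
print(round(weighted_f1(truth, pred), 3))  # → 0.65
```

The same scoring would be applied to ChatGPT's outputs under each prompting condition and to the clinicians' diagnoses, making the 80.05% vs. 89.35% vs. 82.69% figures directly comparable.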

Conclusions:

ChatGPT can serve as a helpful medical assistant providing diagnoses in non-English clinical environments, but performance gaps, language disparity, and errors remain compared to professionals, demonstrating its potential limitations and the need to continue exploring more robust LLMs in ophthalmology practice. Clinical Trial: ClinicalTrials.gov NCT04718532


Citation

Please cite as:

Liu X, Wu J, Shao A, Shen W, Ye P, Wang Y, Ye J, Jin K, Yang J

Uncovering Language Disparity of ChatGPT on Retinal Vascular Disease Classification: Cross-Sectional Study

J Med Internet Res 2024;26:e51926

DOI: 10.2196/51926

PMID: 38252483

PMCID: 10845019


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.