
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Mar 17, 2021
Date Accepted: May 3, 2021

The final, peer-reviewed published version of this preprint can be found here:

Kang EYC, Yeung L, Lee YL, Wu CH, Peng SY, Chen YP, Gao QZ, Lin CH, Kuo CF, Lai CC

A Multimodal Imaging–Based Deep Learning Model for Detecting Treatment-Requiring Retinal Vascular Diseases: Model Development and Validation Study

JMIR Med Inform 2021;9(5):e28868

DOI: 10.2196/28868

PMID: 34057419

PMCID: 8204240

A Multimodal Imaging–Based Deep Learning Model for Detecting Treatment-Requiring Retinal Vascular Diseases: Model Development and Validation Study

  • Eugene Yu-Chuan Kang; 
  • Ling Yeung; 
  • Yi-Lun Lee; 
  • Cheng-Hsiu Wu; 
  • Shu-Yen Peng; 
  • Yueh-Peng Chen; 
  • Quan-Ze Gao; 
  • Chi-Hung Lin; 
  • Chang-Fu Kuo; 
  • Chi-Chun Lai

ABSTRACT

Background:

Retinal vascular diseases, including diabetic macular edema (DME), neovascular age-related macular degeneration (nAMD), myopic choroidal neovascularization (mCNV), and branch and central retinal vein occlusion (BRVO/CRVO), are considered vision-threatening eye diseases. However, accurate diagnosis depends on multimodal imaging and the expertise of retinal ophthalmologists.

Objective:

To develop a deep learning model to detect treatment-requiring retinal vascular diseases using multimodal imaging.

Methods:

This retrospective study enrolled participants with multimodal ophthalmic imaging data from three hospitals in Taiwan from 2013 to 2019. Eye-related images were used, including those obtained through retinal fundus photography, optical coherence tomography (OCT), and fluorescein angiography with or without indocyanine green angiography (FA/ICGA). A deep learning model was constructed for detecting DME, nAMD, mCNV, BRVO, and CRVO and for identifying treatment-requiring diseases. Model performance was evaluated and presented as the area under the receiver operating characteristic curve (AUC).
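The per-disease AUCs reported below are computed from the model's predicted probabilities against ground-truth labels. As a minimal sketch of that metric (using the Mann-Whitney formulation with hypothetical labels and scores, not the authors' data or code), the computation looks like:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case outranks a
    randomly chosen negative case (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical ground truth (1 = disease present) and model scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(round(auc(labels, scores), 3))  # 0.889
```

An AUC of 1.0 means the model ranks every diseased eye above every control eye; the values near 0.99 reported in the Results indicate almost-perfect ranking.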

Results:

A total of 2992 eyes of 2185 patients were studied, with 239, 1209, 1008, 211, 189, and 136 eyes in the control, DME, nAMD, mCNV, BRVO, and CRVO groups, respectively. Among them, 1898 eyes required treatment. The eyes were divided into training, validation, and testing sets in a 5:1:1 ratio. In total, 5117 retinal fundus photographs, 9316 OCT images, and 20 922 FA/ICGA images were used. The AUCs for detecting mCNV, DME, nAMD, BRVO, and CRVO were 0.996, 0.995, 0.990, 0.959, and 0.988, respectively. The AUC for detecting treatment-requiring diseases was 0.969. From the heat maps, we observed that the model could identify retinal vascular diseases.
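The abstract states only the 5:1:1 training/validation/testing ratio, not the exact splitting procedure. A hypothetical helper illustrating one way such a split could be produced (a shuffled split by eye; the function name and seed are assumptions, not the authors' method):

```python
import random

def split_5_1_1(items, seed=42):
    """Shuffle items and split them into training, validation, and
    testing sets in an approximate 5:1:1 ratio (one-seventh each for
    validation and testing, the remainder for training)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_val = n // 7
    n_test = n // 7
    n_train = n - n_val - n_test
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# With 2992 eyes this yields roughly 2138 / 427 / 427
train, val, test = split_5_1_1(range(2992))
print(len(train), len(val), len(test))  # 2138 427 427
```

In practice, ophthalmic studies often split at the patient level rather than the eye level so that two eyes of the same patient never straddle the training/testing boundary; the abstract does not specify which was done here.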

Conclusions:

Our study developed a deep learning model to detect retinal diseases using multimodal ophthalmic imaging. Furthermore, the model demonstrated good performance in detecting treatment-requiring retinal diseases.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.