
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Oct 21, 2020
Date Accepted: Jun 22, 2021

The final, peer-reviewed published version of this preprint can be found here:

Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study

Betzler BK, Hee HSY, Thakur S, Yu M, Quek TC, Soh ZD, Lee G, Tham YC, Wong TY, Rim TH, Cheng CY

Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study

JMIR Med Inform 2021;9(8):e25165

DOI: 10.2196/25165

PMID: 34402800

PMCID: 8408758

Gender prediction via deep learning across different retinal fundus photograph fields: a multi-ethnic study

  • Bjorn Kaijun Betzler; 
  • Henrik Seung Yang Hee; 
  • Sahil Thakur; 
  • Marco Yu; 
  • Ten Cheer Quek; 
  • Zhi Da Soh; 
  • Geunyoung Lee; 
  • Yih-Chung Tham; 
  • Tien Yin Wong; 
  • Tyler Hyungtaek Rim; 
  • Ching-Yu Cheng

ABSTRACT

Background:

Deep learning (DL) algorithms have been built to detect systemic and eye diseases from retinal photographs. The retina possesses features that can differ by gender, and the extent to which these features are captured in a photograph depends on the retinal image field.

Objective:

To compare the performance of DL algorithms in predicting gender from different fields of retinal photographs (disc-centered, macula-centered, and peripheral).

Methods:

This retrospective cross-sectional study included 172,170 retinal photographs from 9956 adults aged ≥40 years in the Singapore Epidemiology of Eye Diseases (SEED) Study. Optic disc-centered, macula-centered, and peripheral-field retinal fundus images served as input to a DL model for gender prediction. Performance was estimated at the individual level and at the image level, and receiver operating characteristic (ROC) curves for binary classification were calculated.
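The distinction between image-level and individual-level evaluation can be sketched as follows. This is an illustrative example, not the authors' code: the data, the rank-based AUC helper, and the choice to aggregate a person's per-image scores by their mean are all assumptions for demonstration (the abstract does not specify the aggregation rule used).

```python
# Illustrative sketch (not the study's actual pipeline): AUC computed at the
# image level vs. the individual level, where each person's score is assumed
# to be the mean of that person's per-image predictions.

def auc(labels, scores):
    """Rank-based AUC: probability a positive outranks a negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: (person_id, true_label, model_score) for each image.
images = [
    ("A", 1, 0.9), ("A", 1, 0.7),
    ("B", 0, 0.2), ("B", 0, 0.4),
    ("C", 1, 0.6), ("C", 1, 0.3),
    ("D", 0, 0.5),
]

# Image level: every photograph counts as one observation.
image_auc = auc([y for _, y, _ in images], [s for _, _, s in images])

# Individual level: average each person's image scores, then score per person.
by_person = {}
for pid, y, s in images:
    by_person.setdefault(pid, (y, []))[1].append(s)
person_labels = [y for y, _ in by_person.values()]
person_scores = [sum(ss) / len(ss) for _, ss in by_person.values()]
individual_auc = auc(person_labels, person_scores)
```

Averaging over a person's images pools evidence across fields, which is one plausible reason individual-level AUC can exceed image-level AUC, as reported in the Results below.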

Results:

The DL algorithms predicted gender with an area under the ROC curve (AUC) of 0.94 at the individual level and 0.87 at the image level. Across the three image fields, disc-centered images performed best (AUC 0.91 in younger and 0.86 in older age subgroups), while peripheral-field images performed worst (AUC 0.85 in younger and 0.76 in older subgroups). Among the three ethnic subgroups, performance on disc-centered images was lowest in the Indian subgroup (AUC 0.88) compared with the Malay (AUC 0.91) and Chinese (AUC 0.91) subgroups. Image-level gender prediction was better in the younger age subgroup (<65 years; AUC 0.89) than in the older age subgroup (≥65 years; AUC 0.82).

Conclusions:

We confirmed that gender can be predicted from retinal photographs using DL in an Asian population, and that prediction performance differs by retinal photograph field, age subgroup, and ethnic group. Our work furthers the understanding of DL models for predicting gender-related diseases. Further validation of these findings is still needed.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.