Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 24, 2024
Open Peer Review Period: Apr 1, 2024 - May 27, 2024
Date Accepted: Nov 25, 2024
Identification of Intracranial Germ Cell Tumors Based on Facial Photo: Software Development Using Deep Learning Technology
ABSTRACT
Background:
Primary intracranial germ cell tumors (iGCTs) are highly malignant brain tumors that predominantly occur in children and adolescents, with an incidence rate ranking third among primary brain tumors in East Asia (8%-15%). Due to their insidious onset and impact on critical functional areas of the brain, these tumors often result in irreversible abnormalities in growth and development, as well as cognitive and motor impairments, in affected children. Therefore, early diagnosis through advanced screening techniques is vital for improving patient outcomes and quality of life.
Objective:
This study investigates the application of facial recognition technology for the early detection of intracranial germ cell tumors (iGCTs) in children and adolescents.
Methods:
A multicenter, phased approach was adopted to develop and validate a deep learning model, GVisageNet, dedicated to distinguishing midline brain tumors from normal controls (NCs) and iGCTs from other midline brain tumors. Datasets were collected and divided into a training set (n=847; iGCTs=358, NCs=300, other midline brain tumors=189) and a testing set (n=212; iGCTs=79, NCs=70, other midline brain tumors=63), with an additional independent validation dataset (n=336; iGCTs=130, NCs=100, other midline brain tumors=106) sourced from four medical institutions. A regression model built on clinically relevant, statistically significant variables was developed and combined with GVisageNet outputs to create a hybrid model, allowing assessment of the incremental value of clinical data. The model's predictive mechanisms were explored through correlation analyses with endocrine indicators and stratified evaluations based on the degree of hypothalamic-pituitary-target (HPT) axis damage. Performance metrics included area under the curve (AUC), accuracy, sensitivity, and specificity.
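The hybrid-model idea described above can be sketched in code. The paper does not publish an implementation, so everything below is an illustrative assumption: synthetic data stand in for real cases, `gvisagenet_prob` stands in for the network's output probability, and a plain logistic regression stands in for the clinical regression model.

```python
# Illustrative sketch only; variable names and data are hypothetical,
# not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
# Simulated labels: 1 = iGCT, 0 = other midline brain tumor.
y = rng.integers(0, 2, n)
# Simulated deep-model probability, loosely informative about y.
gvisagenet_prob = np.clip(0.5 + 0.25 * (y - 0.5) + rng.normal(0, 0.2, n), 0, 1)
# Simulated clinical covariates (e.g. endocrine indicators), shifted by class.
clinical = rng.normal(0, 1, (n, 2)) + 0.5 * y[:, None]

# Hybrid model: feed the network's probability and the clinical
# covariates into a single logistic regression.
X = np.column_stack([gvisagenet_prob, clinical])
hybrid = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, hybrid.predict_proba(X)[:, 1])
```

In this setup the hybrid AUC can be compared against the AUC of the clinical-only regression to quantify the incremental value of each information source, which is the comparison the abstract reports.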
Results:
On the independent validation dataset, GVisageNet achieved an AUC of 0.938 (P<.01) in distinguishing midline brain tumors from NCs. GVisageNet also demonstrated significant diagnostic capability in distinguishing iGCTs from other midline brain tumors, achieving an AUC of 0.739, superior to the regression model alone (AUC=0.632; P<.001) but lower than that of the hybrid model (AUC=0.789; P=.04). Significant correlations were found between GVisageNet's outputs and seven endocrine indicators. Performance varied with the degree of HPT axis damage, offering insight into the working mechanism of GVisageNet.
Conclusions:
GVisageNet, which achieves high accuracy both independently and in combination with clinical data, shows substantial potential for early iGCT detection, highlighting the value of combining deep learning with clinical insights for personalized healthcare.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.