
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Nov 27, 2019
Date Accepted: Mar 5, 2020

The final, peer-reviewed published version of this preprint can be found here:

Identification of the Facial Features of Patients With Cancer: A Deep Learning–Based Pilot Study

Liang B

Identification of the Facial Features of Patients With Cancer: A Deep Learning–Based Pilot Study

J Med Internet Res 2020;22(4):e17234

DOI: 10.2196/17234

PMID: 32347802

PMCID: 7221634

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

A study on the facial features of cancer patients

  • Bin Liang

ABSTRACT

Background:

Cancer has become the second leading cause of death globally. Most cancer cases are due to genetic mutations, which in turn affect metabolism and can further result in facial changes.

Objective:

In this study, we aimed to identify the facial features of patients with cancer using deep learning techniques.

Methods:

A total of 8124 face images of patients with cancer were collected to build the cancer face image dataset. The non-cancer face image dataset was built by random selection from the publicly available MegaAge dataset, matched to the sex and age distribution of the cancer face image dataset. Each face image was preprocessed to obtain an upright, centered face chip, and the background was filtered out to exclude the effect of irrelevant factors. A residual network (ResNet) was constructed to classify cancer and non-cancer cases. Transfer learning, mini-batch training with few epochs, L2 regularization, and random dropout were used to prevent overfitting. Guided gradient-weighted class activation mapping (Grad-CAM) was used to reveal the relevant facial features.
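The Grad-CAM step above can be sketched in a few lines. This is an illustrative NumPy re-implementation of the standard Grad-CAM computation, not the authors' code; the guided-backpropagation refinement they combine it with is omitted, and the input arrays stand in for the last convolutional layer's activations and gradients for one image:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Plain Grad-CAM for one image.

    feature_maps, gradients: (K, H, W) arrays holding the last
    convolutional layer's activations and the gradients of the
    class score with respect to those activations.
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # Weighted sum of the feature maps over channels, then ReLU.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for overlaying on the face image
    # (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting heatmap is upsampled to the input resolution and overlaid on the face chip, which is how region-level attributions such as "facial skin" versus "complementary face region" can be read off.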

Results:

Both the male and female cancer average faces displayed more obvious facial adiposity than the non-cancer average faces, which was supported by the landmark comparison. The training process terminated after 5 epochs. On the test dataset, the area under the receiver operating characteristic (ROC) curve (AUC) was 0.94 and the accuracy was 0.82. The relevant features of cancer cases were mainly located in the facial skin, while the relevant features of non-cancer cases were extracted from the complementary face region.
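The reported AUC of 0.94 can be read as the probability that the classifier scores a randomly chosen cancer face above a randomly chosen non-cancer face. A minimal sketch of that rank-based definition (illustrative code, not the authors' evaluation script):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case gets the
    higher score; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike the 0.82 accuracy, which depends on a single decision threshold, this pairwise-ranking view is threshold-free, which is why both numbers are reported.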

Conclusions:

In this study, we built a face image dataset of patients with cancer and constructed a deep learning model to classify cancer and non-cancer faces. We found that facial skin and adiposity were closely related to cancer status.



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.