
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jul 22, 2020
Date Accepted: Nov 12, 2020
Date Submitted to PubMed: Nov 19, 2020

The final, peer-reviewed published version of this preprint can be found here:

De-Identification of Facial Features in Magnetic Resonance Images: Software Development Using Deep Learning Technology

Jeong YU, Yoo S, Kim YH, Shim WH


J Med Internet Res 2020;22(12):e22739

DOI: 10.2196/22739

PMID: 33208302

PMCID: 7759440

Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless they show as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Anonymizing Facial Features in Magnetic Resonance Images Using Deep Learning Technology

  • Yeon Uk Jeong; 
  • Soyoung Yoo; 
  • Young-Hak Kim; 
  • Woo Hyun Shim

ABSTRACT

Background:

High-resolution medical images that include facial regions can be used to recognize the subject's face when 3D-rendered images are reconstructed from 2D sequential images, which poses a risk of personal information infringement when sharing data. Under the HIPAA Privacy Rule, full-face photographic images and any comparable images are direct identifiers and are considered protected health information. Moreover, the GDPR categorizes facial images as biometric data and stipulates that special restrictions be placed on the processing of such data.

Objective:

To develop software that can remove DICOM headers and facial features (eyes, nose, and ears) at the 2D sliced-image level to anonymize personal information in medical images.

Methods:

A total of 240 cranial MR images from the ADNI database were used to train the deep-learning model (200, 20, and 20 images for the training, validation, and test sets, respectively). To overcome the small sample size, we used data augmentation to generate 1,000 images per epoch. The model was based on an attention-gated U-Net. To validate the software's performance, we used an external test set of 100 cranial MR images from the OASIS database.
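The augmentation step (expanding a small training set to 1,000 images per epoch) can be sketched as follows. This is a minimal illustration using NumPy; the specific transforms (horizontal flip, small translation, intensity jitter) and all function names here are assumptions for illustration, not the authors' actual pipeline:

```python
import numpy as np

def augment_slice(img, rng):
    """Apply simple random transforms to one 2D MR slice.

    Illustrative only: the paper does not specify its exact
    augmentation transforms.
    """
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)               # random horizontal flip
    shift = int(rng.integers(-5, 6))       # small vertical translation
    out = np.roll(out, shift, axis=0)
    out = out * rng.uniform(0.9, 1.1)      # mild intensity jitter
    return out

def make_epoch(images, n_per_epoch=1000, seed=0):
    """Sample with replacement and augment, yielding n_per_epoch slices."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(images), size=n_per_epoch)
    return [augment_slice(images[i], rng) for i in idx]

# e.g. 200 training slices expanded to 1,000 augmented slices per epoch
train = [np.random.rand(64, 64) for _ in range(200)]
epoch = make_epoch(train)
```

Sampling with replacement plus stochastic transforms is one common way to present a fixed-size augmented batch per epoch from a small dataset; other schedules (on-the-fly augmentation per batch) work equally well.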

Results:

The facial features (eyes, nose, and ears) were successfully detected and anonymized in both test sets (20 images from ADNI and 100 from OASIS). Each result was manually validated in both the 2D image plane and the 3D rendered images. After adding a user interface, we released the software, named "Deface program," for medical images as an open-source project on GitHub.

Conclusions:

We developed deep learning-based software for the anonymization of MR images that distorts the eyes, nose, and ears to prevent facial identification of the subject in reconstructed 3D images. It could be used to share medical big data for secondary research while keeping both data providers and recipients compliant with the relevant privacy regulations.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license upon publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.