Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Oct 13, 2019
Date Accepted: Mar 19, 2020
Date Submitted to PubMed: Mar 19, 2020
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Autoencoder: a new method for keeping data privacy when analyzing videos of patients with motor dysfunction - a proof-of-concept study
ABSTRACT
Background:
In chronic neurological diseases, especially multiple sclerosis (MS), clinical assessment of motor dysfunction is crucial for monitoring the patient's disease course. Traditional scales such as the Expanded Disability Status Scale (EDSS) are not sensitive enough to detect subtle changes in motor performance. Video recordings of patient performance are more accurate, increase the reliability of severity ratings, and enable automated, quantitative analysis of motor performance with machine learning algorithms. Developing these algorithms usually involves non-healthcare professionals, which poses a data-privacy challenge. Autoencoders embed visual information into a lower-dimensional latent space that preserves the information needed for algorithm development but is not visually interpretable by humans. They consist of an encoder that encodes videos into a sequence of coded frame vectors and a paired decoder that transforms the coded frame vectors back into the original video. Videos encoded in this way can be shared with non-medical collaborators.
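Conceptually, the encoder/decoder pair can be sketched as a minimal linear autoencoder trained by gradient descent. This is an illustrative toy only: the study's actual network architecture is not described in the abstract, and the 6-dimensional "frames" below stand in for real video frames. The sketch shows the two components named above: an encoder that maps each frame to a low-dimensional coded frame vector, and a paired decoder that reconstructs the frame from that vector.

```python
import random

random.seed(0)

# Toy "frames": 6-dimensional vectors lying on a 2-D subspace, standing in
# for video frames (illustrative assumption; real frames are images).
def make_frame():
    a, b = random.random(), random.random()
    return [a, b, a, b, a, b]

frames = [make_frame() for _ in range(200)]

D, K = 6, 2   # frame dimension, latent ("coded frame vector") dimension
lr = 0.01     # learning rate

# Encoder and decoder weight matrices, randomly initialised.
W_enc = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(K)]
W_dec = [[random.uniform(-0.5, 0.5) for _ in range(K)] for _ in range(D)]

def encode(x):
    # Coded frame vector: only this (not the raw frame) would be shared.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_enc]

def decode(z):
    # Reconstruction; requires the decoder weights, which are kept private.
    return [sum(w * zi for w, zi in zip(row, z)) for row in W_dec]

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

# Train both parts jointly on the reconstruction error (plain SGD;
# the factor 2 from the squared-error derivative is folded into lr).
for epoch in range(300):
    for x in frames:
        z = encode(x)
        err = [r - t for r, t in zip(decode(z), x)]
        # Encoder gradient, computed with the current decoder weights.
        g_enc = [sum(err[i] * W_dec[i][k] for i in range(D)) for k in range(K)]
        for i in range(D):
            for k in range(K):
                W_dec[i][k] -= lr * err[i] * z[k]
        for k in range(K):
            for j in range(D):
                W_enc[k][j] -= lr * g_enc[k] * x[j]

avg_err = sum(mse(decode(encode(x)), x) for x in frames) / len(frames)
print(round(avg_err, 4))  # reconstruction error after training
```

The privacy argument maps onto this split: collaborators receive only the outputs of encode(), which are not human-interpretable, while decode() stays with the data owner.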
Objective:
The aim of this proof-of-concept study was to test whether the coded frame vectors of autoencoders contain the information needed to analyse videos of patient movements whilst preserving data privacy.
Methods:
In this study, 20 pre-rated videos of patients performing the finger-to-nose test were recorded. An autoencoder created coded frame vectors from the original videos and decoded them back into videos. Original and decoded videos were shown to 10 neurologists at an academic MS centre in Basel, Switzerland. The neurologists assessed whether the 200 videos presented in total were human-readable and rated the severity grade of each video according to the Neurostatus-EDSS definitions of limb ataxia. Furthermore, we tested whether ratings were equivalent between original and decoded videos.
Results:
Of the 200 videos presented, 172 (86%) could still be rated after decoding of the video data. The intra-rater agreement between original and decoded videos was 0.317 (Cohen's weighted kappa), with an average difference of 0.26 grades (originals rated as more severe). The inter-rater agreement was 0.459 (kappa) before coding of the videos and 0.302 (kappa) after decoding.
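The agreement statistic reported above, Cohen's weighted kappa, can be computed from two raters' ordinal scores. A minimal sketch follows, using quadratic disagreement weights (an assumption; the abstract does not state which weighting scheme the study used) and hypothetical ratings on a 0-4 ataxia scale that are not the study's data:

```python
def weighted_kappa(r1, r2, categories):
    """Cohen's weighted kappa with quadratic weights.

    r1, r2: paired ratings from two sources (e.g. original vs decoded video).
    categories: ordered list of possible ordinal scores.
    """
    n = len(r1)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Observed joint distribution of rating pairs.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # Marginal distributions; their product is the chance-expected distribution.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Quadratic disagreement weights: 0 on the diagonal, 1 at maximal disagreement.
    w = [[((i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

# Hypothetical ratings on a 0-4 scale (illustrative only, not study data).
orig_ratings    = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
decoded_ratings = [0, 1, 2, 3, 3, 3, 1, 1, 2, 2]
print(round(weighted_kappa(orig_ratings, decoded_ratings, [0, 1, 2, 3, 4]), 3))  # → 0.841
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is the scale on which the reported values of 0.317, 0.459, and 0.302 should be read.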
Conclusions:
Although larger studies and further improvement of the autoencoder are needed, this proof-of-concept study is a first step toward a promising method that enables the use of patient videos while preserving data privacy, especially when non-healthcare professionals are involved. Our findings suggest that an autoencoder provides a level of security similar to conventional encryption (assuming the decoder is not shared), particularly for automated, machine-learning-based analysis of patient videos.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.