Prediction of Sleep Stages Using Smartphone Audio Recordings in Home Environments: Development and Validation
Hai H. Tran;
Jung Kyung Hong;
Hyeryung Jang;
Jinhwan Jung;
Jongmok Kim;
Joonki Hong;
Minji Lee;
Jeong-Whun Kim;
Clete A. Kushida;
Dongheon Lee;
Daewoo Kim;
In-Young Yoon
ABSTRACT
Background:
With a growing interest in sleep monitoring at home, sound-based sleep staging using deep learning has emerged and been validated using in-laboratory sounds. However, validation in noisy home environments has not been carried out despite its importance.
Objective:
To develop and validate a deep learning method to perform sleep staging using smartphone audio recordings in uncontrolled home environments.
Methods:
Model training consisted of three components: (i) supervised learning using 812 pairs of in-laboratory polysomnography (PSG) and audio recordings, (ii) transfer learning from hospital to home sounds by adding 829 smartphone audio recordings made at home, and (iii) consistency training using augmented in-laboratory sound data. Augmented data were created by mixing 8,255 home noise recordings into the in-laboratory audio recordings. To examine the performance of the trained model, an independent dataset was built from matched level 2 PSG and smartphone audio recordings collected at home from 45 individuals.
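The augmentation and consistency-training idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the SNR-based mixing, the 16 kHz clip length, and the use of a KL divergence between the model's predictions on clean and noise-augmented audio are all illustrative assumptions, and `augment_with_noise` and `consistency_loss` are hypothetical helper names.

```python
import numpy as np

def augment_with_noise(clean, noise, snr_db=5.0):
    """Mix a home-noise clip into a lab recording at a target SNR (dB).
    Illustrative assumption: the paper does not specify the mixing scheme."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that clean/noise power ratio equals snr_db.
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

def consistency_loss(p_clean, p_aug, eps=1e-12):
    """KL(p_clean || p_aug): penalizes the model when its sleep-stage
    distribution drifts between clean and noise-augmented input."""
    p_clean = np.clip(p_clean, eps, 1.0)
    p_aug = np.clip(p_aug, eps, 1.0)
    return float(np.sum(p_clean * np.log(p_clean / p_aug)))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # 1 s of lab audio at an assumed 16 kHz
noise = rng.standard_normal(16000)   # one home-noise clip
aug = augment_with_noise(clean, noise, snr_db=5.0)

# Identical stage distributions (wake / REM / non-REM) give zero loss;
# divergent ones give a positive penalty to minimize during training.
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.6, 0.2, 0.2])
same_loss = consistency_loss(p, p)
diff_loss = consistency_loss(p, q)
```

In practice, `p_clean` and `p_aug` would come from the sleep-staging network's softmax outputs on the paired clean and augmented clips, so the consistency term pushes the model toward noise-invariant predictions.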
Results:
The accuracy of the model was 76.2% (63.4% for wake, 64.9% for rapid eye movement [REM] sleep, and 83.6% for non-REM sleep). The macro F1 score and mean per-class sensitivity were 0.714 and 0.706, respectively. Performance was robust across subgroups defined by age, gender, body mass index, and sleep apnea severity (accuracy 73.4%-79.4%). Transfer learning and consistency training improved accuracy by 7%.
Conclusions:
This study shows that sound-based sleep staging using smartphones in noisy home environments is feasible. People can easily monitor their sleep at home using their own smartphones, without any additional device. Clinical Trial: N/A
Citation
Please cite as:
Tran HH, Hong JK, Jang H, Jung J, Kim J, Hong J, Lee M, Kim JW, Kushida CA, Lee D, Kim D, Yoon IY
Prediction of Sleep Stages Via Deep Learning Using Smartphone Audio Recordings in Home Environments: Model Development and Validation