Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Nov 10, 2020
Date Accepted: Dec 27, 2021
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Tracking Subjective Sleep Quality and Mood with Mobile Sensing: A Multiverse Study
ABSTRACT
Background:
Sleep plays an important role in mood and mood disorders. Existing methods for tracking the quality of people's sleep are laborious and obtrusive. A method that allowed effortless and unobtrusive tracking of sleep quality would mark a significant step forward in obtaining sleep data for research and clinical applications.
Objective:
Our goal was to evaluate the potential of mobile sensing data to obtain information about a person's sleep quality. For this purpose, we investigated to what extent various automatically gathered mobile sensing features are capable of predicting (1) subjective sleep quality (SSQ), (2) negative affect (NA), and (3) depression, variables all known to be associated with objective sleep quality. Through a multiverse analysis, we examined how the predictive quality varied as a function of the selected sensor, the extracted feature, various preprocessing options, and the statistical prediction model.
Methods:
We used data from a two-week trial in which we collected both mobile sensing and experience sampling data from an initial sample of 60 participants. After cleaning the data and removing participants with poor compliance, we retained 50 participants. Mobile sensing data covered the accelerometer, charging status, light sensor, physical activity, screen activity, and Wi-Fi status. Participants were instructed to keep their smartphone charged and connected to Wi-Fi at night. We constructed one model for every combination of multiverse parameters to evaluate their effects on each of the outcome variables. We evaluated statistical models by applying them to a training, validation, and test set to prevent overfitting.
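The multiverse setup described above can be sketched as a grid of analysis "universes", one per combination of sensor, feature, preprocessing option, and model. The dimension values below are illustrative placeholders (only the sensor list comes from the text; the feature, preprocessing, and model options here are assumptions), and `evaluate_universe` is a hypothetical stub standing in for the actual fit-and-validate step:

```python
from itertools import product

# Sensors listed in the Methods section; the remaining dimensions are
# hypothetical examples, not the study's actual option sets.
sensors = ["accelerometer", "charging", "light", "activity", "screen", "wifi"]
features = ["mean", "variance", "duration"]
preprocessing = ["raw", "winsorized"]
models = ["linear", "random_forest"]

# One model is constructed per combination of multiverse parameters.
universe_grid = list(product(sensors, features, preprocessing, models))

def evaluate_universe(sensor, feature, prep, model):
    """Hypothetical stub: fit the chosen model on the training set,
    tune on the validation set, and report R^2 on the test set."""
    raise NotImplementedError

print(len(universe_grid))  # 6 * 3 * 2 * 2 = 72 universes in this sketch
```

Each universe is then scored on the held-out sets, so that model selection on the validation set does not leak into the final test-set estimate.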
Results:
The majority of models (for each of the outcome variables) were not informative on the validation set, i.e. predicted R2 ≤ 0. However, our best models achieved R2 values of 0.658, 0.779, and 0.156 (for SSQ, NA, and depression, respectively) on the training set, and 0.348, 0.103, and 0.005 on the test set.
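The "predicted R2 ≤ 0" criterion above can be made concrete: out-of-sample R² is 1 − SS_res/SS_tot, and it goes negative whenever a model predicts worse than simply using the mean of the observed outcomes. A minimal sketch (the function name and toy data are illustrative, not from the study):

```python
def r_squared(y_true, y_pred):
    """Out-of-sample R^2 = 1 - SS_res / SS_tot.
    Negative when predictions are worse than the mean of y_true."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)   # variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # prediction error
    return 1 - ss_res / ss_tot

# A model predicting worse than the mean yields R^2 < 0:
print(r_squared([1, 2, 3], [3, 3, 3]))  # → -1.5
```

Under this reading, a universe with R² ≤ 0 on the validation set carries no predictive information beyond the outcome's mean.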
Conclusions:
The approach demonstrated in this paper shows that different choices (e.g. preprocessing options, statistical models, and extracted features) lead to vastly different results. Nevertheless, some results were promising and warrant further research on this topic.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.