Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Feb 3, 2020
Date Accepted: Apr 30, 2020
Date Submitted to PubMed: May 27, 2020
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Automatic recognition, segmentation and sex assignment of nocturnal asthmatic cough and cough epochs in smartphone-based audio recordings: Results from an observational field study
ABSTRACT
Background:
Asthma is one of the most prevalent chronic respiratory diseases. Despite increased investment in treatment, little progress has been made in the early recognition and treatment of exacerbations over the last decade. Nocturnal cough monitoring may provide an opportunity to identify patients at risk of imminent exacerbations. Recently developed approaches enable smartphone-based cough monitoring. These approaches, however, have neither undergone longitudinal overnight testing nor been specifically evaluated in the context of asthma. Moreover, in contact-free audio recordings, the problem of distinguishing a partner's cough from the patient's cough when two or more people sleep in the same room remains unsolved.
Objective:
The objective of this study was to evaluate the automatic recognition and segmentation of nocturnal asthmatic cough and cough epochs in smartphone-based audio recordings collected in the field. We also aimed to distinguish partner cough from patient cough in contact-free audio recordings by assigning cough to sex.
Methods:
We used a convolutional neural network (CNN), developed in prior work, for cough recognition. We further applied techniques such as ensemble learning, mini-batch balancing, and thresholding to address the class imbalance in the dataset. We evaluated the classifier in both a classification task and a segmentation task, with cough recognition serving as the basis for segmenting coughs from continuous audio recordings. We compared automated cough and cough-epoch counts to human-annotated cough and cough-epoch counts. We employed Gaussian mixture models (GMM) to build a classifier assigning cough and cough-epoch signals to sex.
Results:
We recorded audio data from 94 adults with asthma (57% female; mean age 43 years) over 29 nights, with smartphones lying next to the bed in their everyday environment. Out of 704,697 sounds, we identified 30,304 as coughs. A total of 26,166 coughs occurred without a 2-s pause between coughs, yielding 8,238 cough epochs. The ensemble classifier performed well, with a Matthews correlation coefficient of 92% in a pure classification task, and achieved cough counts comparable to human annotators when segmenting overnight recordings. The mean difference between automated and observer cough counts was -0.1 coughs; the mean difference between automated and observer cough-epoch counts was 0.24 cough epochs. The GMM cough-epoch-based sex assignment performed best, yielding an accuracy of 83%.
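The epoch definition used above (consecutive coughs less than 2 s apart form one epoch) can be sketched as a simple grouping over sorted cough onset times. This is a hypothetical illustration of the counting rule, not the authors' code; the function name and input format are assumptions.

```python
def group_into_epochs(cough_times, max_gap=2.0):
    """Group sorted cough onset times (in seconds) into cough epochs.

    A cough joins the current epoch when it starts less than `max_gap`
    seconds after the previous cough; otherwise it opens a new epoch.
    """
    epochs = []
    for t in sorted(cough_times):
        if epochs and t - epochs[-1][-1] < max_gap:
            epochs[-1].append(t)  # same epoch: gap below 2 s
        else:
            epochs.append([t])    # new epoch: pause of 2 s or more
    return epochs

# Example: five coughs form three epochs under the 2-s rule.
print(group_into_epochs([0.0, 1.0, 5.0, 6.5, 20.0]))
```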
Conclusions:
Our study demonstrated longitudinal nocturnal cough and cough-epoch recognition from smartphone-based audio recordings collected during the everyday nights of adults with asthma. It contributes to distinguishing partner cough from patient cough in contact-free recordings by assigning cough and cough-epoch signals to the patient's sex. This research represents a step towards passive, scalable cough monitoring for adults with asthma. Clinical Trial: Trial registration number NCT03635710.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.