
Accepted for/Published in: JMIR Formative Research

Date Submitted: Feb 17, 2020
Open Peer Review Period: Feb 17, 2020 - Mar 23, 2020
Date Accepted: Apr 19, 2020
Date Submitted to PubMed: May 27, 2020

The final, peer-reviewed published version of this preprint can be found here:

O'Donovan R, Sezgin E, Bambach S, Butter E, Lin S

Detecting Screams From Home Audio Recordings to Identify Tantrums: Exploratory Study Using Transfer Machine Learning

JMIR Form Res 2020;4(6):e18279

DOI: 10.2196/18279

PMID: 32459656

PMCID: 7327591

Detecting screams from home audio recordings to identify tantrums: a feasibility study using transfer machine learning

  • Rebecca O'Donovan; 
  • Emre Sezgin; 
  • Sven Bambach; 
  • Eric Butter; 
  • Simon Lin

ABSTRACT

Background:

Qualitative self- or parent-reports used to assess children's behavioral disorders are often inconvenient to collect and can be misleading due to missing information. A data-driven approach to quantifying behavioral disorders could alleviate these concerns. This study proposes a machine learning approach to identifying screams in voice recordings that avoids the need to gather large amounts of clinical data for model training.

Objective:

The goal of this study is to evaluate whether a machine learning model trained only on publicly available audio datasets can detect screaming sounds in audio streams captured in an at-home setting.

Methods:

Two sets of audio samples were prepared to evaluate the model: a subset of the publicly available AudioSet dataset, and a set of audio data extracted from the TV show Supernanny, which was chosen for its similarity to clinical data. Scream events were manually annotated for the Supernanny data and existing annotations were refined for the AudioSet data. Audio feature extraction was performed with a convolutional neural network pre-trained on AudioSet. A gradient-boosted tree model was trained and cross-validated for scream classification on the AudioSet data and then validated independently on the Supernanny audio.
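The pipeline described above (embeddings from a CNN pretrained on AudioSet, fed to a cross-validated gradient-boosted tree classifier) can be sketched as follows. This is an illustrative stand-in, not the authors' code: random vectors substitute for the real CNN embeddings, and scikit-learn's `GradientBoostingClassifier` stands in for whichever tree implementation the authors used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for CNN-derived audio embeddings: one fixed-length feature vector
# per short audio frame. Real code would run each frame through the
# pretrained network instead of sampling random vectors.
n_frames, dim = 400, 128
X = rng.normal(size=(n_frames, dim))
y = rng.integers(0, 2, size=n_frames)  # 1 = scream frame, 0 = other

# Shift the positive class so the toy classifier has signal to learn.
X[y == 1] += 1.0

# Gradient-boosted trees, evaluated with 5-fold cross-validated ROC-AUC,
# mirroring the evaluation strategy described in the Methods.
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(round(scores.mean(), 2))
```

In the study's setup, the model cross-validated this way on AudioSet would then be validated once more on held-out Supernanny audio.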

Results:

On the held-out AudioSet clips, the model achieved an ROC-AUC of 0.86. Applied to three full episodes of Supernanny audio, the same model achieved an ROC-AUC of 0.95 and an average precision (positive predictive value) of 42%, despite screams making up only 1.3% of the total runtime.
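For reference, both metrics reported here can be computed directly from per-frame scores. A minimal NumPy sketch on synthetic data with a similar class imbalance (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def roc_auc(y_true, scores):
    # AUC via the rank (Mann-Whitney) formulation: the probability that a
    # randomly chosen positive outscores a randomly chosen negative.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(y_true, scores):
    # Mean of the precision values at each true positive, scanning
    # predictions from highest to lowest score.
    order = np.argsort(-scores)
    hits = y_true[order]
    precision = np.cumsum(hits) / np.arange(1, len(hits) + 1)
    return (precision * hits).sum() / hits.sum()

# Toy data with roughly the class imbalance reported above (~1.3% positives).
rng = np.random.default_rng(1)
y = np.zeros(1000, dtype=int)
y[:13] = 1
scores = rng.normal(size=1000)
scores[y == 1] += 2.0  # positive frames tend to score higher

auc = roc_auc(y, scores)
ap = average_precision(y, scores)
print(round(auc, 2), round(ap, 2))
```

The contrast between a high AUC and a much lower average precision on such imbalanced data is expected: AUC is insensitive to class imbalance, while average precision is penalized by every false positive among the rare positives.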

Conclusions:

These results suggest that a scream-detection model trained on publicly available data could be valuable for monitoring clinical recordings and identifying tantrums, avoiding the need to collect costly, privacy-protected clinical data for model training.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.