
Accepted for/Published in: JMIR Pediatrics and Parenting

Date Submitted: Dec 23, 2020
Date Accepted: Jan 3, 2022

The final, peer-reviewed published version of this preprint can be found here:

Improved Digital Therapy for Developmental Pediatrics Using Domain-Specific Artificial Intelligence: Machine Learning Study

Washington P, Kalantarian H, Kent J, Husic A, Kline A, Leblanc E, Hou C, Mutlu C, Dunlap K, Penev Y, Varma M, Stockham N, Chrisman B, Paskov K, Sun MW, Jung JY, Voss C, Haber N, Wall DP

JMIR Pediatr Parent 2022;5(2):e26760

DOI: 10.2196/26760

PMID: 35394438

PMCID: 9034430

Improved Digital Therapy for Developmental Pediatrics Using Domain-Specific Artificial Intelligence: Machine Learning Study

  • Peter Washington; 
  • Haik Kalantarian; 
  • Jack Kent; 
  • Arman Husic; 
  • Aaron Kline; 
  • Emilie Leblanc; 
  • Cathy Hou; 
  • Cezmi Mutlu; 
  • Kaitlyn Dunlap; 
  • Yordan Penev; 
  • Maya Varma; 
  • Nate Stockham; 
  • Brianna Chrisman; 
  • Kelley Paskov; 
  • Min Woo Sun; 
  • Jae-Yoon Jung; 
  • Catalin Voss; 
  • Nick Haber; 
  • Dennis Paul Wall

ABSTRACT

Background:

Automated emotion classification could aid those who struggle to recognize emotion, including children with developmental behavioral conditions such as autism. However, most computer vision emotion recognition models are trained on adult affect and therefore underperform when used on child faces.

Objective:

We designed a strategy to gamify the collection and labeling of child affect data, with the aim of bringing automatic child emotion detection closer to the performance that digital health care approaches will require.

Methods:

We leveraged our prototype therapeutic smartphone game, GuessWhat, designed in large part for children with developmental and behavioral conditions, to gamify the secure collection of videos of children expressing a variety of emotions prompted by the game. Independently, we created a secure web interface, HollywoodSquares, to gamify the human labeling effort; it is tailored for use by any qualified labeler. We gathered 2,155 videos comprising 39,968 emotion frames and collected 106,001 labels for these frames. With this drastically expanded, pediatric emotion-centric database (>30x larger than existing public pediatric affect datasets), we trained a convolutional neural network (CNN) classifier of happy, sad, surprised, fearful, angry, disgusted, and neutral expressions in children.

Results:

The classifier achieved 66.9% balanced accuracy and a 67.4% F1-score on the entirety of the Child Affective Facial Expression (CAFE) dataset, as well as 79.1% balanced accuracy and a 78.0% F1-score on CAFE Subset A, a subset containing only images with at least 60% human agreement on emotion labels. This performance is at least 10% higher than that of all previously published classifiers, the best of which reached 56.0% balanced accuracy even when combining “anger” and “disgust” into a single class.
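For readers unfamiliar with the reported metrics: balanced accuracy is the unweighted mean of per-class recall (so majority classes cannot dominate the score), and the macro F1-score is the unweighted mean of per-class F1. A minimal sketch, using hypothetical labels that are illustrative only and not drawn from the study's data:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recall."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = set(y_true) | set(y_pred)
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Hypothetical labels over 3 of the 7 emotion classes.
y_true = ["happy", "sad", "angry", "happy", "sad", "angry", "happy", "sad"]
y_pred = ["happy", "sad", "happy", "happy", "angry", "angry", "happy", "sad"]
print(round(balanced_accuracy(y_true, y_pred), 3))  # → 0.722
print(round(macro_f1(y_true, y_pred), 3))           # → 0.719
```

Because every class contributes equally to both metrics, a classifier that ignores rare emotions (e.g., "fearful") is penalized, which is why these metrics are preferred over plain accuracy for imbalanced affect datasets.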

Conclusions:

This work validates that mobile games designed for pediatric therapies can generate large volumes of domain-relevant data for training state-of-the-art classifiers on tasks highly relevant to precision health efforts.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.