
Accepted for/Published in: JMIR Formative Research

Date Submitted: Feb 20, 2023
Open Peer Review Period: Feb 20, 2023 - Apr 17, 2023
Date Accepted: Apr 11, 2023

The final, peer-reviewed published version of this preprint can be found here:

Automated Diet Capture Using Voice Alerts and Speech Recognition on Smartphones: Pilot Usability and Acceptability Study

Chikwetu L, Daily S, Mortazavi BJ, Dunn J


JMIR Form Res 2023;7:e46659

DOI: 10.2196/46659

PMID: 37191989

PMCID: 10230351

Automated diet capture using voice alerts and speech recognition on smartphones: A pilot study

  • Lucy Chikwetu; 
  • Shaundra Daily; 
  • Bobak J. Mortazavi; 
  • Jessilyn Dunn

ABSTRACT

Background:

Automated diet capture is critical for supporting health through lifestyle monitoring and preventing or delaying the onset/progression of diet-related diseases such as type 2 diabetes. Advances in speech recognition technologies and natural language processing (NLP) present new possibilities for automated diet capture; however, the usability and acceptability of such technologies for food logging remain unclear.

Objective:

This study explores the usability and acceptability of speech recognition technologies and NLP for automated food logging.

Methods:

We designed and developed base2Diet, an iOS smartphone application that prompts users to log their food intake by voice or text. We conducted a two-arm, two-phase, 28-day pilot study comparing text- and voice-based diet capture (text: N=9; voice: N=9). In phase I, all participants (N=18) received reminders at preselected breakfast, lunch, and dinner times. At the start of phase II, all participants chose three times of day at which to receive daily reminders to log their food intake for the remainder of the phase, and they could change the selected notification times at any point.

Results:

The total number of distinct food logging events per participant was 1.7 times higher in the voice arm than in the text arm (P=.03, unpaired t-test), and the total number of active days per participant was 1.5 times higher in the voice arm than in the text arm (P=.04, unpaired t-test). Attrition was also higher in the text arm: five participants dropped out of the study in the text arm, compared with only one in the voice arm.

Conclusions:

The results of this pilot study show that voice technologies carry substantial promise for automated diet capture on smartphones.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.