
Accepted for/Published in: JMIR mHealth and uHealth

Date Submitted: Jun 28, 2019
Open Peer Review Period: Jul 2, 2019 - Aug 27, 2019
Date Accepted: Dec 16, 2019

The final, peer-reviewed published version of this preprint can be found here:

Volumetric Food Quantification Using Computer Vision on a Depth-Sensing Smartphone: Preclinical Study

Herzig D, Nakas CT, Stalder J, Kosinski C, Laesser C, Dehais J, Jaeggi R, Leichtle AB, Dahlweid FM, Stettler C, Bally L

Volumetric Food Quantification Using Computer Vision on a Depth-Sensing Smartphone: Preclinical Study

JMIR Mhealth Uhealth 2020;8(3):e15294

DOI: 10.2196/15294

PMID: 32209531

PMCID: 7142738

Automated Quantification of Macronutrients Using Computer Vision on a Depth-Sensing Smartphone

  • David Herzig; 
  • Christos T Nakas; 
  • Janine Stalder; 
  • Christophe Kosinski; 
  • Céline Laesser; 
  • Joachim Dehais; 
  • Raphael Jaeggi; 
  • Alexander Benedikt Leichtle; 
  • Fried-Michael Dahlweid; 
  • Christoph Stettler; 
  • Lia Bally

ABSTRACT

Background:

Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging, laborious, and lack accuracy. The recent advent of depth-sensing smartphones, in conjunction with computer vision, has the potential to enable reliable quantification of food intake.

Objective:

To evaluate the accuracy of a novel smartphone application combining depth-sensing hardware with computer vision to quantify meal macronutrient content.

Methods:

The application ran on a smartphone with a built-in structured-light depth sensor (iPhone X) and estimated the weight, macronutrient (carbohydrate, protein, fat), and energy content of 48 randomly chosen meals (meal types: breakfast, cooked meals, snacks) encompassing 128 food items. Reference weights were generated by weighing the individual food items on a precision scale. The study endpoints were fourfold: i) error of the estimated meal weight; ii) error of the estimated meal macronutrient and energy content; iii) segmentation performance; and iv) processing time.
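The abstract does not detail the volumetric step, but the general idea of recovering food volume from a structured-light depth map can be sketched as follows. Everything here is an illustrative assumption rather than the authors' implementation: the function name, the flat empty-plate reference surface, and the fixed per-pixel ground area are all simplifications.

```python
import numpy as np

def food_volume_cm3(depth_map, plate_depth, mask, pixel_area_cm2):
    """Approximate food volume by integrating the height of the food
    surface above an empty-plate reference over the segmented pixels.

    depth_map      : 2-D array of sensor-to-surface distances (cm)
    plate_depth    : 2-D array of sensor-to-plate distances (cm)
    mask           : boolean 2-D array marking food pixels (segmentation)
    pixel_area_cm2 : ground area covered by one pixel (cm^2), assumed constant
    """
    # Food rises toward the sensor, so its depth is smaller than the plate's;
    # clip negative heights caused by sensor noise.
    height = np.clip(plate_depth - depth_map, 0.0, None)
    return float(np.sum(height[mask]) * pixel_area_cm2)

# Toy example: a 2x2-pixel food patch rising 1 cm above the plate.
plate = np.full((4, 4), 30.0)   # plate surface 30 cm from the sensor
depth = plate.copy()
depth[1:3, 1:3] = 29.0          # food surface is 1 cm closer
mask = depth < plate            # trivial segmentation for the toy case
print(food_volume_cm3(depth, plate, mask, pixel_area_cm2=0.25))  # → 1.0
```

A volume obtained this way would still need per-food-item density and composition tables to yield the weight and macronutrient estimates the study reports.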

Results:

The mean±SD absolute error of the application's estimates was 35.1±42.8 g (14.0±12.2%) for weight, 5.5±5.1 g (14.8±10.9%) for carbohydrate content, 2.4±5.6 g (13.0±13.8%) for protein content, 1.3±1.7 g (12.3±12.8%) for fat content, and 41.2±42.5 kcal (12.7±10.8%) for energy content. While estimation accuracy was not affected by the viewing angle, it did depend on the type of meal, with slightly worse performance for cooked meals than for breakfasts and snacks. Segmentation required adjustment for 7 of the 128 food items. The mean±SD processing time across all meals was 22.9±8.6 s.
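The accuracy figures above are mean±SD summaries of absolute and relative errors against the scale-weighed reference. A minimal sketch of how such a summary is computed (the function and the sample values are hypothetical, not study data):

```python
import numpy as np

def error_summary(estimated, reference):
    """Mean and SD of absolute error (same units as input) and of
    relative error (% of reference), as reported per study endpoint."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    abs_err = np.abs(est - ref)
    rel_err = 100.0 * abs_err / ref
    # ddof=1: sample standard deviation
    return (abs_err.mean(), abs_err.std(ddof=1),
            rel_err.mean(), rel_err.std(ddof=1))

# Hypothetical estimated vs. weighed carbohydrate content (g) for three meals.
mean_abs, sd_abs, mean_rel, sd_rel = error_summary([52, 30, 41], [50, 33, 40])
print(f"{mean_abs:.1f}±{sd_abs:.1f} g ({mean_rel:.1f}±{sd_rel:.1f}%)")  # → 2.0±1.0 g (5.2±3.5%)
```

Whether the study used the sample (ddof=1) or population SD is not stated in the abstract; ddof=1 is the conventional choice for a sample of meals.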

Conclusions:

The present study evaluated the accuracy of a novel smartphone application with an integrated depth-sensing camera and found high estimation accuracy across all macronutrients. This was paralleled by strong segmentation performance and short processing times, supporting the usability of the system.




© The authors. All rights reserved. This is a privileged document currently under peer/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC-BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.