
Accepted for/Published in: JMIR mHealth and uHealth

Date Submitted: May 21, 2024
Open Peer Review Period: May 27, 2024 - Jul 22, 2024
Date Accepted: Jun 6, 2025

The final, peer-reviewed published version of this preprint can be found here:

Automatic Image Recognition Meal Reporting Among Young Adults: Randomized Controlled Trial

Sahoo PK, Chiu SYH, Lin YS, Chen CH, Irianti D, Chen HY, Sarkar M, Liu YC

Automatic Image Recognition Meal Reporting Among Young Adults: Randomized Controlled Trial

JMIR Mhealth Uhealth 2025;13:e60070

DOI: 10.2196/60070

PMID: 40811729

PMCID: 12352700

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Automatic Image Recognition Meal Reporting among Young Adults: A Randomized Controlled Trial

  • Prasan Kumar Sahoo; 
  • Sherry Yueh-Hsia Chiu; 
  • Yu-Sheng Lin; 
  • Chien-Hung Chen; 
  • Denisa Irianti; 
  • Hsin-Yun Chen; 
  • Mekhla Sarkar; 
  • Ying-Chieh Liu

ABSTRACT

Background:

Advances in artificial intelligence (AI) technology have opened new possibilities for the effective evaluation of daily dietary intake, but more empirical study of such technologies under realistic meal scenarios is needed. This study developed an automated food recognition technology, which was integrated into our previously developed app to improve usability for meal reporting. The new app automatically detects and recognizes multiple dishes within a single real-time food image given as input. App performance was tested with young adults under authentic dining conditions.
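For illustration, multi-dish detection in a single meal photo is typically built on an object detection model whose detected regions are mapped to dish labels. The sketch below is not the authors' implementation; it assumes a generic pretrained detector from torchvision as a stand-in, and the dish-label mapping (DISH_LABELS) is hypothetical:

    # Illustrative sketch only; the study's actual model, training data, and
    # dish vocabulary are not described in this abstract.
    import torch
    from PIL import Image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    DISH_LABELS = {1: "rice", 2: "braised pork", 3: "stir-fried greens"}  # hypothetical

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # generic stand-in detector

    def detect_dishes(image_path, score_threshold=0.7):
        # Return (label, confidence, bounding box) for each dish-like region
        # detected in one meal photo.
        image = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            output = model([image])[0]
        return [
            (DISH_LABELS.get(int(lbl), "unknown"), float(score), box.tolist())
            for lbl, score, box in zip(output["labels"], output["scores"], output["boxes"])
            if float(score) >= score_threshold
        ]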

Objective:

A two-group comparative study was conducted to assess app performance on metrics including accuracy, efficiency, and user perception. The experimental group, the Automatic Image-based Reporting (AIR) group, was compared against a control group using the previous version of the app, the Voice Input Reporting (VIR) group. Each app is designed to facilitate a distinct method of food intake reporting: AIR users capture and upload images of their selected dishes, supplemented with voice commands where appropriate, whereas VIR users supplement an uploaded image with verbal inputs for food names and attributes.

Methods:

The two mobile apps were subjected to a head-to-head parallel randomized evaluation. A cohort of 42 young adults aged 20-25 years (9 male and 34 female) was recruited from a university in Taiwan and randomly assigned to 2 groups: AIR (n=22) and VIR (n=20). Both groups were assessed using the same menu of 17 dishes. Each meal was designed to represent a typical lunch or dinner setting, with 1 staple, 1 main course, and 3 side dishes. All participants used the app on the same type of smartphone, and both interfaces used uniform user interactions, icons, and layouts. Analysis of the gathered data focused on reporting accuracy, time efficiency, and user perception.
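The abstract does not name the statistical tests used; the sketch below assumes a chi-square test for dish-level accuracy counts and a Welch t test for reporting times, with all numbers shown as placeholders rather than study data:

    # Hypothetical between-group comparison; counts and times are placeholders.
    from scipy import stats

    # Dish-level identification outcomes per group: [correct, incorrect]
    air_counts = [95, 15]   # placeholder
    vir_counts = [70, 40]   # placeholder
    chi2, p_accuracy, dof, _ = stats.chi2_contingency([air_counts, vir_counts])

    # Per-meal reporting times in seconds (placeholder samples)
    air_times = [62, 55, 70, 58, 64]
    vir_times = [95, 88, 102, 91, 99]
    t, p_time = stats.ttest_ind(air_times, vir_times, equal_var=False)  # Welch t test

    print(f"accuracy: chi2={chi2:.2f}, P={p_accuracy:.3f}")
    print(f"time: t={t:.2f}, P={p_time:.3f}")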

Results:

In the AIR group, 86% of dishes were correctly identified and 68% of dishes were accurately reported. The AIR group exhibited significantly higher identification accuracy than the VIR group (P<.001) and required significantly less time to complete food reporting (P<.001). System Usability Scale (SUS) scores showed that both apps were perceived as having high usability and learnability (P=.20).
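For context on the usability comparison, the SUS is a standard 10-item questionnaire scored on a 0-100 scale; the example responses below are hypothetical, not study data:

    def sus_score(responses):
        # Standard SUS scoring: 10 items rated 1-5; odd-numbered items
        # contribute (rating - 1), even-numbered items contribute (5 - rating),
        # and the sum is scaled by 2.5 to a 0-100 range.
        assert len(responses) == 10
        total = sum(
            (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd-numbered)
            for i, r in enumerate(responses)
        )
        return total * 2.5

    print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # hypothetical pattern -> 80.0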

Conclusions:

The AIR group outperformed the VIR group in accuracy and time efficiency for overall dish reporting within the meal testing scenario. While further technological enhancement may be required, the integration of AI vision technology into existing mobile applications holds promise. Our results provide evidence-based support for integrating automatic image recognition technology into existing apps in terms of user interaction efficacy and overall ease of use. Further empirical work is required, including full-scale randomized controlled trials and assessments of user perception under a range of dining conditions. Clinical Trial: ISRCTN Registry ISRCTN27511195; https://doi.org/10.1186/ISRCTN27511195


Citation

Please cite as:

Sahoo PK, Chiu SYH, Lin YS, Chen CH, Irianti D, Chen HY, Sarkar M, Liu YC

Automatic Image Recognition Meal Reporting Among Young Adults: Randomized Controlled Trial

JMIR Mhealth Uhealth 2025;13:e60070

DOI: 10.2196/60070

PMID: 40811729

PMCID: 12352700


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.