
Accepted for/Published in: JMIR mHealth and uHealth

Date Submitted: Aug 14, 2025
Date Accepted: Jan 27, 2026

The final, peer-reviewed published version of this preprint can be found here:

Evaluation of Smartphone Camera Positioning on Artificial Intelligence Pose Estimation Accuracy for Exercise Detection: Observational Study

Oliosi E, Ferreira S, Giordano AP, Viveiros G, Parraca J, Pereira P, Guede-Fernández F, Azevedo S

JMIR Mhealth Uhealth 2026;14:e82412

DOI: 10.2196/82412

Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless they show as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Evaluation of Smartphone Camera Positioning on Artificial Intelligence Pose Estimation Accuracy for Exercise Detection

  • Eduarda Oliosi; 
  • Soraia Ferreira; 
  • Ana Paula Giordano; 
  • Guilherme Viveiros; 
  • José Parraca; 
  • Paulo Pereira; 
  • Federico Guede-Fernández; 
  • Salomé Azevedo

ABSTRACT

Background:

Artificial Intelligence (AI)-driven pose estimation offers a scalable and cost-effective solution to track exercises in mobile health (mHealth) applications. However, occlusion, influenced by camera angle and distance, can reduce detection accuracy and repetition counting precision. The influence of smartphone positioning on these performance metrics remains underexplored in controlled studies.

Objective:

To examine how smartphone camera angle (front, side, and diagonal) and distance (90 cm, 180 cm, 200 cm, and 360 cm) affect detection performance and repetition counting accuracy during push-ups and squats using AI-based pose estimation.

Methods:

In this cross-sectional, within-subject study, 44 healthy university students (nine female [20.5%]; mean age 20.3 ± 0.4 years; mean BMI 23.2 ± 0.6 kg/m²) were assigned to perform either squats or push-ups. Each participant completed their assigned exercise across 12 predefined smartphone camera configurations, yielding ~264 squat trials (n=22) and 264 push-up trials (n=22). Each trial averaged five repetitions, totaling ~1,320 repetitions per exercise. Pose estimation performance was assessed using binary classification accuracy, detection rate, and mean absolute error (MAE) for repetition counting. Generalized linear mixed-effects models (GLMMs) evaluated classification odds, linear mixed-effects models (LMMs) analyzed MAE, and Tukey-adjusted post hoc tests followed significant effects.
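The two headline metrics above can be illustrated with a minimal sketch. The function names and the sample trial data below are hypothetical, chosen only to mirror the study design (each trial averaged about five repetitions); they are not the authors' actual analysis code.

```python
def detection_rate(detected_flags):
    """Fraction of trials in which the exercise was detected at all."""
    return sum(detected_flags) / len(detected_flags)


def repetition_mae(true_counts, predicted_counts):
    """Mean absolute error (MAE) between true and AI-counted repetitions."""
    errors = [abs(t - p) for t, p in zip(true_counts, predicted_counts)]
    return sum(errors) / len(errors)


# Hypothetical trials, each with a ground truth of 5 repetitions.
true_counts = [5, 5, 5, 5]
predicted_counts = [5, 4, 5, 2]
detected = [True, True, True, False]

print(detection_rate(detected))                       # 0.75
print(repetition_mae(true_counts, predicted_counts))  # (0+1+0+3)/4 = 1.0
```

Detection rate captures whether the pose estimator registered the exercise at all, while MAE quantifies how far the automatic repetition count drifted from the ground truth when it did.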

Results:

The mean detection rate was 61.1% (SD 48.8%) for push-ups and 61.5% (SD 48.7%) for squats, with MAEs of 1.08 (SD 1.78) and 1.11 (SD 1.82) repetitions, respectively. Push-ups were most accurately detected from diagonal views at 90–180 cm (up to 85.7% detection; MAE = 0.28) and least accurately from the front at 360 cm (20.0%; MAE = 2.70). Squats performed best from a diagonal view at 200 cm (95.5%; MAE = 0.05) and worst from the side at 90 cm (0.0%; MAE = 5.00). GLMMs showed that for push-ups, the front 90 cm and diagonal 360 cm views significantly reduced classification odds compared to the side 90 cm view (p = 0.032 and p = 0.040, respectively), whereas for squats, diagonal and front views significantly outperformed side views across all distances (p < 0.001). Post hoc tests confirmed that for push-ups, diagonal close/mid-range views had significantly lower MAEs than far front views, and for squats, diagonal and front views at 180–200 cm achieved the highest accuracy and lowest MAEs (p < 0.05).

Conclusions:

AI-based pose estimation effectiveness for exercise tracking is significantly affected by smartphone positioning. Diagonal and frontal views at mid-range distances (180–200 cm) provided the highest detection accuracy and counting precision. These findings offer actionable guidance for developers, clinicians, coaches, and users optimizing mHealth exercise monitoring.





© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.