
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Nov 12, 2025
Date Accepted: Mar 18, 2026

The final, peer-reviewed published version of this preprint can be found here:

A Proposed Participatory Framework for Explainable AI in mHealth: Mixed Methods Study Integrating User and Stakeholder Requirements

Islam F, Islam A, Amin MA, Zaber M

J Med Internet Res 2026;28:e87158

DOI: 10.2196/87158

PMID: 42081827

A Proposed Participatory Framework for Explainable AI in mHealth: Integrating User and Stakeholder Requirements

  • Farzana Islam; 
  • Ashraful Islam; 
  • M. Ashraful Amin; 
  • Moinul Zaber

ABSTRACT

Background:

The growing integration of artificial intelligence (AI) into mobile health (mHealth) applications offers new opportunities to improve health care access, particularly in low-resource settings. Yet the opaque nature of AI-generated recommendations raises concerns about transparency, trust, and cultural relevance, issues that are often overlooked by existing explainable AI (XAI) frameworks designed in high-resource contexts.

Objective:

This study aims to develop a human-centered design framework for explainability in mHealth applications adapted to the Bangladeshi context, addressing gaps in transparency, trust, and cultural relevance.

Methods:

A mixed methods approach was employed, combining a quantitative survey (n=137) with qualitative interviews and focus groups (n=20) involving end users, developers, clinicians, and XAI experts. Survey responses were analyzed descriptively, and qualitative data were analyzed using thematic analysis to identify explainability expectations, trust perceptions, and design priorities.

Results:

Trust perceptions varied significantly across education and age groups among the 137 surveyed users. Younger users (aged 18-24 years) with lower formal education rated apps highest in perceived trust and clarity (mean Likert score: 5.0), while postgraduate degree holders (aged 31-40 years) rated explainability lower (mean: 3.25), and users older than 40 years expressed strong skepticism (mean: 2.0). Despite recognizing AI's utility for preliminary guidance, users emphasized the necessity of human validation and expressed concerns about understanding AI's decision-making logic. Interviews with 6 XAI experts, 4 developers, and 10 medical professionals revealed critical gaps: developers acknowledged minimal explainability implementation in current mHealth apps, while medical professionals unanimously prioritized clinical judgment over automated outputs and advocated for physician-mediated AI systems. Synthesizing findings across all stakeholder groups revealed five core requirements: (1) human-AI collaboration and clinical validation, (2) transparent logic paths, (3) contextual personalization, (4) cultural and linguistic relevance, and (5) trust calibration with ethical safeguards.

Conclusions:

The framework bridges stakeholder misalignments and offers actionable guidance for design, deployment, and policy alignment in resource-constrained environments. By situating explainability within the sociocultural realities of the Global South, this research advances XAI beyond algorithmic transparency toward equity, inclusion, and user empowerment in digital health.

Clinical Trial: N/A



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.