Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Aug 18, 2025
Date Accepted: Jan 30, 2026
Feasibility and Acceptability of AI-Powered Tools for Early Autism Screening: A Qualitative Study from Egypt
ABSTRACT
Background:
Autism spectrum disorder (ASD) is often underdiagnosed in low- and middle-income countries due to limited specialist access, sociocultural stigma, and fragmented screening systems. Artificial intelligence (AI)-powered screening tools may improve early detection by enabling scalable, low-cost, and accessible assessments. However, adoption depends on stakeholder trust, ethical safeguards, and alignment with local health system capacities.
Objective:
This study explored the feasibility, acceptability, and perceived barriers to implementing AI-powered ASD screening tools in Egypt, with attention to urban–rural disparities, ethical considerations, and integration into existing care pathways.
Methods:
A qualitative design was employed using semi-structured focus group discussions with 46 participants (parents of children with ASD and healthcare professionals) recruited from urban and rural governorates. Discussions were audio-recorded, transcribed verbatim, and analyzed using Braun and Clarke’s reflexive thematic analysis approach, supported by NVivo software. Methodological integrity was ensured through reflexivity, triangulation, and peer debriefing. In addition, thematic saturation was monitored across groups to ensure comprehensive coverage of perspectives, and participant diversity was prioritized to capture variations across geographic and socioeconomic contexts.
Results:
Five overarching themes emerged: (1) AI as a supportive tool rather than a replacement for clinicians, emphasizing scalability and assistance for non-specialists; (2) the need for cultural and contextual adaptation to ensure local relevance; (3) privacy, trust, and transparency concerns, including data security, consent, and algorithmic opacity; (4) reducing diagnostic inequities by addressing urban–rural disparities and strengthening community-based deployment; and (5) a preference for hybrid AI–human models, with conditions for adoption including cultural sensitivity, human oversight, and digital literacy support. Percentages of participants mentioning each theme reflected salience during discussions rather than statistical agreement. Participants expressed cautious optimism: parents emphasized accessibility and speed, while healthcare professionals highlighted concerns about reliability, cultural adaptation, and data governance.
Conclusions:
AI-powered ASD screening has strong potential to advance equitable early detection, particularly in underserved areas. Adoption requires transparent data governance, integration into hybrid human–AI models, culturally adaptive design, and targeted digital literacy initiatives. These findings provide an evidence-based roadmap for policymakers, technologists, and health system leaders to implement AI screening tools that are ethically sound, contextually relevant, and equity-focused.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.