Currently submitted to: Transfer Hub (manuscript eXchange)
Date Submitted: Dec 22, 2025
Open Peer Review Period: Dec 22, 2025 - Feb 16, 2026
NOTE: This is an unreviewed preprint. It has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (a note "no longer under consideration" would appear above). Unless shown as "accepted," preprints should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Artificial Intelligence Prediction of Individual Treatment Response to Smartphone-Based Mindfulness in Autistic Adults with Anxiety Symptoms: Randomized Controlled Trial Analysis
ABSTRACT
Background:
Anxiety disorders are highly prevalent among autistic adults, with 20%-65% experiencing at least one diagnosable anxiety disorder. While mindfulness-based interventions have demonstrated efficacy for anxiety reduction, treatment response varies considerably across individuals. Machine learning approaches offer potential for identifying who is most likely to benefit from smartphone-based mindfulness interventions, enabling more personalized treatment recommendations.
Objective:
This study aimed to develop and evaluate machine learning models to predict individual treatment response (reduction in anxiety symptoms) to a smartphone-based mindfulness intervention for autistic adults. We sought to identify baseline characteristics that distinguish responders from non-responders, explore few-shot learning with large language models as a complementary approach for low-data clinical prediction, and implement a Personalized Advantage Index approach for individualized treatment recommendations.
Methods:
We conducted a secondary analysis of data from a randomized controlled trial comparing a 6-week smartphone-based mindfulness intervention (Healthy Minds Program) with a waitlist control condition in autistic adults. Among 73 participants who completed the intervention, we defined responders as those achieving a ≥7-point reduction in State-Trait Anxiety Inventory state anxiety scores. Baseline predictors included demographic variables, autism trait measures, and self-report questionnaires assessing anxiety symptoms, perceived stress, affect, and mindfulness. We trained six machine learning models (logistic regression, Random Forest, XGBoost, TabNet, Tab-ICL, and TabPFN) using nested 10-fold cross-validation with inner 5-fold cross-validation for hyperparameter tuning. Additionally, we evaluated few-shot learning using GPT-4o models with tokenized baseline features at varying shot counts (20-70 examples). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) for the machine learning models and classification accuracy for few-shot learning. We examined feature importance and implemented a Personalized Advantage Index analysis to estimate individualized treatment benefit.
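The responder definition and the nested cross-validation layout described above can be sketched as follows. This is an illustrative pure-Python skeleton (the function names and fold generator are our own, not the authors' code; the actual analysis used the six models listed above), showing the key property of nested cross-validation: the inner folds used for hyperparameter tuning partition only the outer training set, so tuning never sees the outer test fold.

```python
# Illustrative sketch only: responder labeling and nested CV index
# layout, assuming the >=7-point STAI state anxiety reduction
# threshold reported in the Methods. Not the authors' pipeline.
import random

RESPONSE_THRESHOLD = 7  # >=7-point drop in STAI state anxiety


def label_responder(baseline_stai: float, post_stai: float) -> int:
    """1 = responder (>=7-point reduction), 0 = non-responder."""
    return int(baseline_stai - post_stai >= RESPONSE_THRESHOLD)


def k_folds(n: int, k: int, seed: int = 0):
    """Shuffle indices 0..n-1 and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]


def nested_cv_splits(n: int, outer_k: int = 10, inner_k: int = 5):
    """Yield (train, test, inner_folds) triples. The inner folds
    partition only the outer training indices, so hyperparameter
    tuning never touches the held-out outer test fold."""
    for test in k_folds(n, outer_k):
        test_set = set(test)
        train = [i for i in range(n) if i not in test_set]
        inner = [[train[j] for j in fold]
                 for fold in k_folds(len(train), inner_k, seed=1)]
        yield train, test, inner
```

In practice one would fit each candidate hyperparameter setting on the inner folds, refit the winner on the full outer training set, and report AUC on the outer test fold, averaged over the 10 outer splits.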
Results:
Random Forest achieved the highest predictive performance for state anxiety response (AUC 0.79, 95% CI 0.66-0.91), followed by TabPFN (AUC 0.78, 95% CI 0.64-0.94) and logistic regression (AUC 0.77, 95% CI 0.73-0.81). Higher baseline state anxiety (coefficient 1.20, P<.001) predicted better treatment response, while higher trait anxiety (coefficient -0.17, P=.001), older age (coefficient -0.18, P=.02), and lower childhood pretend play scores (coefficient -0.93, P=.007) were associated with poorer response. Few-shot learning with 7-feature tokenization achieved an accuracy of 0.867 (95% CI 0.81-0.92) at 70 shots, significantly outperforming the Random Forest baseline (accuracy 0.733; P<.001). Prediction of trait anxiety changes was substantially weaker (AUCs 0.57-0.68), likely reflecting the inherent stability of this personality dimension. The Personalized Advantage Index demonstrated significant moderation of treatment group differences (adjusted R²=0.29), with 75% of participants predicted to benefit more from the mindfulness intervention than from the waitlist control.
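The "tokenized baseline features" used for few-shot prompting can be pictured as follows: each labelled participant is serialized into a compact feature string, and a fixed number of such shots precede the query case in the prompt. This is a hypothetical sketch; the feature names below are illustrative stand-ins, not the trial's actual 7 selected features, and the real study queried GPT-4o with such prompts.

```python
# Hypothetical few-shot prompt construction for tabular clinical
# prediction. FEATURES is an illustrative stand-in for the compact
# 7-feature representation described in the Results.
FEATURES = ["age", "stai_state", "stai_trait", "pss", "aq",
            "panas_neg", "ffmq"]


def serialize(case: dict) -> str:
    """Render one participant's baseline features as a compact string."""
    return "; ".join(f"{name}={case[name]}" for name in FEATURES)


def build_prompt(shots, query):
    """shots: list of (case_dict, 'responder' | 'non-responder') pairs.
    Returns a k-shot classification prompt ending in the query case."""
    lines = ["Classify each case as responder or non-responder."]
    for case, label in shots:
        lines.append(f"{serialize(case)} -> {label}")
    lines.append(f"{serialize(query)} ->")
    return "\n".join(lines)
```

Varying the number of shots (20-70 in the study) then amounts to varying how many labelled lines are included before the query.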
Conclusions:
Machine learning models successfully identified baseline characteristics predicting treatment response to a smartphone-based mindfulness intervention in autistic adults. Few-shot learning with large language models demonstrated superior performance to traditional machine learning when provided with compact, high-signal feature representations, offering a promising approach for clinical prediction in small-sample settings. These findings demonstrate the feasibility of precision psychiatry approaches in digital mental health interventions for autistic adults. While the modest sample size and limited demographic diversity warrant cautious interpretation, the stable cross-validation performance suggests robust predictive patterns within similar populations. Future research should validate these models in larger, more diverse samples and use prospective randomized trials to test whether algorithm-guided treatment recommendations improve outcomes relative to standard care.
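The Personalized Advantage Index (PAI) underlying the individualized recommendations can be sketched as the difference between a participant's predicted outcome under each arm. The toy linear predictors and coefficients below are entirely hypothetical, chosen only to show the sign convention: with an anxiety change score as the outcome (lower is better), PAI = predicted change under control minus predicted change under treatment, so a positive PAI means the mindfulness intervention is predicted to help more.

```python
# Minimal PAI sketch, assuming two per-arm outcome models have
# already been fitted (here: toy linear predictors with made-up
# weights, not estimates from the trial).
def predict_change(weights, bias, features):
    """Toy linear model: predicted anxiety change score."""
    return bias + sum(w * x for w, x in zip(weights, features))


def personalized_advantage_index(features, tx_model, ctrl_model):
    """PAI = predicted change under control - predicted change under
    treatment; positive => treatment predicted to reduce anxiety more."""
    tx_w, tx_b = tx_model
    ct_w, ct_b = ctrl_model
    return (predict_change(ct_w, ct_b, features)
            - predict_change(tx_w, tx_b, features))


# Hypothetical per-arm models and one participant's baseline features
tx_model = ([-0.30, 0.10], -2.0)   # (weights, bias), treatment arm
ctrl_model = ([-0.05, 0.05], 0.5)  # (weights, bias), control arm
pai = personalized_advantage_index([40.0, 25.0], tx_model, ctrl_model)
# pai > 0 here: this participant is predicted to benefit more from
# the intervention than from the waitlist.
```

The study's reported figure, 75% of participants with a positive PAI toward the intervention, corresponds to applying such a per-participant computation across the sample.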
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.