Currently submitted to: Journal of Medical Internet Research
Date Submitted: May 14, 2026
Open Peer Review Period: May 14, 2026 - Jul 9, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Artificial Intelligence Avatars for Emotional Regulation and Anxiety Management Among University Students: Mixed Methods Survey Study
ABSTRACT
Background:
Conversational artificial intelligence (AI) avatars are emerging as possible tools for scalable mental health support, but their acceptability, perceived empathy, usability, and privacy implications remain insufficiently understood.
Objective:
This study aimed to examine university students' attitudes toward conversational AI avatars for mental health support and to evaluate perceived usability, empathy, satisfaction, and barriers after a brief avatar interaction.
Methods:
We conducted a two-phase mixed methods survey study with 102 university students. Phase 1 assessed attitudes toward AI-based mental health support using an online questionnaire. In Phase 2, a volunteer subset of 16 participants completed a 10-minute interaction with a three-dimensional avatar using cognitive behavioral therapy (CBT)-informed dialogue protocols and then completed a postinteraction evaluation. Quantitative responses were summarized using descriptive statistics, and open-ended responses were examined using descriptive thematic analysis.
Results:
Among participants with valid item-level responses, 76.2% agreed that AI could help some people, and 59.4% reported that they would use AI therapy if it were free. However, only 23.8% believed that an AI therapist could genuinely understand their emotions, and 55.0% preferred talking to a real person rather than an AI system. In the interactive subset, 11 of 16 participants (68.8%) reported being moderately satisfied, although 11 of 16 participants (68.8%) still preferred an in-person therapist when given the choice. Qualitative feedback highlighted privacy, nonjudgmental support, effective communication, and practical advice as perceived strengths, whereas emotional depth, speech naturalness, and interaction pacing were identified as areas for improvement.
Conclusions:
Findings suggest that AI avatars may be acceptable as preliminary support, psychoeducation, or triage tools, but they should not be framed as replacements for human clinicians. Improving emotional nuance, voice quality, response pacing, and transparent data governance will be essential before broader deployment in university mental health settings.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.