
Accepted for/Published in: JMIR Human Factors

Date Submitted: Apr 22, 2025
Date Accepted: Aug 12, 2025
Date Submitted to PubMed: Aug 18, 2025

The final, peer-reviewed published version of this preprint can be found here:


Papiernik P, Dzula S, Zimanyi M, Millgate E, Bouazzaoui M, Buttimer J, Warren G, Cooper E, Catarino A, Mehew S, Marshall E, Tablan V, Blackwell AD, Palmer CE

Acceptability of a Conversational Agent–Led Digital Program for Anxiety: Mixed Methods Study of User Perspectives

JMIR Hum Factors 2025;12:e76377

DOI: 10.2196/76377

PMID: 40824528

PMCID: 12627969

Acceptability of a Conversational Agent-led Digital Program for Anxiety: A Mixed-Methods Study of User Perspectives

  • Pearla Papiernik; 
  • Sylwia Dzula; 
  • Marta Zimanyi; 
  • Edward Millgate; 
  • Malika Bouazzaoui; 
  • Jessica Buttimer; 
  • Graham Warren; 
  • Elisa Cooper; 
  • Ana Catarino; 
  • Shaun Mehew; 
  • Emily Marshall; 
  • Valentin Tablan; 
  • Andrew D Blackwell; 
  • Clare E Palmer

ABSTRACT

Background:

Prevalence rates for anxiety and depression are increasing globally, outpacing the capacity of traditional mental health services. Digital mental health interventions (DMHIs) offer a cost-effective solution, but user engagement is often poor. Integrating AI-powered conversational agents could enhance engagement and the user experience; however, AI technology is rapidly evolving, and the acceptability of these solutions remains uncertain.

Objective:

This study aims to understand the acceptability, engagement, and usability of a conversational agent-led DMHI with human support for generalized anxiety by exploring patient expectations and experiences using mixed methods.

Methods:

Participants (N=299) were offered a DMHI for up to 9 weeks and completed validated self-report measures of engagement (User Engagement Scale [UES]; n=190), usability (System Usability Scale [SUS]; n=203), and acceptability (Service User Technology Acceptability Questionnaire [SUTAQ]; n=203) post-intervention. To explore patients’ expectations and experiences with the digital program, a subsample of participants completed qualitative semi-structured interviews before (n=21) and after (n=16) the intervention, analyzed using inductive thematic analysis.

Results:

Participants found the digital program engaging (mean UES total score = 3.7, 95% CI [3.5, 3.8]), rewarding (mean UES rewarding subscale = 4.1, 95% CI [4.0, 4.2]), and easy to use (mean SUS total score = 78.6, 95% CI [76.5, 80.7]). Participants were satisfied with the program and found that it increased access to and enhanced their care (mean SUTAQ subscale scores ranged from 4.3 to 4.9; 95% CIs ranged from 4.1 to 5.1). Insights from both pre- and post-intervention qualitative interviews highlighted four key themes important for the acceptability of this digital program: 1) easy access to practical and effective solutions leading to tangible mental health improvements (“Accessing Effective Solutions”); 2) a personalized and tailored experience (“Personal Experience”); 3) being guided with a clear structure while retaining control over their journey (“Guided but in Control”); and 4) fostering a sense of support facilitated by humans (“Feeling Supported”). Overall, the DMHI met expectations for themes 1, 3, and 4, yet participants wanted more personalization and felt frustrated when the conversational agent misunderstood them.

Conclusions:

Incorporating factors important for patient acceptability into DMHIs is essential to maximize their global impact on mental health care. This study provides quantitative and qualitative evidence for the acceptability of a structured, conversational agent-driven digital program with human support for adults with generalized anxiety. Findings emphasize the role of design, clinical, and implementation factors in enhancing engagement, highlighting opportunities for continued optimization and innovation. Scalable models with stratified human support and the safe integration of generative AI are poised to transform the patient experience and enhance the real-world impact of conversational agent-led DMHIs. Clinical Trial: The study was preregistered (ISRCTN 52546704).




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.