Design and Requirements Analysis of a Conversational Agent for Mental Health: A Qualitative Study Based on Interviews with Patients and Therapists
ABSTRACT
Background:
Common mental health problems, such as depression and anxiety, are among the leading causes of years lived with disability, yet a substantial gap remains in access to traditional treatment. Artificial intelligence–based conversational systems offer an alternative means of providing emotional support and psychoeducation. Their design, however, raises significant ethical, clinical, and technological challenges that call for rigorous, user-centered software engineering approaches.
Objective:
To design and specify the functional and non-functional requirements for the development of a conversational agent focused on improving mental health.
Methods:
This study followed a qualitative design. Data were collected through semi-structured interviews with patients and therapists, exploring their perceptions of the potential use of a conversational agent in mental health. The collected information was analyzed and systematized from a requirements engineering perspective, enabling the definition of roles, functional and non-functional requirements, use cases, and Unified Modeling Language (UML) models. In addition, ethical considerations and clinical safety aspects were incorporated as cross-cutting design constraints.
Results:
The analysis yielded a structured set of functional requirements differentiated by role (Patient and Therapist), covering basic emotional support, psychoeducation, monitoring, and the activation of crisis workflows, together with non-functional requirements addressing usability, privacy, informed consent, safety, reliability, and prevention of dependency. These elements were integrated into a preliminary modular architecture and UML models describing the expected system behavior, establishing a traceable technical specification aligned with good software engineering practices.
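The role-differentiated requirements and crisis workflow described above could be modeled, for illustration only, roughly as follows. All identifiers here (Requirement, crisis_workflow_triggered, the keyword list) are hypothetical assumptions for the sketch, not the study's actual specification, and keyword matching is only a stand-in for whatever risk-detection mechanism a real system would use.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PATIENT = "patient"
    THERAPIST = "therapist"

@dataclass
class Requirement:
    req_id: str
    role: Role
    description: str
    functional: bool = True  # False marks a non-functional requirement

# Illustrative entries based on the requirement categories named in the Results
REQUIREMENTS = [
    Requirement("FR-01", Role.PATIENT, "Provide basic emotional support dialogue"),
    Requirement("FR-02", Role.PATIENT, "Deliver psychoeducational content"),
    Requirement("FR-03", Role.PATIENT, "Activate crisis workflow on risk indicators"),
    Requirement("FR-04", Role.THERAPIST, "Review patient monitoring summaries"),
    Requirement("NFR-01", Role.PATIENT, "Obtain informed consent before first use",
                functional=False),
]

# Placeholder risk indicators; a real system would use a clinically validated method
CRISIS_KEYWORDS = {"suicide", "self-harm", "hurt myself"}

def crisis_workflow_triggered(message: str) -> bool:
    """Return True when a patient message matches a crisis indicator (keyword stub)."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def requirements_for(role: Role) -> list[Requirement]:
    """Filter the requirement set by user role."""
    return [r for r in REQUIREMENTS if r.role is role]
```

Structuring requirements as data in this way is one possible route to the traceability the specification aims for: each requirement carries an identifier that design artifacts and tests can reference.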
Conclusions:
The findings identify the core functionalities and requirements that future users need in order to interact with a conversational agent for mental health. This work provides a framework applicable to the development of conversational solutions in sensitive domains, integrating principles from software engineering, mental health, and ethics in artificial intelligence.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.