Currently submitted to: JMIR Formative Research
Date Submitted: Mar 13, 2026
Open Peer Review Period: Apr 10, 2026 - Jun 5, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Clinician perceptions of artificial intelligence in care: A mixed methods study
ABSTRACT
Background:
AI is increasingly being explored as a tool to enhance efficiency, access, and diagnostic accuracy in mental health care. However, the perspectives of clinicians, who are central to the delivery and oversight of care, on the use of AI remain underexamined.
Objective:
This study aimed to explore clinicians' perceptions of AI in clinical care, including perceived benefits, risks, barriers to implementation, and training needs.
Methods:
A cross-sectional mixed-methods survey was distributed from August to November 2024 to mental health professionals in the US. Quantitative data were analyzed using descriptive statistics, while open-ended responses were analyzed thematically to identify key insights.
Results:
Most respondents (n=62) were not currently using AI in practice. The most frequently endorsed benefits included reductions in clinical workload, improved efficiency, and enhanced data analysis. However, a majority expressed discomfort with using AI in patient care, and concerns were raised about inaccurate outputs, algorithmic bias, privacy, and weakened therapeutic rapport. Barriers to adoption included clinician resistance, lack of validation, and challenges with technical integration. Most respondents believed that specialized training in AI ethics and applications is important for clinicians. Qualitative findings reinforced concerns about dehumanization, cultural insensitivity, ethical accountability, and insufficient technological literacy.
Conclusions:
Mental health professionals view AI as a potentially useful adjunct, but not a replacement, in care. Ethical concerns, limited trust, and a strong emphasis on the human dimensions of therapy suggest that implementation must proceed with caution. Clinician-informed strategies, ethical frameworks, and targeted training are essential to support the responsible and effective integration of AI into mental health practice.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a cc-by license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.