Accepted for/Published in: JMIR Human Factors
Date Submitted: Mar 24, 2023
Date Accepted: May 9, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Role of AI Chatbots in Healthcare: A Study on User Intentions to Utilize ChatGPT for Self-Diagnosis
ABSTRACT
Background:
With the rapid advancement of artificial intelligence (AI) technologies, AI-powered chatbots such as ChatGPT have emerged as potential tools for various applications, including healthcare. However, ChatGPT is not specifically designed for healthcare purposes, and its use for self-diagnosis raises concerns about potential risks alongside its possible benefits. There is a growing inclination among users to employ ChatGPT for self-diagnosis, necessitating a deeper understanding of the factors driving this trend.
Objective:
This study aims to investigate the factors influencing users' decision-making processes and intentions to use ChatGPT for self-diagnosis and to explore the implications of these findings for the safe and effective integration of AI chatbots in healthcare.
Methods:
A cross-sectional survey design was employed, and data were collected from 607 participants. The relationships between performance expectancy, risk-reward appraisal, decision-making, and intention to use ChatGPT for self-diagnosis were analyzed using partial least squares structural equation modeling (PLS-SEM).
Results:
Most respondents were willing to use ChatGPT for self-diagnosis (476/607, 78.4%). The model demonstrated satisfactory explanatory power, accounting for 52.4% of the variance in decision-making and 38.1% of the variance in intention to use ChatGPT for self-diagnosis. The results supported all three hypotheses: higher performance expectancy of ChatGPT (β = 0.547, 95% CI [0.474, 0.620]) and positive risk-reward appraisals (β = 0.245, 95% CI [0.161, 0.325]) were positively associated with improved decision-making outcomes among users, and enhanced decision-making processes involving ChatGPT positively affected users' intentions to use the technology for self-diagnosis (β = 0.565, 95% CI [0.498, 0.628]).
Conclusions:
Our findings underscore that users are inclined to use ChatGPT for self-diagnosis, emphasizing the importance of considering users' performance expectancy, risk-reward appraisals, and decision-making processes when addressing this issue. These insights can inform the development of more effective, reliable, and user-centric AI-powered chatbot applications in healthcare, as well as shape policy decisions to mitigate potential risks and ensure the safe integration of AI technologies in healthcare settings. Moreover, our study offers valuable implications for fostering responsible AI adoption, promoting user education, and guiding future research on the role of AI chatbots in healthcare.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.