Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 10, 2023
Date Accepted: May 25, 2023
Investigating the Impact of User Trust on Adoption and Use of ChatGPT: A Survey Analysis
ABSTRACT
Background:
ChatGPT, a large language model developed by OpenAI, has gained immense popularity owing to its remarkable ability to generate responses that closely resemble human-written text. However, overreliance on or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences. Conversely, a lack of trust in the technology can lead to underutilization and missed opportunities.
Objective:
This study investigates the impact of users' trust in ChatGPT on their intent and actual use of the technology. Four hypotheses were tested: first, that users' intent to use ChatGPT increases with their trust in the technology; second, that the actual use of ChatGPT increases with users' intent to use the technology; third, that the actual use of ChatGPT increases with users' trust in the technology; and fourth, that users' intent to use ChatGPT can partially mediate the effect of trust in the technology on its actual use. By examining these hypotheses, this study provides insights into the factors that influence users' adoption of chatbot technologies and highlights the role of trust in this process.
Methods:
After obtaining ethical approval, we distributed an online survey to adults in the US who use ChatGPT 3.5 at least once a month to examine the relationship between trust, intent to use, and actual use of ChatGPT for healthcare queries. The survey was soft-launched with 40 responses and then distributed to a larger audience, with data collected from February 2023 through March 2023. Participants answered survey questions on a Likert scale, and two latent constructs were developed, Trust and Intent to Use, with Actual Use as the outcome variable. Descriptive statistics of study variables were calculated. For multivariate analysis, the study used partial least squares structural equation modeling (PLS-SEM), implemented in the seminr R package, to evaluate the latent constructs' convergent and discriminant validity and to test the structural model and hypotheses.
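The mediation logic behind the fourth hypothesis (intent partially mediating the effect of trust on actual use) can be illustrated with a minimal percentile-bootstrap sketch. This is not the authors' seminr PLS-SEM pipeline; it is a simplified ordinary-least-squares analogue on simulated data, with path coefficients, sample size, and noise levels chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data under assumed (illustrative) path coefficients:
# trust -> intent (a), intent -> use (b), trust -> use (c', direct effect).
n = 600
a, b, c_direct = 0.7, 0.2, 0.3
trust = rng.normal(size=n)
intent = a * trust + rng.normal(scale=0.7, size=n)
use = b * intent + c_direct * trust + rng.normal(scale=0.9, size=n)

def paths(t, m, y):
    """OLS estimates: a (t->m), then b and c' from regressing y on m and t."""
    a_hat = np.polyfit(t, m, 1)[0]
    X = np.column_stack([m, t, np.ones_like(y)])
    b_hat, c_hat, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return a_hat * b_hat, c_hat  # indirect effect (a*b), direct effect

# Percentile bootstrap CI for the indirect effect, as in mediation testing.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(paths(trust[idx], intent[idx], use[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])

indirect, direct = paths(trust, intent, use)
print(f"indirect={indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}], direct={direct:.3f}")
```

A bootstrap confidence interval for the indirect effect that excludes zero supports partial mediation (the direct path remaining nonzero distinguishes partial from full mediation); PLS-SEM performs the analogous test on latent constructs rather than observed scores.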
Results:
In the study, 607 respondents completed the survey. Among them, 182 (30%) used ChatGPT at least once a month, 158 (26%) once per week, 149 (25%) more than once per week, and 118 (19%) almost every day. Most respondents held at minimum a high school diploma (n=204, 34%) or a bachelor's degree (n=262, 43%). The primary uses of ChatGPT were information gathering (n=219, 36%), entertainment (n=203, 33%), and problem-solving (n=135, 22%), with smaller numbers using it for health-related queries (n=44, 7%) and other activities (n=6, 1%). The model explained 50.5% and 9.8% of the variance in "Intent to Use" and "Actual Use," respectively, with path coefficients of 0.711 and 0.221 for "Trust" on "Intent to Use" and "Actual Use," respectively. The bootstrapped results rejected all four null hypotheses, with trust having a significant direct effect on both intent to use (β = 0.711, 95% CI [0.656, 0.764]) and actual use (β = 0.302, 95% CI [0.229, 0.374]). The indirect effect of trust on actual use, partially mediated by intent to use, was also significant (β = 0.113, 95% CI [0.001, 0.227]).
Conclusions:
The study provides novel insights into the factors driving the adoption of chatbot technologies such as ChatGPT. Our results suggest that trust is critical to users' adoption of ChatGPT. Companies and policymakers should prioritize building trust and transparency when developing and deploying chatbots. While excessive trust in AI-driven chatbots like ChatGPT carries risks, these risks can be mitigated by promoting shared accountability and fostering collaboration between developers, subject matter experts, and human factors professionals. A systematic, collaborative approach can ensure that AI-driven chatbots are designed and deployed with a comprehensive understanding of user needs and potential challenges. By addressing the risks associated with excessive trust and actively improving chatbot performance, AI-driven technologies like ChatGPT can continue to advance, promoting positive outcomes and responsible use across domains.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.