Accepted for/Published in: JMIR Human Factors
Date Submitted: Dec 11, 2023
Date Accepted: Apr 7, 2024
The Impact of Performance Expectancy, Workload, Risk, and Satisfaction on Trust in ChatGPT: Cross-sectional Survey Analysis
ABSTRACT
Background:
Chat Generative Pre-Trained Transformer (ChatGPT; OpenAI) is a powerful tool for a wide range of tasks, from entertainment and creativity to health care queries. The technology carries both potential risks and benefits. In the discourse concerning the deployment of ChatGPT and similar large language models, it is prudent to recommend their use primarily for tasks a human user could execute accurately. As we transition into the subsequent phase of ChatGPT deployment, establishing realistic performance expectations and understanding users' perceptions of the risks associated with its use are crucial to the successful integration of this artificial intelligence (AI) technology.
Objective:
To explore how perceived workload, satisfaction, performance expectancy, and risk-benefit perception influence users' trust in ChatGPT.
Methods:
A semistructured, web-based survey was conducted with 607 adults in the United States who actively use ChatGPT. The survey questions were adapted from constructs used in models and theories such as the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT), as well as research on trust and security in online environments. To test our hypotheses and structural model, we used partial least squares structural equation modeling (PLS-SEM), a widely used approach for multivariate analysis.
Results:
In total, 607 people responded to our survey. About a third of the participants held at least a high school diploma (34%, n=204), and the largest group held a bachelor's degree (43%, n=262). The primary motivations for using ChatGPT were acquiring information (36%, n=219), amusement (33%, n=203), and problem-solving (22%, n=135). Some participants used it for health-related inquiries (7%, n=44), and a few (1%, n=6) used it for miscellaneous activities such as brainstorming, grammar checking, and blog content creation. Our model explained 64.6% of the variance in trust. Our analysis indicated significant relationships between (a) workload and satisfaction, (b) satisfaction and trust, (c) performance expectancy and trust, and (d) risk-benefit perception and trust.
Conclusions:
The findings underscore the importance of user-friendly design and functionality in AI-based applications to reduce workload and enhance user satisfaction, thereby increasing user trust. Future research should further explore the relationship between risk-benefit perception and trust in the context of AI chatbots.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.