Accepted for/Published in: JMIR Medical Education
Date Submitted: Apr 17, 2023
Date Accepted: Aug 14, 2023
Date Submitted to PubMed: Aug 14, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Validation of a Technology Acceptance Model-Based Scale TAME-ChatGPT on Health Students' Attitudes and Usage of ChatGPT in Jordan
ABSTRACT
Background:
ChatGPT is a conversational large language model that has the potential to revolutionize knowledge acquisition. However, its impact on the quality of education remains unknown, given the risks and concerns surrounding ChatGPT use. It is therefore necessary to assess the usability and acceptability of this promising tool. As an innovative technology, the intention to use ChatGPT can be studied in the context of the Technology Acceptance Model (TAM).
Objective:
To develop and validate a TAM-based survey instrument that could be employed to examine the successful integration and use of ChatGPT in healthcare education.
Methods:
The survey tool was created based on the TAM framework and comprised 13 items for participants who had heard of ChatGPT but did not use it and 23 items for participants who had used ChatGPT. Using a convenience sampling approach, the survey link was circulated electronically among university students during February–March 2023. Exploratory factor analysis (EFA) was used to assess the construct validity of the survey instrument.
Results:
The final sample comprised 458 respondents with a median age of 20 years, the majority of whom were undergraduate students (n=442, 96.5%). Only 109 respondents (23.9%) had heard of ChatGPT prior to participation, and only 55 (11.3%) self-reported ChatGPT use before the study. The EFA showed that three constructs explained a cumulative total of 69.3% of the variance in the attitude scale; these subscales represented (1) perceived risks, (2) attitude to technology/social influence, and (3) anxiety. For the ChatGPT usage scale, the EFA showed that four constructs explained a cumulative total of 72.0% of the variance in the data: (1) perceived usefulness, (2) perceived risks, (3) perceived ease of use, and (4) behavior/cognitive factors. All of the ChatGPT attitude and usage subscales showed good reliability, with Cronbach alpha values >0.78.
Conclusions:
The TAME-ChatGPT demonstrated good reliability, validity, and usefulness in assessing the attitudes towards ChatGPT among healthcare students. The findings highlighted the importance of considering risk perceptions, usefulness, ease of use, attitudes towards technology, and behavioral factors when adopting ChatGPT as a tool in healthcare education. This information can aid AI developers, academics, and policymakers in creating strategies to support optimal and ethical use of ChatGPT and to identify the potential challenges hindering its successful implementation. Future research is recommended to guide the effective adoption of ChatGPT in healthcare education.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.