AI or Human? Message Humanness Predicts Perceiving AI as Human: A Secondary Data Analysis of the HeartBot Study
ABSTRACT
Background:
With the rapid advancement of artificial intelligence (AI) technologies in healthcare, AI chatbots offer an advantageous way to improve disease knowledge and awareness and to modify health behaviors across diverse populations. However, people's understanding of AI chatbots is still developing, and the factors influencing the perception of AI chatbots and human-AI interaction remain largely unknown.
Objective:
To identify interaction characteristics associated with perceiving an AI chatbot as a human rather than an artificial agent, controlling for socioeconomic status and past chatbot use, in a cohort of diverse women.
Methods:
This was a secondary analysis of data from the HeartBot study in women aged 25 years or older. The HeartBot study evaluated the change in awareness of heart disease after interaction with a fully automated AI chatbot. Women were recruited through social media from October 2023 to January 2024. Perceived chatbot identity (human vs. artificial agent), length of the HeartBot conversation, humanness of chatbot messages, perceived effectiveness of chatbot messages, and attitude toward AI were measured in the post-chatbot survey. Multivariable logistic regression was conducted to explore the factors predicting women's perception of the chatbot's identity as human, adjusting for age, race/ethnicity, education, past chatbot use, humanness of chatbot messages, effectiveness of chatbot messages, and attitude toward AI.
Results:
Data from 92 women with a mean age of 45.9 (SD 11.9, range 26-70) years were analyzed. Two-thirds (66.3%) of the sample correctly identified the chatbot, while one-third (33.7%) incorrectly identified the chatbot as a human. Over half (57.6%) reported past experience using a chatbot. On average, participants interacted with the HeartBot for 13.0 (SD 7.8) minutes and typed 82.5 (SD 61.9) words. In the adjusted model, only the humanness score of chatbot messages was significantly associated with perceiving the chatbot as a human rather than an artificial agent (adjusted odds ratio 2.37; 95% CI 1.26-4.48; P=.007), controlling for potential confounding factors.
Conclusions:
Our findings suggest that as chatbot conversations become increasingly natural and humanlike, clearly communicating the chatbot's identity to participants is key to establishing correct perceptions. This study offers theoretical and practical insights for the design of AI chatbots in healthcare, emphasizing the important role of message humanness in shaping human perceptions. Future research is warranted to clarify the relationship between chatbot identity, humanness, and health outcomes.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.