Accepted for/Published in: JMIR Mental Health
Date Submitted: May 28, 2024
Open Peer Review Period: May 31, 2024 - Jul 26, 2024
Date Accepted: Aug 20, 2024
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Empathy Towards AI vs Human Experiences: The Role of Transparency in Mental Health and Social Support Chatbot Design
ABSTRACT
Background:
Empathy is a driving force in our connection to others, our mental wellbeing, and resilience to challenges. With the rise of generative AI systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds towards stories from human vs AI narrators and how user emotions might change when the author of a story is made transparent to users.
Objective:
We aim to understand how empathy shifts between human-written and AI-written stories, and how these findings inform the ethical implications and human-centered design of mental health chatbots intended as objects of empathy.
Methods:
We conduct crowd-sourced studies with N=985 participants, each of whom writes a personal story and then rates empathy towards 2 retrieved stories, one written by a language model and the other by a human. Our studies vary transparency about whether a story is written by a human or an AI to examine how transparency affects empathy towards the narrator. We conduct mixed-methods analyses, combining quantitative and qualitative approaches, to understand how and why transparency affects empathy towards human vs AI storytellers.
Results:
We find that participants consistently and significantly empathize with human-written over machine-written stories in almost all conditions, regardless of whether they are aware that an AI wrote the story (P<.001). We also find that participants reported a greater willingness to empathize with AI-written stories if there is transparency about the story author (P<.001).
Conclusions:
Our work sheds light on how empathy towards AI or human narrators is tied to the way the text is presented, thus informing ethical considerations of artificial social support or mental health chatbots that are intended to evoke empathetic reactions.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.