
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Sep 9, 2023
Open Peer Review Period: Sep 9, 2023 - Nov 4, 2023
Date Accepted: Oct 20, 2024

The final, peer-reviewed published version of this preprint can be found here:

Sorin V, Brin D, Barash Y, Konen E, Charney A, Nadkarni G, Klang E

Large Language Models and Empathy: Systematic Review

J Med Internet Res 2024;26:e52597

DOI: 10.2196/52597

PMID: 39661968

PMCID: 11669866

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Large Language Models (LLMs) and Empathy – A Systematic Review

  • Vera Sorin; 
  • Dana Brin; 
  • Yiftach Barash; 
  • Eli Konen; 
  • Alexander Charney; 
  • Girish Nadkarni; 
  • Eyal Klang

ABSTRACT

Background:

Empathy, a cornerstone of human interaction, is a quality considered unique to humans that Large Language Models (LLMs) are believed to lack.

Objective:

Our study aims to review the literature on the capacity of LLMs to demonstrate empathy.

Methods:

We conducted a literature search on MEDLINE up to July 2023. We included English-language, full-length publications that evaluated empathy in LLM outputs. We excluded papers evaluating aspects of emotional intelligence other than empathy.

Results:

Seven publications ultimately met the inclusion criteria, all published in 2023. All studies but one focused on ChatGPT-3.5 by OpenAI. Only one study evaluated empathy using objective metrics; all others relied on subjective human assessment. The studies reported that LLMs exhibit elements of empathy, including emotion recognition and emotional support, in diverse contexts, most of which were related to healthcare. In some cases, LLMs were observed to outperform humans in empathy-related tasks.

Conclusions:

LLMs demonstrated some aspects of empathy across varied scenarios, mainly related to healthcare. This empathy may be considered "cognitive" empathy. Social skills are a fundamental aspect of intelligence; thus, further research is imperative to enhance these skills in AI.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.