
Accepted for/Published in: JMIR Mental Health

Date Submitted: Jan 19, 2024
Open Peer Review Period: Jan 22, 2024 - Mar 18, 2024
Date Accepted: Apr 27, 2024

The final, peer-reviewed published version of this preprint can be found here:

The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis

Ferrario A, Sedlakova J, Trachsel M

JMIR Ment Health 2024;11:e56569

DOI: 10.2196/56569

PMID: 38958218

PMCID: 11231450

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis

  • Andrea Ferrario
  • Jana Sedlakova
  • Manuel Trachsel

ABSTRACT

Large language model (LLM)-powered services are gaining popularity across applications because of their exceptional performance on many tasks, such as sentiment analysis and question answering. Recently, research has begun exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this work, we discuss two challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and its lack of contextualized robustness. Our approach is interdisciplinary, drawing on considerations from philosophy, psychology, and computer science. We argue that addressing the humanization of LLM-enhanced CAI hinges on reflecting on what it means to simulate “human-like” features with LLMs and on what role these systems should play in interactions with humans. Further, ensuring the contextualized robustness of LLMs requires considering the specificities of language production in depressed individuals, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.