
Accepted for/Published in: JMIR Bioinformatics and Biotechnology

Date Submitted: Jul 16, 2024
Date Accepted: Sep 23, 2024
Date Submitted to PubMed: Sep 25, 2024

The final, peer-reviewed published version of this preprint can be found here:

Ethical Considerations in Human-Centered AI: Advancing Oncology Chatbots Through Large Language Models

Chow JCL, Li K

JMIR Bioinform Biotech 2024;5:e64406

DOI: 10.2196/64406

PMID: 39321336

PMCID: 11579624

Ethical Considerations in Human-Centered AI: Advancing Oncology Chatbots Through Large Language Models

  • James C. L. Chow; 
  • Kay Li

ABSTRACT

The integration of chatbots in oncology underscores the pressing need for human-centered AI that addresses the specific concerns of patients and their families with greater empathy and accuracy. Human-centered AI is artificial intelligence designed around the human experience, emphasizing ethical principles, empathy, and user-centric approaches so that technology aligns with human values and needs. This review critically explores the ethical implications of employing large language models (LLMs) such as GPT-3 and GPT-4 in oncology chatbots for patients. By tracing the evolution of AI from neural networks to advanced LLMs, the paper investigates how these models mimic human speech and behavior, thereby influencing the design of ethical and compassionate AI systems. It identifies key strategies for ethically developing oncology chatbots, focusing on the potential biases arising from extensive datasets and neural networks. The review highlights how the training methodologies of LLMs, including fine-tuning, can produce biased outputs. The findings demonstrate that while LLMs excel at understanding and generating human language, they present significant ethical challenges, particularly bias that may favour certain demographic groups while neglecting others. These biases often stem from biases present in the training data, as well as from the algorithms' tendency to perpetuate and amplify them through iterative learning. Consequently, LLMs may inadvertently favour majority groups in the training data, such as affluent or Western populations, while neglecting minority groups, non-Western cultures, and marginalized communities. The study emphasizes the necessity of integrating human-centric values into AI, offers insights into mitigating bias in LLMs, and examines broader implications for AI and oncology. Ultimately, it advocates aligning AI systems with ethical principles to create human-centered oncology chatbots.
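The demographic-bias concern raised in the abstract can be made concrete with a toy auditing check. The sketch below is not from the paper; the group labels, data, and function names are all hypothetical. It computes per-group rates at which chatbot answers were judged adequate and reports the largest gap between groups, a simple demographic-parity-style signal that an evaluation set may be skewed toward one population:

```python
# Hypothetical bias-audit sketch: rate chatbot answers per demographic
# group and flag large gaps in adequacy between groups. All data are toy.
from collections import defaultdict

def adequacy_rates(records):
    """Per-group fraction of responses rated adequate.

    `records` is an iterable of (group_label, was_adequate) pairs.
    """
    totals, adequate = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        adequate[group] += int(ok)
    return {g: adequate[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in adequacy rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy evaluation data: reviewer ratings of answers to prompts written
# for two (hypothetical) patient groups.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = adequacy_rates(records)
print(rates)              # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap(rates))  # 0.5 — a large gap flags potential bias
```

A real audit would of course need representative prompts, blinded clinical raters, and many more responses per group; the point here is only that the disparity the review warns about can be quantified with very simple bookkeeping.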




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.