Accepted for/Published in: JMIR Bioinformatics and Biotechnology
Date Submitted: Jul 16, 2024
Date Accepted: Sep 23, 2024
Date Submitted to PubMed: Sep 25, 2024
Ethical Considerations in Human-Centered AI: Advancing Oncology Chatbots through Large Language Models
ABSTRACT
The integration of chatbots in oncology underscores the pressing need for human-centered AI that addresses the specific concerns of patients and their families with greater empathy and accuracy. Human-centered AI is artificial intelligence designed with a focus on the human experience, emphasizing ethical principles, empathy, and user-centric approaches so that technology aligns with human values and needs. This review critically explores the ethical implications of employing large language models (LLMs) such as GPT-3 and GPT-4 in oncology chatbots for patients. By tracing the evolution of AI from early neural networks to advanced LLMs, the paper examines how these models mimic human speech and behavior, and how this influences the design of ethical and compassionate AI systems. It identifies key strategies for developing oncology chatbots ethically, focusing on the biases that can arise from extensive datasets and neural network architectures. The review highlights how LLM training methodologies, including fine-tuning, can produce biased outputs. The findings demonstrate that while LLMs excel at understanding and generating human language, they present significant ethical challenges, particularly bias that favours certain demographic groups while neglecting others. These biases often stem from biases inherent in the training data, as well as from the algorithms' tendency to perpetuate and amplify them through iterative learning. Consequently, LLMs may inadvertently favour groups that are overrepresented in the training data, such as affluent or Western populations, while neglecting minority groups, non-Western cultures, and marginalized communities. The study emphasizes the necessity of integrating human-centric values into AI, offers insights on mitigating bias in LLMs, and examines the broader implications for AI and oncology. Ultimately, it advocates aligning AI systems with ethical principles to create human-centered oncology chatbots.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.