Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Apr 11, 2025
Date Accepted: May 26, 2025
Harm Reduction Strategies for Thoughtful Use of Large Language Models in the Medical Domain: Perspectives for Patients and Clinicians
ABSTRACT
The integration of Large Language Models (LLMs) into healthcare presents transformative opportunities alongside significant risks, necessitating a proactive approach to balance innovation with patient safety. This paper advocates for a harm reduction framework to address the double-edged nature of LLM use in medicine, where patients and clinicians face distinct vulnerabilities. For patients, risks include misinformation, privacy breaches, biased outputs, and delayed care due to over-reliance on unverified LLM-generated advice. For clinicians, challenges encompass diagnostic errors, liability concerns, workflow disruptions, and the erosion of clinical judgment. Rather than advocating prohibition, this work proposes evidence-based strategies to mitigate harm while maximizing utility. Key recommendations include: fostering critical health literacy and verification habits among patients; enforcing institutional guidelines for clinicians that prioritize "human-in-the-loop" validation; enhancing transparency in LLM outputs and data practices; and implementing secure, bias-mitigated AI tools tailored to medical contexts. The paper underscores the necessity of thoughtful use—emphasizing LLMs as assistive, never autonomous, tools—and highlights collaborative efforts among developers, policymakers, healthcare institutions, and educators to address challenges such as regulatory ambiguity, equity gaps, and rapid technological evolution. By embedding accountability, transparency, and continuous oversight, this framework aims to steer the ethical integration of LLMs toward enhancing—not undermining—core medical values: patient safety, trust, and equitable care delivery.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.