
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Apr 11, 2025
Date Accepted: May 26, 2025

The final, peer-reviewed published version of this preprint can be found here:

Moell B, Sand Aronsson F

Harm Reduction Strategies for Thoughtful Use of Large Language Models in the Medical Domain: Perspectives for Patients and Clinicians

J Med Internet Res 2025;27:e75849

DOI: 10.2196/75849

PMID: 40712151

PMCID: 12296254

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Harm Reduction Strategies for Thoughtful Use of Large Language Models in the Medical Domain: Perspectives for Patients and Clinicians

  • Birger Moell
  • Fredrik Sand Aronsson

ABSTRACT

The integration of Large Language Models (LLMs) into healthcare presents transformative opportunities alongside significant risks, necessitating a proactive approach to balance innovation with patient safety. This paper advocates for a harm reduction framework to address the double-edged nature of LLM use in medicine, where patients and clinicians face distinct vulnerabilities. For patients, risks include misinformation, privacy breaches, biased outputs, and delayed care due to over-reliance on unverified LLM-generated advice. For clinicians, challenges encompass diagnostic errors, liability concerns, workflow disruptions, and the erosion of clinical judgment. Rather than advocating prohibition, this work proposes evidence-based strategies to mitigate harm while maximizing utility. Key recommendations include: fostering critical health literacy and verification habits among patients; enforcing institutional guidelines for clinicians that prioritize "human-in-the-loop" validation; enhancing transparency in LLM outputs and data practices; and implementing secure, bias-mitigated AI tools tailored to medical contexts. The paper underscores the necessity of thoughtful use, emphasizing LLMs as assistive, never autonomous, tools, and highlights collaborative efforts among developers, policymakers, healthcare institutions, and educators to address challenges such as regulatory ambiguity, equity gaps, and rapid technological evolution. By embedding accountability, transparency, and continuous oversight, this framework aims to steer the ethical integration of LLMs toward enhancing, not undermining, core medical values: patient safety, trust, and equitable care delivery.


Citation

Please cite as:

Moell B, Sand Aronsson F. Harm Reduction Strategies for Thoughtful Use of Large Language Models in the Medical Domain: Perspectives for Patients and Clinicians. J Med Internet Res 2025;27:e75849. DOI: 10.2196/75849. PMID: 40712151. PMCID: 12296254


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.