Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Nov 14, 2025
Date Accepted: Apr 9, 2026

The final, peer-reviewed published version of this preprint can be found here:

Yarar F, Addis P, Fairweather M, Craig D, O'Keefe H

Concerns of Using Large Language Models in Health Care Research and Practice: Umbrella Review

J Med Internet Res 2026;28:e87804

DOI: 10.2196/87804

Concerns of Using Large Language Models in Healthcare Research and Practice: An Umbrella Review

  • Feyza Yarar; 
  • Pauline Addis; 
  • Megan Fairweather; 
  • Dawn Craig; 
  • Hannah O'Keefe

ABSTRACT

Background:

Large language models (LLMs), such as ChatGPT, are rapidly evolving, and their applications in healthcare are increasing. This umbrella review examines concerns related to the use of LLMs in healthcare and healthcare research.

Objective:

To map the concerns associated with the use of LLMs in healthcare and healthcare research.

Methods:

Searches were conducted in seven databases in February 2025. Screening was conducted in two stages, with independent screening by two reviewers. Included studies were appraised for quality using AMSTAR-2 and GRADE. Data were extracted using a piloted form and narratively synthesised following SWiM guidelines.

Results:

The search retrieved 381 systematic reviews, of which 39 met the inclusion criteria. Three main themes emerged from the narrative synthesis; in order of most to least frequently discussed, these were (1) technical capability; (2) ethical, legal, and societal concerns; and (3) costs. Twelve distinct populations were identified, including researchers and clinicians in various medical specialities. The included reviews were assessed to be of low quality.

Conclusions:

The included reviews raise a wide variety of overlapping and interlinked concerns that consistently affect many populations. It is widely considered that LLMs should be used with caution, and only as a supplementary tool, and it can be argued that research into the responsible use of LLMs should be an integral part of healthcare. New areas of research should focus on LLMs other than ChatGPT, address the needs of disadvantaged groups, and thoroughly address ethical considerations.


Citation

Please cite as:

Yarar F, Addis P, Fairweather M, Craig D, O'Keefe H

Concerns of Using Large Language Models in Health Care Research and Practice: Umbrella Review

J Med Internet Res 2026;28:e87804

DOI: 10.2196/87804


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.