
Accepted for/Published in: JMIR Formative Research

Date Submitted: Jul 31, 2025
Date Accepted: Mar 26, 2026

The final, peer-reviewed published version of this preprint can be found here:

Hata N, Oami T, Kawakami E, Hanai A, Nakada Ta

Use of Commercially Available Large Language Models to Generate Information Leaflets on Post–Intensive Care Syndrome: Clinical Utility Assessment

JMIR Form Res 2026;10:e81606

DOI: 10.2196/81606

PMID: 42133866

Use of Commercially Available Large Language Models to Generate Information Leaflets on Post-Intensive Care Syndrome: A Clinical Utility Assessment

  • Nanami Hata; 
  • Takehiko Oami; 
  • Eiryo Kawakami; 
  • Akiko Hanai; 
  • Taka-aki Nakada

ABSTRACT

Background:

Patients and their families without medical knowledge may find professional healthcare information difficult to understand. Using large language models (LLMs) to simplify and translate complex medical content holds promise for improving comprehension while reducing the burden on healthcare providers tasked with delivering explanations.

Objective:

This study aims to evaluate the quality of information leaflets generated using commercially available LLMs.

Methods:

Informational texts on post-intensive care syndrome (PICS) were generated using combinations of 6 different LLMs and 4 distinct prompt engineering and retrieval-augmented generation (RAG) strategies. Ten individuals, including healthcare professionals and nonmedical personnel, assessed the texts for readability, accuracy, and other attributes on a 10-point Likert scale. Additionally, an LLM was used to perform a parallel assessment. The qualitative scores were then compared across the different LLMs and evaluators.
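The generation design above (every LLM crossed with every prompt strategy, with RAG variants prepending guideline text) can be sketched as follows. This is an illustrative sketch only: the model names other than LLaMA 3 70B, the strategy labels, the `build_prompt` helper, and the guideline excerpt are hypothetical stand-ins, not the study's actual materials.

```python
from itertools import product

# Hypothetical model and strategy names for illustration; the study's exact
# model list and prompt wording are not reproduced here.
MODELS = ["llama-3-70b", "model-b", "model-c", "model-d", "model-e", "model-f"]
STRATEGIES = ["simple", "step_by_step", "simple+rag", "step_by_step+rag"]

def build_prompt(strategy: str, topic: str, guideline_excerpt: str) -> str:
    """Assemble a leaflet-generation prompt for one strategy.

    "+rag" variants prepend retrieved clinical-guideline text;
    "step_by_step" variants ask the model to plan before writing.
    """
    parts = []
    if strategy.endswith("+rag"):
        parts.append(f"Reference (clinical guideline excerpt):\n{guideline_excerpt}")
    if strategy.startswith("step_by_step"):
        parts.append("First list the key points a lay reader needs, then write the leaflet.")
    parts.append(f"Write a plain-language patient information leaflet about {topic}.")
    return "\n\n".join(parts)

# One prompt per model x strategy combination (6 x 4 = 24 texts to evaluate).
prompts = {
    (model, strategy): build_prompt(
        strategy,
        topic="post-intensive care syndrome (PICS)",
        guideline_excerpt="PICS encompasses physical, cognitive, and mental health impairments ...",
    )
    for model, strategy in product(MODELS, STRATEGIES)
}
print(len(prompts))  # 24
```

Each of the 24 prompts would then be sent to its corresponding model, and the resulting texts scored by the human and LLM evaluators.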

Results:

The generated texts achieved an average score of 6.8 or higher across all evaluation criteria, with no potentially harmful content. The text generated by LLaMA 3 70B, using a step-by-step approach combined with RAG based on clinical guidelines, received the highest average evaluation score. By contrast, the lowest-rated text was produced using a simple prompt without RAG. Although no consistent trends were observed across LLMs or prompt engineering strategies, the use of RAG was typically associated with higher evaluation scores. Notably, the ratings differed between professional and nonprofessional evaluators. Assessments conducted by the LLM did not demonstrate the same consistency as those conducted by human evaluators.

Conclusions:

Informational texts generated using LLMs received acceptable evaluations from human evaluators with no indication of harmful content. The use of commercially available LLMs can contribute to the creation of high-quality information leaflets for patients and their families.




© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.