Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Dec 11, 2024
Date Accepted: Apr 10, 2025

The final, peer-reviewed published version of this preprint can be found here:

Will J, Gupta M, Zaretsky J, Dowlath A, Testa P, Feldman J

Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study

J Med Internet Res 2025;27:e69955

DOI: 10.2196/69955

PMID: 40465378

PMCID: 12177420

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Leveraging Large Language Models to Improve Readability of Online Patient Education Materials: Cross-sectional Study

  • John Will
  • Mahin Gupta
  • Jonah Zaretsky
  • Aliesha Dowlath
  • Paul Testa
  • Jonah Feldman

ABSTRACT

Background:

Patient education materials (PEMs) available online are essential for patient empowerment. However, studies have shown that these materials often exceed the recommended sixth-grade reading level, making them difficult for many patients to understand. Large language models (LLMs) have the potential to transform PEMs into more readable educational content.

Objective:

We sought to evaluate whether three LLMs (ChatGPT, Gemini, and Claude) can optimize the readability of PEMs to the recommended reading level without compromising accuracy.

Methods:

This cross-sectional study used 60 randomly selected PEMs available online from three websites. We prompted the LLMs to simplify the reading level of these PEMs. The primary outcome was the readability of the original online PEMs compared with the LLM-simplified versions. Readability scores were calculated using four validated indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), and Simple Measure of Gobbledygook Index (SMOGI). Accuracy and understandability were also assessed as balancing measures, with understandability measured using the Patient Education Materials Assessment Tool-Understandability (PEMAT-U).
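
The abstract does not state which software computed these indices. As a minimal sketch, all four scores can be obtained in Python with the open-source textstat package (an assumed toolchain, not necessarily the authors'); the classic formulas are noted in the comments:

    import textstat  # pip install textstat

    # Placeholder text; in the study this would be an original or LLM-simplified PEM.
    # Note: SMOG is designed for samples of 30+ sentences, so short snippets are unreliable.
    pem_text = "High blood pressure makes your heart work harder than it should."

    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words); higher is easier.
    fre = textstat.flesch_reading_ease(pem_text)
    # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59; maps to a US grade level.
    fkgl = textstat.flesch_kincaid_grade(pem_text)
    # GFI = 0.4*((words/sentences) + 100*(complex words/words)).
    gfi = textstat.gunning_fog(pem_text)
    # SMOG = 1.0430*sqrt(polysyllables*(30/sentences)) + 3.1291.
    smogi = textstat.smog_index(pem_text)

    print(f"FRE={fre:.1f}  FKGL={fkgl:.1f}  GFI={gfi:.1f}  SMOGI={smogi:.1f}")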

Results:

The original readability scores for the American Heart Association (AHA), American Cancer Society (ACS), and American Stroke Association (ASA) websites were above the recommended sixth-grade level, with mean grade level scores of 10.7, 10.0, and 9.6, respectively. After optimization by the LLMs, readability scores improved significantly across all three websites compared with the original text. By Wilcoxon signed-rank test, ChatGPT improved the mean readability from 10.1 to 7.6 (P<.001), Gemini to 6.6 (P<.001), and Claude to 5.6 (P<.001). All LLMs significantly reduced word counts, from means ranging from 410.9 to 953.9 words to means ranging from 201.9 to 248.1 words. None of the ChatGPT-simplified PEMs were inaccurate, whereas 3.3% of the Gemini-simplified and Claude-simplified PEMs were. Baseline understandability scores, as measured by PEMAT-U, were preserved across all LLM-simplified versions.
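
For readers unfamiliar with the test: the Wilcoxon signed-rank test is a non-parametric paired comparison, here pairing each original PEM with its simplified version. An illustrative sketch in Python with SciPy, using made-up placeholder scores rather than the study's data:

    from scipy.stats import wilcoxon

    # Hypothetical paired grade-level scores for illustration only (not the study's data).
    original = [10.5, 9.8, 11.2, 10.0, 9.5, 10.9, 10.3, 9.9]
    simplified = [7.1, 6.9, 8.0, 7.4, 6.8, 7.5, 7.7, 7.2]

    # Tests whether the paired differences are symmetrically distributed around zero.
    stat, p = wilcoxon(original, simplified)
    print(f"W = {stat:.1f}, P = {p:.4f}")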

Conclusions:

This cross-sectional study demonstrates that LLMs can significantly enhance the readability of online PEMs, making them more accessible to a broader audience. However, variability in model performance and the inaccuracies observed underscore the need for human review of LLM output. Further study is needed to identify the most reliable and effective LLM-based approaches to improving readability.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.