
Currently submitted to: Journal of Medical Internet Research

Date Submitted: Dec 23, 2025
Open Peer Review Period: Dec 24, 2025 - Feb 18, 2026

NOTE: This is an unreviewed preprint

Warning: This is an unreviewed preprint. Readers are cautioned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (in which case a note "no longer under consideration" will appear above).


Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.



Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Using GPT-4 to Automate the Generation of Lay Summaries for Cancer Publications: Human-centric Quantitative and Qualitative Evaluation

  • Emma Purdie; 
  • Tony Yu; 
  • Jochen Weile; 
  • Diana Lemaire; 
  • Mélanie Courtot

ABSTRACT

Background:

Cancer research literature is often dense with technical jargon that is difficult for the average person to digest. Individuals interested in research studies may want to contribute through patient partner engagement or sample donation but find the relevant literature overwhelming. By generating lay summaries, previously inaccessible research papers become easier to comprehend, especially for patient partners or data donors. As large language models (LLMs) continue to advance, so does their capability to summarize large texts.

Objective:

In this study, we examined whether LLMs can produce lay summaries of scientific literature at scale while maintaining readability and accuracy relative to their source texts.

Methods:

We developed a tool that uses GPT-4-Turbo to generate lay summaries of open-access articles from either their abstracts or their full texts. Prompt development targeted an 8th-grade reading level, assessed with the Flesch-Kincaid Grade Level. Human-review metrics were used to evaluate the readability and accuracy of summaries generated from abstracts versus from full-text articles.
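The readability target mentioned above is based on the standard Flesch-Kincaid Grade Level formula, FKGL = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The paper does not specify its implementation; the sketch below is a minimal illustration with a heuristic syllable counter (published evaluations typically use an established library such as textstat rather than hand-rolled counting):

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)
```

A score near 7, as reported in the Results, corresponds roughly to text a 7th-grade student could read; short sentences of short words can score well below grade 1.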

Results:

The average Flesch-Kincaid Grade Level score was 7.13 for abstract-based summaries and 7.39 for full text-based summaries, indicating summaries at around a 7th-grade reading level. Human-review metrics showed these summaries were of similar readability and accuracy whether generated from abstracts or from full-text articles, with mean accuracy scores from human review of 7.09 vs 7.42 out of 10, respectively. Additionally, qualitative patient-based assessment indicated these summaries would encourage participation in research studies.

Conclusions:

By generating lay summaries for complex and lengthy research papers, their scientific information becomes accessible to a larger audience, including patient partners interested in contributing to cancer research. Summaries that are easy to understand will allow participants to make informed decisions about their involvement and to appreciate the impact of their contributions if and when the results are published.


 Citation

Please cite as:

Purdie E, Yu T, Weile J, Lemaire D, Courtot M

Using GPT-4 to Automate the Generation of Lay Summaries for Cancer Publications: Human-centric Quantitative and Qualitative Evaluation

JMIR Preprints. 23/12/2025:89995

DOI: 10.2196/preprints.89995

URL: https://preprints.jmir.org/preprint/89995




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.