Currently submitted to: JMIR Human Factors

Date Submitted: Dec 23, 2025
Open Peer Review Period: Dec 29, 2025 - Feb 23, 2026

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint (What is a preprint?). Readers are cautioned that this document has not been peer reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to change before final publication, if accepted; it may also have been rejected or withdrawn (in which case a note "no longer under consideration" will appear above).

Peer review me: Readers with relevant interest and expertise are encouraged to sign up as peer reviewers while the paper is within an open peer-review period (in that case, a "Peer Review Me" button to sign up as a reviewer is displayed above). All preprints currently open for review are listed here. Outside the formal open peer-review period, we encourage you to tweet about the preprint.

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings; however, the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.

Submit: To post your own preprint, simply submit to any JMIR journal and choose the appropriate settings to expose your submitted version as a preprint.

Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless marked as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Simplifying Digital Therapeutics Explanations Using Large Language Models: Randomized Online Experiments on Readability and Perceived Comprehension

  • JunYoung Seo; 
  • Moses Yook; 
  • Dai Jin Kim; 
  • JunHee Lee; 
  • Jae Hyun Yoo; 
  • GiHwan Byeon; 
  • In Young Choi

ABSTRACT

Background:

Digital therapeutics (DTx) are evidence-based software interventions with the potential to treat health conditions. However, uptake remains limited by low public awareness and overly complex patient education materials that exceed recommended readability levels. Large language models (LLMs) may simplify such content; however, their effect on actual comprehension has not been empirically demonstrated.

Objective:

To examine whether LLM-based simplification of DTx explanatory materials enhances public comprehension and subjective evaluations of readability, clarity, and comprehensibility compared with manufacturer-provided documents.

Methods:

We developed a simplification tool using the GPT-4o API, configured for deterministic outputs and guided by structured readability instructions. Original DTx explanatory materials about insomnia and nicotine dependence were obtained from manufacturers and transformed into simplified versions. Two randomized, between-subject online experiments were conducted (N = 1,000; 500 per condition). Participants were stratified by age and sex and screened for relevance (Insomnia Severity Index ≥8 for the insomnia experiment; smoking ≥5 cigarettes/day for the nicotine dependence experiment). Within each experiment, participants were randomly assigned to review either the original or the LLM-simplified explanation. Perceived understanding was assessed before and after exposure, and ratings of ease, clarity, and comprehensibility were collected after exposure.
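The Methods describe a GPT-4o-based tool configured for deterministic outputs and guided by structured readability instructions. The abstract does not disclose the actual prompt or implementation; as a minimal sketch only, assuming the OpenAI Python SDK's chat-completions interface, with a hypothetical instruction text and illustrative function names, such a call might look like:

```python
# Hypothetical sketch of an LLM-based simplification call. The instruction
# text and function names below are illustrative assumptions, not the
# authors' actual prompt or code.

READABILITY_INSTRUCTIONS = (
    "Rewrite the following patient education text in plain language. "
    "Use short sentences, define medical terms on first use, and keep "
    "all factual content unchanged."
)

def build_messages(source_text: str) -> list[dict]:
    """Assemble the chat messages for one simplification request."""
    return [
        {"role": "system", "content": READABILITY_INSTRUCTIONS},
        {"role": "user", "content": source_text},
    ]

def simplify(source_text: str, client) -> str:
    """Request a deterministic simplification from GPT-4o.

    `client` is an instantiated OpenAI client; temperature=0 reflects the
    "deterministic outputs" configuration described in the Methods.
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=build_messages(source_text),
        temperature=0,
    )
    return response.choices[0].message.content
```

Keeping the readability instructions in a fixed system prompt and pinning the temperature to 0 means the same source document yields a reproducible simplified version, which matters for a controlled experiment where all participants in a condition must see identical material.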

Results:

Repeated-measures analysis of variance revealed significant Group × Time interaction effects on perceived understanding in both experiments: insomnia (F₁,₄₉₈ = 24.8; P<.001) and nicotine dependence (F₁,₄₉₈ = 14.1; P<.001), with greater improvements in the LLM-simplified groups. Mann–Whitney U tests further showed that LLM-simplified explanations were rated as significantly easier, clearer, and more comprehensible than the original versions in both experiments (all P<.05).

Conclusions:

Compared with manufacturer-provided original materials, LLM-simplified DTx explanations led to greater improvements in perceived understanding and subjective readability among lay audiences, even after a single exposure. This finding highlights the scalability of LLM-based simplification as a strategy to address health literacy barriers. Integrating such tools into patient education may enhance access to digital therapeutic information and support broader societal diffusion.


 Citation

Please cite as:

Seo J, Yook M, Kim DJ, Lee J, Yoo JH, Byeon G, Choi IY

Simplifying Digital Therapeutics Explanations Using Large Language Models: Randomized Online Experiments on Readability and Perceived Comprehension

JMIR Preprints. 23/12/2025:89451

DOI: 10.2196/preprints.89451

URL: https://preprints.jmir.org/preprint/89451


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.