Currently submitted to: Journal of Medical Internet Research
Date Submitted: Dec 8, 2025
Open Peer Review Period: Dec 8, 2025 - Feb 2, 2026
NOTE: This is an unreviewed preprint. Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (a note "no longer under consideration" will appear above).
Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).
Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Simulating the Patient's Perspective: Promise and Pitfalls of LLMs in Patient-Centric Communication
ABSTRACT
Background:
Large Language Models (LLMs) have shown broad applicability in medicine, including the generation of clinical documents. Beyond content creation, LLMs can also be used to evaluate the quality of medical documents. Because of LLMs' ability to simulate (or impersonate) specific personas, they can offer diverse perspectives (such as those of healthcare professionals versus patients with lower health literacy) on the clarity of medical texts.
Objective:
The primary objective of this research was to evaluate the ability of LLMs to simulate diverse user personas, varying by demographic profile, including educational background, gender, and visit frequency, for the task of interpreting ICU discharge summaries. The study aimed to benchmark the clarity assessments generated by these LLM personas against a baseline established by human participants with corresponding backgrounds, in order to highlight the potential and limitations of using current LLMs to create personalized health information.
Methods:
We evaluated the ability of LLMs to simulate diverse user personas for the task of interpreting ICU discharge summaries. LLMs were prompted to adopt personas with varied demographic profiles, including different educational backgrounds. The resulting LLM-generated assessments of the summaries’ clarity were then benchmarked against a baseline established by human participants with corresponding backgrounds.
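The persona-prompting setup described above can be sketched in code. The following is a minimal, hypothetical illustration of how a demographic persona might be composed into an LLM system prompt; the attribute names, wording, and prompt template are this sketch's assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of persona prompting for clarity assessment of
# ICU discharge summaries. The persona attributes and prompt wording
# are illustrative assumptions, not the study's actual protocol.

def build_persona_prompt(education, gender=None, er_visits=None):
    """Compose a system prompt asking an LLM to adopt a patient persona."""
    parts = [f"You are a patient with {education} education."]
    if gender:
        parts.append(f"You are {gender}.")
    if er_visits is not None:
        parts.append(
            f"You have visited the emergency room {er_visits} times "
            "in the past year."
        )
    parts.append(
        "Read the ICU discharge summary below and rate how clearly you, "
        "as this patient, can understand the key medical information."
    )
    return " ".join(parts)

# Example: a persona combining education, gender, and visit frequency,
# mirroring the demographic variables the study varied.
prompt = build_persona_prompt("a high-school", gender="female", er_visits=2)
```

The resulting string would be passed as the system message of a chat-completion request, with the discharge summary as the user message; the model's response is then scored against the matched human baseline.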
Results:
LLMs demonstrated a strong ability to simulate personas based on educational attainment, accurately interpreting key medical information in 88% of cases. However, the models’ performance varied widely when other demographic variables were introduced. For instance, persona performance was highly erratic based on gender, with simulated male personas achieving 97% accuracy while female personas achieved only 44%. The inclusion of additional details, such as the frequency of prior emergency room visits, further degraded the models’ performance.
Conclusions:
This research highlights both the potential and the significant limitations of using LLMs to create personalized health information. While LLMs are promising for simulating user perspectives based on education, the current models exhibit unpredictable performance when tasked with incorporating other fundamental demographic traits like gender.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.