Currently accepted at: Journal of Medical Internet Research
Date Submitted: Oct 25, 2025
Open Peer Review Period: Oct 27, 2025 - Dec 22, 2025
Date Accepted: Feb 24, 2026
This paper has been accepted and is currently in production.
It will appear shortly at DOI 10.2196/86502.
Optimization of University Counseling Consent Forms with Large Language Models: A Multidimensional Comparative Evaluation
ABSTRACT
Background:
Mental health problems among university students are a growing global concern, yet limited resources and inadequate understanding of counseling procedures often delay support. Informed consent forms (ICFs) are vital for protecting rights and autonomy, but many are incomplete, ambiguous, or overly technical, and few institutions can effectively optimize them. Large language models (LLMs) offer scalable, low-cost solutions to enhance clarity and accessibility.
Objective:
This study aimed to evaluate whether LLM-based optimization could improve the structure, readability, content quality, and comprehensibility of university counseling ICFs, and to compare the performance of two advanced models—ChatGPT-5 and Grok-4.
Methods:
Counseling ICFs from 33 Chinese universities were collected and optimized using two advanced LLMs, ChatGPT-5 and Grok-4. A multidimensional framework assessed textual structure and readability, content quality from counselors’ perspectives, and comprehension from clients’ perspectives. Evaluations were conducted by mental health professionals and student volunteers. Wilcoxon signed-rank tests and linear mixed-effects models were applied for comparison and validation.
Results:
Compared with the originals, LLM-optimized ICFs showed significant gains across all dimensions. The Lee–Yang readability index decreased from 28.68 (SD 5.69) to 22.39 (SD 2.13) with ChatGPT-5 and 24.37 (SD 2.32) with Grok-4 (both P<.001), while tone friendliness increased from 2.57 (SD 0.29) to 2.67 (SD 0.12) and 2.67 (SD 0.13), respectively. Expert-rated content quality improved from 45.33 (SD 8.74) to 52.54 (SD 7.92) and 55.49 (SD 7.81) (both P<.001), driven primarily by greater specificity and more complete coverage of key items. Client comprehension scores rose from 19.02 (SD 1.32) to 22.33 (SD 0.81) and 22.05 (SD 0.90) (both P<.001), reflecting higher clarity, readability, and acceptability. Linear mixed-effects models confirmed these findings.
Conclusions:
LLM-based rewriting markedly improved the clarity, completeness, and readability of counseling consent forms. By enhancing linguistic accessibility and professional precision, these models can support clearer communication and stronger counselor–client understanding. For universities with limited counseling resources, integrating LLM-assisted optimization may represent a practical step toward standardized, comprehensible, and client-centered counseling documentation. Clinical Trial: Not applicable.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.