Accepted for/Published in: JMIR AI
Date Submitted: Jun 16, 2025
Open Peer Review Period: Jul 3, 2025 - Aug 28, 2025
Date Accepted: Oct 31, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Evaluating Large Language Models for Axial Spondyloarthritis Patient Education: A Delphi-Based Quality Assessment
ABSTRACT
Background:
Axial spondyloarthritis (axSpA), a chronic autoinflammatory disease characterized by heterogeneous clinical manifestations, presents significant challenges in long-term patient self-management. Despite growing applications of large language models (LLMs) in healthcare, their efficacy in providing axSpA-specific health advice remains unassessed.
Objective:
To construct a patient-oriented needs assessment tool and conduct a systematic evaluation of LLM-generated health advice quality for axSpA patients.
Methods:
A three-round Delphi consensus process was employed to develop the questionnaire, which was subsequently distributed to 84 axSpA patients and 26 rheumatologists. Patient-identified concerns were processed through five LLM platforms (ChatGPT-4, DeepSeek R1, Hunyuan T1, Kimi k1.5, Wenxin X1). Responses were assessed using guideline-based accuracy scoring and AlphaReadabilityChinese analysis tools.
Results:
The validated questionnaire revealed age-related differences in priorities: younger patients expressed significantly greater concern than those over 40 regarding axSpA symptom management and medication side effects. Divergent priorities between clinicians and patients were observed regarding diagnostic mimics and drug mechanisms. LLM performance varied by domain: accuracy peaked in Diagnosis/Examination (avg. 20.4/25) but dipped in Treatment/Medication (19.3). ChatGPT-4 and Kimi demonstrated superior readability; safety remained high overall (disclaimer rates: ChatGPT-4/DeepSeek-R1 100%, Kimi 88%).
Conclusions:
The observed age-stratified needs and clinician-patient communication gaps highlight the necessity for tailored patient education programs. LLMs demonstrated robust performance across evaluation metrics, particularly ChatGPT-4, which achieved 94% overall compliance with clinical guidelines. These AI tools show potential as scalable adjuncts for ongoing axSpA patient support, though human oversight remains crucial for complex clinical decisions.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.