Accepted for/Published in: JMIR AI
Date Submitted: Feb 3, 2025
Open Peer Review Period: Feb 3, 2025 - Mar 31, 2025
Date Accepted: Aug 8, 2025
Performance of a Small Language Model Versus a Large Language Model in Answering Glaucoma Frequently Asked Patient Questions: Development and Usability Study
ABSTRACT
Background:
Large language models (LLMs) have been shown to answer patient questions in ophthalmology with quality comparable to that of human experts. However, concerns remain regarding their use, particularly related to patient privacy and potential inaccuracies that could compromise patient safety. This study aimed to compare the performance of an LLM in answering frequently asked patient questions about glaucoma with that of a small language model (SLM) trained locally on ophthalmology-specific literature.
Objective:
This study assessed the efficacy of an SLM enhanced with retrieval-augmented generation (RAG) compared with ChatGPT 4.0 for answering common patient inquiries regarding glaucoma. Glaucoma specialists evaluated the quality of the answers, and readability was assessed using standardized methods.
Methods:
We compiled 35 frequently asked questions on glaucoma, categorized into six domains: pathogenesis, risk factors, clinical manifestations, diagnosis, treatment and prevention, and prognosis. Each question was posed to both the SLM, which used a RAG framework and was trained on ophthalmology-specific literature, and the LLM (ChatGPT 4.0, OpenAI).
Results:
The answers from the SLM demonstrated quality comparable to those of ChatGPT 4.0, scoring 7.9 ± 1.2 and 7.4 ± 1.5 out of 9 points, respectively (P=.13). Accuracy ratings were consistent overall and across all six glaucoma care domains. Both models provided answers considered unsuitable for healthcare-related information, as they were difficult for the average layperson to read.
Conclusions:
Both models generated accurate content, but the answers were considered challenging for the average layperson to understand, making them unsuitable for healthcare-related information. Given the specialized SLM’s comparable performance to the LLM, its high customization potential, lower cost, and ability to operate locally, it presents a viable option for deploying natural language processing in real-world ophthalmology clinical settings.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.