
Accepted for/Published in: JMIR AI

Date Submitted: Feb 3, 2025
Open Peer Review Period: Feb 3, 2025 - Mar 31, 2025
Date Accepted: Aug 8, 2025

The final, peer-reviewed published version of this preprint can be found here:

Performance of a Small Language Model Versus a Large Language Model in Answering Glaucoma Frequently Asked Patient Questions: Development and Usability Study

Faneli AC, Scherer R, Muralidhar R, Guerreiro M, Beniz L, Vilas Boas V, Costa D, Jammal AA, Medeiros FA

JMIR AI 2026;5:e72101

DOI: 10.2196/72101

PMID: 41493946

PMCID: 12772937

Performance of a Small Language Model Versus a Large Language Model in Answering Glaucoma Frequently Asked Patient Questions: Development and Usability Study

  • Adriano Cypriano Faneli; 
  • Rafael Scherer; 
  • Rohit Muralidhar; 
  • Marcus Guerreiro; 
  • Luiz Beniz; 
  • Veronica Vilas Boas; 
  • Douglas Costa; 
  • Alessandro A. Jammal; 
  • Felipe A. Medeiros

ABSTRACT

Background:

Large language models (LLMs) have been shown to answer patient questions in ophthalmology with quality comparable to that of human experts. However, concerns remain regarding their use, particularly related to patient privacy and potential inaccuracies that could compromise patient safety. This study aimed to compare the performance of an LLM in answering frequently asked patient questions about glaucoma with that of a small language model (SLM) trained locally on ophthalmology-specific literature.

Objective:

This study assessed the efficacy of an SLM enhanced with retrieval-augmented generation (RAG) compared with ChatGPT 4.0 in answering common patient questions about glaucoma. Glaucoma specialists evaluated the quality of the answers, and readability was assessed using standardized methods.

Methods:

We compiled 35 frequently asked questions on glaucoma, categorized into six domains: pathogenesis, risk factors, clinical manifestations, diagnosis, treatment and prevention, and prognosis. Each question was posed both to a small language model (SLM) using a retrieval-augmented generation (RAG) framework, trained on ophthalmology-specific literature, and to a large language model (LLM; ChatGPT 4.0, OpenAI).
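The RAG framework described above generally works by retrieving the most relevant passages from a domain-specific corpus and prepending them to the question as context for the model. The following is a minimal illustrative sketch of that retrieval-and-prompt-assembly step, not the study's actual implementation; the corpus contents and all function names here are hypothetical, and a simple TF-IDF score stands in for whatever retriever the authors used.

```python
# Minimal sketch of the retrieval step of a RAG pipeline: score corpus
# passages against the patient question with TF-IDF, then assemble a
# prompt that supplies the top passages as context for a local model.
# Corpus contents and names are illustrative only.
import math
from collections import Counter

corpus = [
    "Glaucoma is a group of eye conditions that damage the optic nerve.",
    "Elevated intraocular pressure is a major risk factor for glaucoma.",
    "Treatment options include eye drops, laser therapy, and surgery.",
]

def tokenize(text):
    # Lowercase and strip common punctuation so query/document terms match.
    return [w.strip(".,?!").lower() for w in text.split()]

def tf_idf_score(query, doc, docs):
    # Relevance = sum over query terms of (term frequency in doc) x
    # (inverse document frequency across the corpus).
    q_terms = tokenize(query)
    d_counts = Counter(tokenize(doc))
    n = len(docs)
    score = 0.0
    for t in q_terms:
        df = sum(1 for d in docs if t in tokenize(d))
        if df == 0:
            continue
        score += d_counts[t] * math.log((n + 1) / df)
    return score

def retrieve(query, docs, k=2):
    # Return the k passages most relevant to the query.
    return sorted(docs, key=lambda d: tf_idf_score(query, d, docs),
                  reverse=True)[:k]

def build_prompt(question, docs):
    # Prepend retrieved context to the question; the local SLM would
    # then generate an answer conditioned on this prompt.
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nPatient question: {question}\nAnswer:"

prompt = build_prompt("What treatment options exist for glaucoma?", corpus)
print(prompt)
```

Because retrieval grounds the model in a curated ophthalmology corpus and runs entirely on local infrastructure, this design addresses the privacy and accuracy concerns raised in the Background.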

Results:

The answers from the SLM were comparable in quality to those from ChatGPT 4.0, scoring 7.9 ± 1.2 and 7.4 ± 1.5 out of 9 points, respectively (p=0.13). Accuracy ratings were consistent overall and across all six glaucoma care domains. However, both models produced answers that were difficult for the average layperson to read, making them unsuitable for healthcare-related information.

Conclusions:

Both models generated accurate content, but the answers were considered challenging for the average layperson to understand, making them unsuitable for healthcare-related information. Given the specialized SLM’s comparable performance to the LLM, its high customization potential, lower cost, and ability to operate locally, it presents a viable option for deploying natural language processing in real-world ophthalmology clinical settings.


 Citation

Please cite as:

Faneli AC, Scherer R, Muralidhar R, Guerreiro M, Beniz L, Vilas Boas V, Costa D, Jammal AA, Medeiros FA

Performance of a Small Language Model Versus a Large Language Model in Answering Glaucoma Frequently Asked Patient Questions: Development and Usability Study

JMIR AI 2026;5:e72101

DOI: 10.2196/72101

PMID: 41493946

PMCID: 12772937


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.