Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jan 11, 2025
Date Accepted: Nov 30, 2025
Evaluating Multiple Input Strategies of Large Language Models for Gallbladder Polyps on Ultrasound: A Comparative Study
ABSTRACT
Background:
Large language models (LLMs) such as ChatGPT-4o and Claude 3.5 Sonnet can now read and interpret images, making it possible for LLMs to analyze and diagnose medical images.
Objective:
To evaluate the feasibility of using LLMs (ChatGPT-4o and Claude 3.5 Sonnet) to differentiate adenomatous from non-neoplastic gallbladder polyps (≥1.0 cm), compared with radiologists and with the joint guidelines issued by ESGAR, EAES, EFISDS, and ESGE.
Methods:
Ultrasound images and reports of gallbladder polyps ≥1.0 cm with pathological confirmation were retrospectively collected at a single hospital between January 2011 and January 2022. LLM performance was evaluated under 3 input strategies: direct image analysis (LLMs–Image), feature-based text analysis (LLMs–Text), and scoring model–based text analysis (LLMs–Model). For each strategy, we assessed intra- and inter-reader agreement and diagnostic performance, including sensitivity, specificity, accuracy, area under the receiver operating characteristic curve (AUC), and the unnecessary resection rate of non-neoplastic polyps (UNRR). The diagnostic performance of the LLMs under all three strategies was compared with the guideline, and performance under the LLMs–Model strategy was additionally compared with radiologists.
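The metrics named above can all be derived from a 2×2 confusion matrix. The sketch below is a minimal illustration, not the study's analysis code; the counts are hypothetical, and UNRR is assumed here to mean the fraction of non-neoplastic polyps flagged for (unnecessary) resection, i.e., false positives over all non-neoplastic polyps.

```python
# Illustrative sketch of the abstract's diagnostic metrics from a 2x2
# confusion matrix. Positive class: adenomatous polyp (resection advised).
# Counts are hypothetical, not the study's data.

def diagnostic_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                  # true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall agreement
    # Assumed definition: non-neoplastic polyps recommended for
    # resection, divided by all non-neoplastic polyps.
    unrr = fp / (fp + tn)
    return sensitivity, specificity, accuracy, unrr

# Hypothetical counts for a 223-polyp cohort (48 adenomatous).
sens, spec, acc, unrr = diagnostic_metrics(tp=45, fp=140, tn=35, fn=3)
```

Note that under this assumed definition, UNRR is simply 1 − specificity, which is why a strategy that lowers UNRR while keeping sensitivity comparable is clinically attractive.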
Results:
The study included 223 patients (aged 18-72 years, 59.2% female) with 48 adenomatous and 175 non-neoplastic polyps. Both intra- and inter-reader agreement ranked the three strategies as LLMs–Model > LLMs–Text > LLMs–Image. The sensitivity of the LLMs–Image and LLMs–Text strategies was significantly lower than that of the guideline (all P < .001). Under the Readers/LLMs–Model strategy, the accuracy of ChatGPT-4o, Claude 3.5 Sonnet, and radiologists was significantly higher than that of the guideline (0.35, 0.34, and 0.34 vs 0.22, all P < .01), and the UNRR was significantly lower (82%, 83%, and 83% vs 100%, all P < .01), while sensitivities were comparable to the guideline (0.94, 0.98, and 0.98 vs 1.00, all P > .05). No diagnostic performance indicator of the GPT–Model or Claude–Model differed significantly from those of radiologists (all P > .05).
Conclusions:
The ability of LLMs to recognize and interpret medical images needs further improvement. A text-based strategy built on a scoring system is currently the most appropriate diagnostic strategy for LLMs. Clinical Trial: NA
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.