Accepted for/Published in: JMIR Formative Research
Date Submitted: Sep 16, 2025
Date Accepted: Nov 18, 2025
Evaluating the Accuracy of AI-Model Generated Medical Information by ChatGPT and Gemini in Alignment with International Clinical Guidelines: A Comparison with Surviving Sepsis Campaign
ABSTRACT
Background:
The assessment of AI chatbots like ChatGPT and Google Gemini in providing medical information compared with international guidelines is a burgeoning area of research. These AI models are increasingly being considered for their potential to support clinical decision-making and patient education. However, their accuracy and reliability in delivering medical information that aligns with established guidelines remain under scrutiny.
Objective:
This study aims to assess the accuracy of medical information generated by ChatGPT and Gemini regarding their alignment with international guidelines for sepsis management.
Methods:
ChatGPT and Gemini were each asked 18 questions (Supplementary Data S1, S2, and S3) derived from the Surviving Sepsis Campaign international guidelines, and the responses were evaluated by seven independent intensive care physicians. Responses were scored as follows: 3 = correct, complete, and accurate; 2 = correct but incomplete or inaccurate; and 1 = incorrect. This scoring system was chosen to provide a clear and straightforward assessment of the accuracy and completeness of the responses. Fleiss' kappa was used to assess agreement among the evaluators, and the Mann-Whitney U test was used to test for a significant difference between the scores of the responses generated by ChatGPT and Gemini.
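The two statistics named above can be computed directly from a ratings matrix. As a minimal, self-contained sketch (the rating data below are hypothetical, not the study's actual scores), Fleiss' kappa is the chance-corrected mean pairwise agreement across raters, and the Mann-Whitney U statistic is a rank-based comparison of two independent score samples:

```python
from collections import Counter

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa for N subjects each rated by n raters into k categories.
    ratings: list of per-subject lists of category labels (all the same length)."""
    N = len(ratings)
    n = len(ratings[0])
    counts = [Counter(r) for r in ratings]  # n_ij: raters placing subject i in category j
    # Per-subject observed agreement P_i, then mean observed agreement P_bar
    P_i = [(sum(c[j] ** 2 for j in categories) - n) / (n * (n - 1)) for c in counts]
    P_bar = sum(P_i) / N
    # Marginal category proportions p_j give the chance agreement P_e
    p_j = [sum(c[j] for c in counts) / (N * n) for j in categories]
    P_e = sum(p ** 2 for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic (smaller of U1 and U2); ties count as 0.5."""
    u1 = sum(0.5 if x == y else (1.0 if x > y else 0.0) for x in a for y in b)
    return min(u1, len(a) * len(b) - u1)

# Hypothetical example: 3 questions, each scored 1-3 by three raters
example_ratings = [[3, 3, 2], [2, 2, 2], [3, 2, 2]]
kappa = fleiss_kappa(example_ratings, categories=[1, 2, 3])
u = mann_whitney_u([3, 2, 3, 3], [2, 2, 3, 2])
```

In practice, libraries such as `statsmodels` (`fleiss_kappa`) and `scipy.stats` (`mannwhitneyu`, which also returns the p-value) offer tested implementations; the hand-rolled versions above only illustrate what the two measures compute.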
Results:
ChatGPT provided 5 (28%) perfect responses, 12 (67%) nearly perfect responses, and 1 (5%) low-quality response, with substantial agreement among the evaluators (Fleiss' kappa = 0.656). Gemini provided 3 (17%) perfect responses, 14 (78%) nearly perfect responses, and 1 (5%) low-quality response, with moderate agreement among the evaluators (Fleiss' kappa = 0.582). The Mann-Whitney U test revealed no statistically significant difference between the two platforms (P = .4843).
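The verbal labels above ("substantial", "moderate") follow the widely used Landis and Koch (1977) benchmarks for kappa, which appear to be the scale applied here; a small lookup confirms the classification of both reported values:

```python
def landis_koch(kappa):
    """Map a kappa value to the Landis & Koch (1977) agreement label."""
    bands = [(0.00, "slight"), (0.21, "fair"), (0.41, "moderate"),
             (0.61, "substantial"), (0.81, "almost perfect")]
    label = "poor"  # kappa below 0 indicates poor (worse-than-chance) agreement
    for lower_bound, name in bands:
        if kappa >= lower_bound:
            label = name
    return label

# Reported values from the abstract
print(landis_koch(0.656))  # ChatGPT -> "substantial"
print(landis_koch(0.582))  # Gemini  -> "moderate"
```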
Conclusions:
ChatGPT and Gemini proved to be largely reliable tools for generating medical information. Despite their current limitations, both showed promise as complementary tools in patient education and clinical decision-making. The medical information generated by ChatGPT and Gemini still requires continuous evaluation of its accuracy, reliability, and alignment with international guidelines across medical domains, particularly in sepsis management.
Citation
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.