Currently submitted to: JMIR Formative Research
Date Submitted: Jan 1, 2026
Open Peer Review Period: Jan 1, 2026 - Feb 26, 2026
Warning: This is an unreviewed preprint. Readers are cautioned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected or withdrawn (in which case a note "no longer under consideration" will appear above).
Note: This is an author submission that has not been peer reviewed or edited. Unless marked as "accepted," preprints should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Comparative Evaluation of Large Language Models and Human Specialists in Acid-Base Disorder Interpretation and Sepsis Management: A Prospective Simulation Study
ABSTRACT
Background:
Large language models (LLMs) such as ChatGPT and Google Gemini have demonstrated promising capabilities in medical reasoning and clinical decision support. However, their comparative performance against human specialists in critical care scenarios, particularly acid-base disorder interpretation and sepsis management, remains inadequately characterized.
Objective:
This study aimed to compare the diagnostic and therapeutic decision-making performance of advanced AI models (ChatGPT-4 and Google Gemini), a consensus-based ensemble AI approach, and human medical specialists in acid-base disorder interpretation and sepsis management scenarios using validated clinical vignettes.
Methods:
A total of 45 clinical case vignettes (20 acid-base disorder cases and 25 sepsis management cases) were developed by an expert panel. Cases were independently evaluated by 20 human specialists (10 emergency medicine physicians and 10 anesthesiologists), ChatGPT-4, Google Gemini, and a simple majority-voting ensemble model; evaluation was blinded throughout. Performance metrics included diagnostic accuracy, treatment recommendation appropriateness, and Surviving Sepsis Campaign (SSC) hour-1 bundle compliance rates.
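The majority-voting ensemble described above can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the function name, the string-label input format, and the tie-handling are all hypothetical.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among model outputs.

    `answers` is a list of diagnosis labels, one per model (or per
    model run). Hypothetical sketch: with ties, Counter returns the
    first-inserted label; a real study protocol would need an
    explicit tie-break rule.
    """
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# Illustrative example: two of three model outputs agree.
consensus = majority_vote([
    "metabolic acidosis",
    "metabolic acidosis",
    "respiratory alkalosis",
])
print(consensus)  # -> metabolic acidosis
```

In practice such an ensemble would aggregate structured answers (e.g., primary disorder, compensation status) per vignette; free-text outputs would first need to be normalized to comparable labels.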
Results:
For acid-base disorder interpretation, the ensemble AI model achieved the highest overall accuracy (86.0%), followed by anesthesiologists (84.5%), ChatGPT-4 (83.7%), emergency physicians (83.2%), and Google Gemini (79.5%). In simple metabolic and respiratory disorders, AI models demonstrated comparable or superior performance to human experts (>90% accuracy). However, human specialists outperformed individual AI models in mixed acid-base disorders (humans: 75.5% vs ChatGPT: 68.5%, Gemini: 65.3%, P<.05). For sepsis management, SSC hour-1 bundle compliance was highest in the ensemble model (95.8%), followed by ChatGPT-4 (94.2%), human experts (91.5%), and Gemini (89.7%).
Conclusions:
Advanced LLMs demonstrate comparable performance to human specialists in straightforward acid-base and sepsis scenarios, with ensemble approaches showing potential for improved accuracy. However, human expertise remains superior in complex, atypical presentations requiring nuanced clinical judgment. These findings are limited to text-based simulations and require validation in real-world clinical environments.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.