
Accepted for/Published in: JMIR Medical Education

Date Submitted: Dec 21, 2023
Open Peer Review Period: Dec 29, 2023 - Feb 23, 2024
Date Accepted: Oct 26, 2024

The final, peer-reviewed published version of this preprint can be found here:

Impact of Clinical Decision Support Systems on Medical Students’ Case-Solving Performance: Comparison Study with a Focus Group

Montagna M, Chiabrando F, De Lorenzo R, Rovere Querini P, Medical Students


JMIR Med Educ 2025;11:e55709

DOI: 10.2196/55709

PMID: 40101183

PMCID: 11936302

Clinical decision support systems during teaching: a hands-on comparison

  • Marco Montagna; 
  • Filippo Chiabrando; 
  • Rebecca De Lorenzo; 
  • Patrizia Rovere Querini; 
  • Medical Students

ABSTRACT

Background:

Healthcare practitioners use Clinical Decision Support Systems (CDSS) as an aid in the crucial tasks of clinical reasoning and decision-making. Traditional CDSS include Online Repositories (OR) and Clinical Practice Guidelines (CPG). Recently, Large Language Models (LLMs) such as ChatGPT have emerged as potential alternatives; they have proven to be powerful, innovative tools, yet they are not devoid of worrisome risks.

Objective:

This study aims to explore how medical students perform in solving an evaluated clinical case using different CDSS tools.

Methods:

The authors randomly divided medical students into three groups (CPG: n=6, 38%; OR: n=5, 31%; ChatGPT: n=5, 31%) and assigned each group a different type of CDSS for guidance in answering prespecified questions, assessing how the students' speed and ability to resolve the same clinical case varied accordingly. External reviewers evaluated all answers on accuracy and completeness metrics (score: 1-5). The authors analyzed and categorized group scores according to the skill investigated: Differential Diagnosis, Diagnostic Workup, and Clinical Decision Making.

Results:

Answering time showed a trend for the ChatGPT group to be the fastest. The mean scores for completeness were CPG 4.0, OR 3.7, and ChatGPT 3.8 (p = 0.49); the mean scores for accuracy were CPG 4.0, OR 3.3, and ChatGPT 3.7 (p = 0.02). When scores were aggregated by the three skill domains, differences among the groups emerged more clearly: the CPG group performed best in nearly all domains and maintained almost perfect alignment between its completeness and accuracy scores.
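The abstract reports per-group means and p values but does not name the statistical test used. As a minimal illustration only, the sketch below computes group means and a one-way ANOVA F statistic over hypothetical 1-5 ratings; the individual ratings and the choice of ANOVA are assumptions for illustration, not the authors' data or method (only the group sizes of 6, 5, and 5 come from the abstract).

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA across lists of per-answer scores."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    # Between-group and within-group mean squares
    ms_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n - k)
    return ms_between / ms_within

# Hypothetical 1-5 ratings; only the group sizes (6, 5, 5) match the abstract.
ratings = {
    "CPG":     [4, 4, 5, 4, 3, 4],
    "OR":      [3, 4, 3, 3, 4],
    "ChatGPT": [4, 4, 3, 4, 3],
}
for group, scores in ratings.items():
    print(f"{group}: mean = {mean(scores):.1f}")
print(f"F = {one_way_f(list(ratings.values())):.2f}")
```

A larger F statistic (relative to the F distribution with k-1 and n-k degrees of freedom) corresponds to a smaller p value, i.e., stronger evidence of a between-group difference.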

Conclusions:

This hands-on session provided valuable insights into the potential benefits and pitfalls of LLMs in medical education and practice. It highlighted the critical need to include instruction in medical degree courses on how to take proper advantage of LLMs, as the potential for misuse is evident and real.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.