Accepted for/Published in: JMIR Medical Education

Date Submitted: Jun 16, 2025
Date Accepted: Dec 22, 2025

The final, peer-reviewed published version of this preprint can be found here:

Integrating Large Language Models Into Trauma Education for Medical Students: Randomized Controlled Pilot Trial

Pakkasjärvi N, Gustafsson J, Lehtonen-Smeds E

Integrating Large Language Models Into Trauma Education for Medical Students: Randomized Controlled Pilot Trial

JMIR Med Educ 2026;12:e79134

DOI: 10.2196/79134

PMID: 41843765

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Large Language Models in Trauma Education: A Randomized Controlled Pilot Trial on Decision-Making and Teamwork

  • Niklas Pakkasjärvi
  • Joona Gustafsson
  • Erno Lehtonen-Smeds

ABSTRACT

Background:

The exponential growth of medical knowledge presents a paradox for modern medical education. While access to information is immediate, applying it in a clinically meaningful way remains a challenge. Large language models (LLMs), such as ChatGPT, are widely used for information retrieval, yet their role in dynamic, high-pressure clinical learning remains poorly understood.

Objective:

To evaluate whether access to an LLM improves decision-making, teamwork, and confidence in trauma education for medical students.

Methods:

This randomized controlled pilot study involved 40 final-year medical students participating in a trauma simulation session. Students self-selected into teams of 4–6 and were randomized to either an LLM-assisted group (ChatGPT-4o mini) or a control group without LLM access. All teams completed 18 video-based trauma scenarios requiring time-sensitive clinical decisions. Prompting was unrestricted. Confidence and trauma exposure were assessed using pre/post questionnaires. Facilitators rated teamwork (1–5), decision accuracy, and response times. Knowledge retention was measured four weeks later via an online quiz.

Results:

Confidence in trauma management improved in both groups (p < .001), with larger gains in the non-LLM group (p = .02). LLM support did not enhance decision accuracy or speed and was associated with longer response times in some complex cases. Teams without LLMs demonstrated more active discussion and scored higher in teamwork ratings (median 5.0 vs. 3.5; p = .033). Students primarily used the LLM for fact-checking but reported vague or overly general responses. Knowledge retention was high across both groups and did not differ significantly (p = .332).
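The abstract does not name the statistical tests used; for ordinal facilitator ratings such as the 1–5 teamwork scores above, a Mann-Whitney U test with an exact permutation p-value is a common choice for comparing two small groups. A minimal standard-library Python sketch, using hypothetical ratings (not the study's data):

```python
from itertools import combinations

def mann_whitney_u(a, b):
    """U statistic for a vs b: count of pairs where a_i > b_j, ties count 0.5."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

def exact_p_two_sided(a, b):
    """Exact two-sided p-value by enumerating every way to split the pooled
    ratings into groups of the original sizes (feasible for small samples)."""
    pooled = a + b
    observed = mann_whitney_u(a, b)
    n_a = len(a)
    mu = n_a * len(b) / 2.0  # mean of U under the null hypothesis
    extreme = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        chosen = set(idx)
        grp_a = [pooled[i] for i in chosen]
        grp_b = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        total += 1
        if abs(mann_whitney_u(grp_a, grp_b) - mu) >= abs(observed - mu) - 1e-9:
            extreme += 1
    return extreme / total

# Hypothetical facilitator ratings on the 1-5 scale (illustrative only)
control = [5, 5, 4, 5]
llm     = [3, 4, 3, 4]
print(exact_p_two_sided(control, llm))  # → 0.0857... for these made-up values
```

The exact enumeration is practical only for small teams-per-arm counts like those in a pilot trial; larger samples would use a normal approximation or a statistics library.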

Conclusions:

While students appreciated the inclusion of AI, unstructured LLM use did not improve performance and may have disrupted group reasoning. This pilot study highlights the need for structured AI integration and targeted instruction in AI literacy. Simulation-based trauma education proved effective and well received, but optimizing the educational value of LLMs will require thoughtful curricular design. Further studies with more students are needed to define best practices for LLM use in clinical education. Clinical Trial: https://doi.org/10.17605/OSF.IO/7HF3V


Per the author's request the PDF is not available.