
Accepted for/Published in: JMIR Medical Education

Date Submitted: Jan 15, 2024
Date Accepted: May 6, 2024

The final, peer-reviewed published version of this preprint can be found here:

Assessing the Ability of a Large Language Model to Score Free-Text Medical Student Clinical Notes: Quantitative Study

Burke HB, Hoang A, Lopreiato JO, King H, Hemmer P, Montgomery M, Gagarin V

Assessing the Ability of a Large Language Model to Score Free-Text Medical Student Clinical Notes: Quantitative Study

JMIR Med Educ 2024;10:e56342

DOI: 10.2196/56342

PMID: 39118469

PMCID: 11327632

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Assessing the ability of a large language model to score free text medical student notes

  • Harry B Burke; 
  • Albert Hoang; 
  • Joseph O Lopreiato; 
  • Heidi King; 
  • Paul Hemmer; 
  • Michael Montgomery; 
  • Viktoria Gagarin

ABSTRACT

Background:

Teaching medical students the skills required to acquire, interpret, apply, and communicate clinical information is an integral part of medical education. A crucial aspect of this process involves providing students with feedback regarding the quality of their free-text clinical notes.

Objective:

The objective of this study was to assess the ability of ChatGPT 3.5 (ChatGPT) to score medical students’ free-text history and physical notes.

Methods:

This was a single-institution, retrospective study. Standardized patients learned a prespecified clinical case and, acting as the patient, interacted with medical students. Each student wrote a free-text history and physical note of their interaction. The students’ notes were scored independently by the standardized patients and by ChatGPT, a large language model (LLM), using a prespecified scoring rubric that consisted of 85 case elements. The measure of accuracy was percent correct.

Results:

The study population consisted of 168 first-year medical students, yielding a total of 14,280 scores. The standardized patient incorrect scoring rate (error) was 7.2% and the ChatGPT incorrect scoring rate was 1.0%; the ChatGPT error rate was thus 86% lower than the standardized patient error rate. The standardized patient mean number of incorrect scores, 85 (SD 74), was significantly higher than the ChatGPT mean of 12 (SD 11), p = 0.002.
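The arithmetic behind the reported totals and the 86% relative reduction can be verified directly from the figures in the abstract (a minimal sketch; variable names are illustrative, values are taken from the text above):

```python
# Values reported in the abstract
students = 168          # first-year medical students
rubric_elements = 85    # case elements in the scoring rubric

# Total scores = one score per rubric element per student
total_scores = students * rubric_elements
print(total_scores)  # → 14280

sp_error_rate = 0.072   # standardized patient incorrect scoring rate (7.2%)
gpt_error_rate = 0.010  # ChatGPT incorrect scoring rate (1.0%)

# Relative reduction in error rate: (7.2% - 1.0%) / 7.2% ≈ 86%
relative_reduction = (sp_error_rate - gpt_error_rate) / sp_error_rate
print(round(relative_reduction * 100))  # → 86
```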

Conclusions:

ChatGPT had a significantly lower error rate than the standardized patients. This suggests that an LLM can be used to score medical students’ notes. Furthermore, it is expected that, in the near future, LLM programs will provide real-time feedback to practicing physicians regarding their free-text notes. Generative pretrained transformer artificial intelligence programs represent an important advance in medical education and in the practice of medicine.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.