Accepted for/Published in: JMIR Medical Education

Date Submitted: Nov 17, 2024
Open Peer Review Period: Nov 18, 2024 - Jan 13, 2025
Date Accepted: Sep 22, 2025

The final, peer-reviewed published version of this preprint can be found here:

AI’s Accuracy in Extracting Learning Experiences From Clinical Practice Logs: Observational Study

Kondo T, Nishigori H

JMIR Med Educ 2025;11:e68697

DOI: 10.2196/68697

PMID: 41092407

PMCID: 12529426

AI’s Accuracy in Extracting Learning Experiences from Clinical Practice Logs: An Observational Study

  • Takeshi Kondo
  • Hiroshi Nishigori

ABSTRACT

Background:

Improving the quality of education in clinical settings requires an understanding of learners’ experiences and learning processes. However, documenting and analyzing these places a significant burden on learners and educators. If learners’ learning records could be analyzed automatically and their experiences visualized, their progress could be tracked in real time. Large language models (LLMs) may be useful for this purpose, although their accuracy has not been studied sufficiently.

Objective:

This study aimed to explore how accurately LLMs can predict the actual clinical experiences of medical students from the learning log data they recorded during clinical clerkship.

Methods:

This study was conducted at the Nagoya University School of Medicine. Learning log data from medical students participating in a clinical clerkship from April 22, 2024, to May 24, 2024, were used. The Model Core Curriculum (MCC) for Medical Education was employed as a template to extract experiences. OpenAI’s ChatGPT was selected for this task after a comparison with other LLMs. Prompts were created using the learning log data and provided to ChatGPT to extract experiences, which were then listed. A web application using GPT-4-turbo was developed to automate this process. The accuracy of the extracted experiences was evaluated by comparing them with the corrected lists provided by the students.
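The extraction pipeline described above, combining a learning log with the MCC template items into a single prompt, could be sketched as follows. The prompt wording, the helper name `build_extraction_prompt`, and the sample MCC items are illustrative assumptions; the authors’ actual prompts are not reproduced in the abstract.

```python
# Illustrative sketch of the prompt-construction step described in Methods.
# The wording and item list are assumptions, not the authors' implementation.

def build_extraction_prompt(learning_log: str, mcc_items: list[str]) -> str:
    """Combine a student's learning log with MCC template items into one
    prompt that asks the model to list the experiences the student had."""
    numbered = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(mcc_items))
    return (
        "From the following clinical clerkship learning log, list the numbers "
        "of every candidate experience the student actually had.\n\n"
        f"Candidate experiences (Model Core Curriculum):\n{numbered}\n\n"
        f"Learning log:\n{learning_log}\n"
    )

# Sending the prompt would then use an LLM client, e.g. (not executed here):
# response = client.chat.completions.create(
#     model="gpt-4-turbo",
#     messages=[{"role": "user",
#                "content": build_extraction_prompt(log_text, mcc_items)}],
# )
```

Listing the MCC items with stable numbers lets the model answer with item numbers, which simplifies turning free-text output into a machine-readable list of experiences.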

Results:

Twenty of the 33 sixth-year medical students participated in this study, yielding 40 datasets. The Jaccard index was 0.59, indicating moderate agreement. Sensitivity and specificity were 62.67% and 99.37%, respectively. These results suggest that GPT-4-turbo accurately identifies many of the actual experiences but misses some because the corresponding log entries lack sufficient detail or were never recorded by the students.
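The three reported metrics can all be computed from set overlaps between the model’s extracted experiences and the student-corrected reference list, relative to the full MCC item set. A minimal sketch, with the function name and inputs as assumptions:

```python
# Jaccard index, sensitivity, and specificity for one dataset:
# predicted = experiences the model extracted,
# actual    = the student-corrected reference list,
# all_items = every experience in the MCC template.

def evaluate_extraction(predicted: set[str], actual: set[str],
                        all_items: set[str]) -> tuple[float, float, float]:
    tp = len(predicted & actual)             # correctly extracted
    fn = len(actual - predicted)             # experienced but missed
    fp = len(predicted - actual)             # extracted but not experienced
    tn = len(all_items - (predicted | actual))  # correctly left out
    union = len(predicted | actual)
    jaccard = tp / union if union else 1.0
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return jaccard, sensitivity, specificity
```

With these definitions, the reported pattern (high specificity, moderate sensitivity) corresponds to the model rarely inventing experiences (few false positives among the many non-experienced MCC items) while missing a fair share of the experiences students actually had.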

Conclusions:

This study demonstrated that LLMs, such as GPT-4-turbo, can predict clinical experiences from learning logs with high specificity but moderate sensitivity. Future improvements in AI models and the combination of learning logs with other data sources, such as electronic medical records, may enhance the accuracy. Utilizing naturally accumulated data for assessment could reduce the burden on learners and educators while improving the quality of educational assessments in medical education.




© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.