Currently accepted at: JMIR Formative Research
Date Submitted: Nov 18, 2025
Open Peer Review Period: Nov 24, 2025 - Jan 19, 2026
Date Accepted: Mar 12, 2026
Date Submitted to PubMed: Mar 12, 2026
This paper has been accepted and is currently in production.
It will appear shortly at DOI: 10.2196/88008
The final accepted version (not yet copyedited) appears below.
An "ahead-of-print" version has been submitted to PubMed; see PMID: 41817111
Evaluating Source-Based Large Language Models for Preclinical Dermatology Education: A Comparative Study
ABSTRACT
Background:
Large language models (LLMs) are artificial intelligence systems that generate responses by predicting the output a user most likely wants. There are gaps in dermatology education that could benefit from the incorporation of LLMs, but efforts to do so have been hindered by concerns over the accuracy, transparency, and reproducibility of responses. Furthermore, LLMs have historically performed inconsistently on standardized medical questions, possibly because representative data are lacking in an LLM's training corpus. NotebookLM (NLM) by Google is a source-based LLM that advertises answers grounded in user-uploaded sources with reliable citations, and it may offer a solution to these shortcomings. It also has the potential to integrate student-developed notes into teaching, drawing on Vygotsky's Zone of Proximal Development and Cognitive Load Theory to enhance learning quality.
Objective:
To evaluate how the provision of extensive student-created study guides affects NotebookLM performance, with implications for its usability in the classroom, and to compare its performance with that of other industry LLMs in answering USMLE Step 1 dermatology questions.
Methods:
Four LLM configurations were tested: NLM with uploaded preclerkship study guides, NLM with an uploaded blank page, ChatGPT-4o mini, and Google Gemini 1.5 Flash. Each model completed three trials of 121 text-based USMLE Step 1 dermatology questions from the AMBOSS question bank. Models were evaluated for overall accuracy, accuracy by question difficulty, reproducibility of responses across trials, and agreement in answer selection between models. Data in each of these categories were gathered, charted, and analyzed using chi-square tests of independence and Fleiss kappa statistics; a minimal analysis sketch is given below.
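For readers unfamiliar with these tests, the sketch below shows how such an analysis can be run with standard Python tooling (scipy and statsmodels). The contingency counts and answer data are illustrative assumptions, not study data, and the variable names are hypothetical; it is not the authors' actual code.

```python
# Illustrative sketch of the abstract's statistical analysis.
# All counts and answer data below are hypothetical, not study data.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical (correct, incorrect) counts for two models, pooled across
# trials; a chi-square test of independence asks whether accuracy
# depends on which model produced the answers.
contingency = np.array([
    [312, 51],   # e.g., ChatGPT-4o mini
    [276, 87],   # e.g., NLM with notes (omissions excluded)
])
chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Hypothetical answer choices (A-E coded 0-4) from one model across
# three trials of 121 questions; Fleiss kappa treats the trials as
# "raters" and measures how reproducibly the model picks answers.
rng = np.random.default_rng(0)
choices = rng.integers(0, 5, size=(121, 3))   # (questions, trials)
table, _ = aggregate_raters(choices)          # (questions, categories)
print(f"Fleiss kappa = {fleiss_kappa(table):.3f}")
```

With random toy choices the kappa is near 0; the 0.939 reported in the Results would correspond to near-identical answer selection across the three trials.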
Results:
NLM with notes exhibited significantly more omissions (unanswered questions) than the other LLMs (10.5% vs ≤1.65%). When omissions were excluded from statistical analysis, ChatGPT-4o mini had the greatest accuracy (86%). NLM accuracy was unchanged with and without inputted study guides (76% vs 76%); among all LLMs tested, NLM with inputted material had the highest reproducibility (Fleiss kappa of 0.939).
Conclusions:
All the LLMs tested here performed better than previously reported in the literature, demonstrating rapid progression in LLM capabilities. NLM showed improved response completeness and reproducibility with user-inputted data, but not improved factual accuracy. One interpretation is that user-inputted data act more as an end-stage 'filter' than as material integrated into core reasoning processes, but the opaque nature of LLM cognition precludes definitive answers. More research is needed to harness the potential of source-based LLMs in the classroom under structured, theory-informed educational roles.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.