
Currently submitted to: JMIR Formative Research

Date Submitted: Nov 18, 2025
Open Peer Review Period: Nov 24, 2025 - Jan 19, 2026

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint. Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, is likely to undergo changes before final publication if accepted, and may have been rejected or withdrawn (in which case a note "no longer under consideration" will appear above).


Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.



Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless marked as "accepted", should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Evaluating Source-Based Large Language Models for Preclinical Dermatology Education: A Comparative Study

  • Frank Je-Min Lin
  • Sunghun Cho

ABSTRACT

Background:

Large language models (LLMs) are artificial intelligence systems that generate responses by predicting the output a user most likely desires. Dermatology education has gaps that could benefit from the incorporation of LLMs, but adoption has been hindered by concerns over the accuracy, transparency, and reproducibility of their responses. Furthermore, LLMs have historically performed inconsistently on standardized medical questions, possibly due to a lack of representative data in their training corpora. NotebookLM (NLM) by Google, a source-based LLM advertised as developing its answers from user-uploaded sources and providing reliable citations, may offer a solution to these shortcomings. It also has the potential to integrate student-developed notes into teaching and thereby draw on Vygotsky's Zone of Proximal Development, along with Cognitive Load Theory, to enhance learning quality.

Objective:

To evaluate how providing extensive student-created study guides affects NotebookLM performance, with implications for its usability in the classroom, and to compare its performance with that of other industry LLMs in answering USMLE Step 1 dermatology questions.

Methods:

Four LLM configurations were tested: NLM with uploaded preclerkship study guides, NLM with a blank document as its only source, ChatGPT-4o mini, and Google Gemini 1.5 Flash. Each model completed three trials of 121 text-based USMLE Step 1 dermatology questions from the AMBOSS question bank. Models were evaluated for overall accuracy, accuracy by question difficulty, reproducibility of responses across trials, and agreement in answer selection between models. Data for each category were gathered, charted, and analyzed using chi-squared tests of independence and Fleiss' kappa statistics.
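To make the analysis concrete, below is a minimal, illustrative Python sketch of the two statistics; it is not the authors' code. The answer data and the 2x2 accuracy table are hypothetical stand-ins, and the calls use the standard scipy and statsmodels implementations of the chi-squared test of independence and Fleiss' kappa.

    import numpy as np
    from scipy.stats import chi2_contingency
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Hypothetical answers from 3 trials of one model; "omit" marks an
    # unanswered question. Truncated to 4 questions for brevity.
    trials = [
        ["A", "C", "omit", "B"],  # trial 1
        ["A", "C", "D", "B"],     # trial 2
        ["A", "B", "D", "B"],     # trial 3
    ]

    # Fleiss' kappa treats each question as a subject and each trial as a rater.
    answers = np.array(trials, dtype=object).T   # shape: questions x trials
    counts, _ = aggregate_raters(answers)        # questions x answer categories
    kappa = fleiss_kappa(counts, method="fleiss")
    print(f"Inter-trial reproducibility (Fleiss' kappa): {kappa:.3f}")

    # Chi-squared test of independence on accuracy between two models, with an
    # illustrative 2x2 table: rows = models, columns = (correct, incorrect).
    table = [[104, 17],  # e.g., ~86% of 121 questions correct
             [92, 29]]   # e.g., ~76% of 121 questions correct
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")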

Results:

NLM with uploaded notes exhibited significantly more omissions (unanswered questions) than the other LLMs (10.5% vs ≤1.65%). When omissions were excluded from statistical analysis, ChatGPT-4o mini had the greatest accuracy (86%). NLM's accuracy was unchanged with and without uploaded study guides (76% vs 76%); among all models tested, NLM with uploaded material had the highest reproducibility (Fleiss' kappa of 0.939).
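For reference, Fleiss' kappa compares the mean observed agreement across questions, \bar{P}, with the agreement expected by chance, \bar{P}_e:

    \kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}

On the conventional Landis and Koch benchmarks, the reported kappa of 0.939 falls in the "almost perfect" range (0.81-1.00), indicating that NLM with uploaded material selected nearly identical answers across its three trials.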

Conclusions:

All of the LLMs tested here performed better than previously reported in the literature, demonstrating rapid progression in LLM capabilities. NLM showed improved response completeness and reproducibility with user-provided data, but not improved factual accuracy. One interpretation is that user-provided data are applied more as an end-stage 'filter' than integrated into the model's core reasoning, though the opaque nature of LLM cognition precludes definitive answers. More research is needed to harness the potential of source-based LLMs in the classroom under structured, theory-informed educational roles.


 Citation

Please cite as:

Lin FJM, Cho S

Evaluating Source-Based Large Language Models for Preclinical Dermatology Education: A Comparative Study

JMIR Preprints. 18/11/2025:88008

DOI: 10.2196/preprints.88008

URL: https://preprints.jmir.org/preprint/88008


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.