Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Apr 15, 2025
Date Accepted: Jul 31, 2025
Fine-Tuning Methods for Large Language Models in Clinical Medicine: Comparative Evaluation of Supervised Fine-Tuning and Direct Preference Optimization
ABSTRACT
Background:
Large language model (LLM) fine-tuning is the process of adjusting an out-of-the-box model's weights using a dataset of interest. Fine-tuning can be a powerful technique for improving model performance in fields such as medicine, where data access is restricted and LLMs may perform poorly out of the box.
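As a concrete illustration (not part of the original article), the following minimal PyTorch sketch shows the generic loop this definition describes: pretrained weights are updated by gradient descent on a small in-domain dataset. The tiny linear model and random tensors are stand-ins for a real LLM and a real clinical dataset.

```python
# Minimal sketch of the generic fine-tuning loop: start from pretrained
# weights and nudge them with gradient descent on a new dataset.
import torch
from torch import nn

model = nn.Linear(16, 2)  # stand-in for a pretrained network
# In practice the starting weights would be loaded, e.g.:
# model.load_state_dict(torch.load("pretrained.pt"))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a small in-domain dataset (e.g., labeled clinical notes).
inputs = torch.randn(32, 16)
labels = torch.randint(0, 2, (32,))

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()    # gradients with respect to the pretrained weights
    optimizer.step()   # adjust the weights toward the new domain
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```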
Objective:
In this study, we investigated the benefits of fine-tuning with supervised fine-tuning (SFT) and direct preference optimization (DPO) across a range of LLM applications in medicine.
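For reference (a note not in the original abstract), DPO as introduced by Rafailov et al (2023) fine-tunes a policy \(\pi_\theta\) directly on preference data by minimizing

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
\]

where \(\sigma\) is the logistic function, \(y_w\) and \(y_l\) are the preferred and dispreferred responses to prompt \(x\), \(\pi_{\mathrm{ref}}\) is a frozen reference model (typically the SFT model), and \(\beta\) controls how far the policy may drift from the reference.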
Methods:
We used Llama 3 8B (Llama3) and Mistral 7B v2 (Mistral2) to compare the performance of SFT and DPO across four datasets covering common natural language tasks in medicine. The tasks evaluated were simple classification, clinical reasoning, summarization, and clinical triage.
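As an illustrative sketch only (not the authors' code), the two-stage pipeline compared here can be run with the Hugging Face TRL library; the model ID, dataset files, and hyperparameters below are placeholders, and TRL argument names vary across releases, so the snippet should be checked against the installed version.

```python
# Illustrative two-stage pipeline: SFT followed by DPO, via Hugging Face TRL.
# Model ID, dataset files, and hyperparameters are placeholders, not the
# authors' actual configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # or a Mistral 7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Stage 1: supervised fine-tuning on prompt/response text
# (each row of sft_train.jsonl holding a "text" field).
sft_data = load_dataset("json", data_files="sft_train.jsonl", split="train")
sft_trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="sft-out", num_train_epochs=1),
    train_dataset=sft_data,
)
sft_trainer.train()

# Stage 2: direct preference optimization on (prompt, chosen, rejected)
# triples, initialized from the SFT checkpoint so DPO builds on SFT.
dpo_data = load_dataset("json", data_files="dpo_train.jsonl", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1, num_train_epochs=1),
    train_dataset=dpo_data,
    processing_class=tokenizer,
)
dpo_trainer.train()
```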
Results:
Clinical reasoning accuracy increased by 8% and 7% with DPO over SFT for Llama3 (P=.003) and Mistral2 (P=.004), respectively. Summarization quality, graded on a 5-point Likert scale, increased by 0.13 and 0.10 points for Llama3 and Mistral2 (P<.001 for both). For personnel triage, F1 scores increased by 0.16 and 0.14 for Llama3 and Mistral2 (P<.001 for both); for urgency triage, they changed by 0.12 for Llama3 (P<.001) and −0.02 for Mistral2 (P=1.0). For classification with text data, SFT significantly increased F1 scores over the base model, by 0.35 and 0.24 for Llama3 and Mistral2 (P<.001 for both), but DPO did not further improve on SFT.
Conclusions:
DPO significantly improved LLM performance on complex medical tasks such as clinical reasoning, summarization, and clinical triage, whereas SFT alone was sufficient for simple classification. Our results establish the role and importance of DPO fine-tuning for medical applications of LLMs and call attention to current software gaps that prevent the widespread deployment of this technique.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.