Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Apr 17, 2024
Date Accepted: Jun 2, 2024
Feasibility of Non-proprietary Large Language Models for Medical Documentation: A Study in German Healthcare Context
ABSTRACT
Background:
The use of Large Language Models (LLMs) as writing assistance for medical professionals is a promising approach to reducing the time required for documentation. However, practical, ethical, and legal challenges in many jurisdictions may complicate the use of the most powerful commercial LLM solutions.
Objective:
In this study, we assess the feasibility of using non-proprietary LLMs of the Generative Pretrained Transformer (GPT) variety as writing assistance for medical professionals in an on-premise setting with restricted compute resources, generating German medical text.
Methods:
We train four 7-billion-parameter model variants for our task and evaluate their performance using a powerful commercial LLM, Anthropic's Claude-v2, as a rater. Based on these results, we select the best-performing model and evaluate its practical usability on real-world data with two independent human raters.
Results:
In the automated evaluation with Claude-v2, BLOOM-CLP-German, a model trained from scratch on German text, achieved the best results. In the manual evaluation, 95 of the 102 reports (93.1%) generated by that model were rated by both human experts as usable as-is or with only minor changes.
Conclusions:
The results show that, even with restricted compute resources, it is possible to generate medical texts that are suitable for documentation in routine clinical practice, but language-specific issues must be considered when processing non-English text.