
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Apr 17, 2024
Date Accepted: Jun 2, 2024

The final, peer-reviewed published version of this preprint can be found here:

Viability of Open Large Language Models for Clinical Documentation in German Health Care: Real-World Model Evaluation Study

Heilmeyer F, Böhringer D, Reinhard T, Arens S, Lyssenko L, Haverkamp C

Viability of Open Large Language Models for Clinical Documentation in German Health Care: Real-World Model Evaluation Study

JMIR Med Inform 2024;12:e59617

DOI: 10.2196/59617

PMID: 39195570

PMCID: 11373371

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Feasibility of Non-proprietary Large Language Models for Medical Documentation: A Study in German Healthcare Context

  • Felix Heilmeyer; 
  • Daniel Böhringer; 
  • Thomas Reinhard; 
  • Sebastian Arens; 
  • Lisa Lyssenko; 
  • Christian Haverkamp

ABSTRACT

Background:

The use of large language models (LLMs) as writing assistants for medical professionals is a promising approach to reducing the time required for documentation. However, practical, ethical, and legal challenges in many jurisdictions complicate the use of the most powerful commercial LLM solutions.

Objective:

In this study, we assess the feasibility of using non-proprietary LLMs of the Generative Pretrained Transformer (GPT) variety as writing assistants for medical professionals, generating German medical text in an on-premises setting with restricted compute resources.

Methods:

We train four 7-billion-parameter model variants for our task and evaluate their performance using a powerful commercial LLM, Anthropic's Claude-v2, as a rater. Based on this evaluation, we select the best-performing model and assess its practical usability with two independent human raters on real-world data.
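The model-selection step described above can be sketched as follows. This is a minimal, hypothetical illustration — the model names, rating scale, and scores are placeholders, not the study's data: each candidate's judge ratings are averaged and the variant with the highest mean is selected.

```python
# Hypothetical sketch of picking the best-performing model variant
# from automated LLM-as-judge ratings (placeholder data, not the study's).
from statistics import mean

def select_best_model(ratings_by_model):
    """Return the model name whose mean judge rating is highest."""
    return max(ratings_by_model, key=lambda m: mean(ratings_by_model[m]))

# Example: four 7B-parameter candidates, each rated on a 1-5 scale.
ratings = {
    "model-a": [3, 4, 4],
    "model-b": [4, 5, 4],
    "model-c": [2, 3, 3],
    "model-d": [3, 3, 4],
}
best = select_best_model(ratings)  # "model-b"
```

In the study, the same idea is applied with Claude-v2 producing the ratings; the best-performing model is then passed on to the human evaluation.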

Results:

In the automated evaluation with Claude-v2, BLOOM-CLP-German, a model trained from scratch on German text, achieved the best results. In the manual evaluation by human experts, 95 of the 102 reports generated by that model (93.1%) were rated by both human raters as usable as is or with only minor changes.
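The reported usability rate follows directly from the counts given (95 usable of 102 generated reports):

```python
# Quick check of the reported usability rate: 95 of 102 reports were
# rated usable as is or with only minor changes by both raters.
usable, total = 95, 102
rate = round(100 * usable / total, 1)  # 93.1
```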

Conclusions:

The results show that even with restricted compute resources, it is possible to generate medical texts suitable for documentation in routine clinical practice, but language-specific issues must be considered when processing non-English text.



Per the author's request the PDF is not available.