Currently accepted at: JMIR AI
Date Submitted: Sep 18, 2025
Open Peer Review Period: Nov 4, 2025 - Dec 30, 2025
Date Accepted: Jan 18, 2026
This paper has been accepted and is currently in production.
It will appear shortly on 10.2196/84362
The final accepted version (not yet copyedited) follows below.
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
User Experience with On-Premise Large Language Models in a German University Medicine: Insights from a Survey-Based Evaluation
ABSTRACT
Background:
Large language models (LLMs) are increasingly used by employees at university hospitals for information retrieval and decision support. Self-hosted on-premise systems provide a secure environment that conforms with data privacy and security regulations for handling sensitive personal data. By automating standard procedures with LLM applications, time-consuming administrative tasks can be drastically reduced and the analysis of large data sets facilitated.
Objective:
The objective of our study was to gather feedback from registered AI users on the usability and common use cases of the on-premise LLM infrastructure we established at the University Medicine Magdeburg, in order to tailor the models to the needs of our facility.
Methods:
We developed an online questionnaire; registered AI users were informed via email and given access.
Results:
Of 322 registered AI users, 98 participated in the user survey. After filtering out incomplete responses, results from 91 participants remained for further analysis. Speed and quality received overall high approval ratings. A majority of users used the platform at least once per week, and 44% reported saving at least 30 minutes of work per week by using our AI platform. A diverse set of use cases was observed, depending on the users' professions; for example, healthcare and research professionals used the AI platform far more often for creation or analysis tasks than administrative staff.
Conclusions:
Our data indicate that the implementation of a self-hosted on-premise LLM has a positive influence on the diverse group of professionals working at a university hospital, saving time and meeting their individual needs.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.