Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jul 14, 2025
Date Accepted: Jan 1, 2026
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Tailoring Discharge Summaries to Caregivers' Needs: Part 1 of the 'Framework & Implementation of AI Tools' (FRAIT) Project
ABSTRACT
Background:
Discharge summaries are critical for continuity of care but often lack clarity and personalization, making it difficult for healthcare providers to retrieve essential information. While large language models (LLMs) offer potential for automating summary generation, their effectiveness depends heavily on the quality and contextual relevance of the prompts used.
Objective:
The objective of this study was to develop and evaluate a human-centered, replicable framework for creating individualized prompts that guide LLMs in generating discharge summaries tailored to the specific needs of healthcare providers.
Methods:
A multidisciplinary workshop was conducted at Ghent University Hospital with 26 healthcare providers from five institutions, including hospitals and general practitioner networks. Participants brainstormed ideal discharge summary formats, generating 170 ideas categorized into themes such as structure, medical history, medication, and follow-up. These insights informed the development of a 110-item structured questionnaire, distributed to 33 participants. Responses were used to generate personalized and generic prompts, refined using the CO-STAR framework (Context, Objective, Style, Tone, Audience, Response).
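As an illustration of the CO-STAR structure described above, the sketch below assembles the six components into a single LLM prompt. This is not the authors' implementation; the function name, field values, and section headings are hypothetical examples of how questionnaire-derived preferences might be slotted into the framework.

```python
# Minimal sketch (assumed structure, not the study's actual code): joining the
# six CO-STAR components into one prompt string for an LLM.

def build_costar_prompt(context, objective, style, tone, audience, response):
    """Concatenate the six CO-STAR components into a single prompt."""
    sections = [
        ("# CONTEXT", context),
        ("# OBJECTIVE", objective),
        ("# STYLE", style),
        ("# TONE", tone),
        ("# AUDIENCE", audience),
        ("# RESPONSE", response),
    ]
    return "\n\n".join(f"{header}\n{body}" for header, body in sections)


# Hypothetical values, loosely based on the sections the questionnaire emphasized.
prompt = build_costar_prompt(
    context="You are drafting a hospital discharge summary from the patient record below.",
    objective="Summarize the diagnosis, medication changes, and planned follow-up.",
    style="Structured clinical prose with short section headings.",
    tone="Neutral and factual.",
    audience="A general practitioner continuing the patient's care.",
    response="Use the sections: Medical history; Medication; Follow-up.",
)
```

In practice, each component could be filled per participant (or per provider type) from the questionnaire responses, yielding the individualized prompts the study describes.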
Results:
Structure/form (24%) and follow-up (16%) were the most emphasized categories in the workshop. The questionnaire confirmed the importance of follow-up and medical history sections. Prompts were generated per participant and by provider type, incorporating frequently selected responses. The CO-STAR framework improved prompt clarity and alignment with clinical expectations. Communication emerged as a new category during the workshop and was universally valued in the questionnaire.
Conclusions:
This study presents a novel, systematic approach to prompt engineering in clinical AI applications. By translating qualitative input into structured, individualized prompts, the framework enhances the usability and relevance of AI-generated discharge summaries. It offers a scalable model for integrating human-centered design into LLM deployment in healthcare, supporting more accurate, context-aware clinical documentation.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.