Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Feb 14, 2025
Open Peer Review Period: Feb 16, 2025 - Apr 13, 2025
Date Accepted: Aug 4, 2025

The final, peer-reviewed published version of this preprint can be found here:

Prompt Engineering in Clinical Practice: Tutorial for Clinicians

Liu J, Wang C, Liu S

Prompt Engineering in Clinical Practice: Tutorial for Clinicians

J Med Internet Res 2025;27:e72644

DOI: 10.2196/72644

PMID: 40955776

PMCID: 12439060

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Prompt Engineering in Clinical Practice: A Comprehensive Guide for Clinicians

  • Jialin Liu; 
  • Changyu Wang; 
  • Siru Liu

ABSTRACT

Large language models (LLMs) present a promising avenue for improving health care by enhancing clinical decision-making. However, their effectiveness relies heavily on accurate prompt engineering. This review focuses on understanding and optimizing prompt engineering techniques to guide LLMs toward clinically relevant and accurate responses. The aim is to equip clinicians with the tools to use LLMs fully in practice, ensuring patient-centered care while addressing ethical and operational challenges. Key principles of prompt engineering, such as specificity, contextual relevance, and iterative refinement, are essential for the effective use of LLMs. Techniques such as zero-shot, few-shot, and chain-of-thought prompting are analyzed in detail to give clinicians practical insight into how each approach influences LLM outcomes. The review also introduces a classification system for prompts, manual versus automatic and discrete versus continuous, to help clinicians apply these models more effectively in different clinical scenarios. Despite these advances, challenges remain in ensuring data privacy, maintaining clinical accuracy, and handling multimodal data. Effective prompt engineering can significantly improve the performance of LLMs in clinical practice by optimizing input design to yield more accurate, contextually relevant, and patient-specific outputs, enabling more efficient clinical decision-making. Clinicians need to protect privacy, ensure clinical accuracy, and integrate adaptive, contextual, and personalized prompts into real-time workflows. By refining prompt engineering practices, clinicians can take full advantage of LLM capabilities, ultimately improving patient outcomes and supporting the ethical integration of LLMs into health care.
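To make the distinction between the prompting techniques named above concrete, the following is a minimal illustrative sketch (not taken from the article) of how zero-shot, few-shot, and chain-of-thought prompts differ in construction. The `build_prompt` helper, the system preamble, and the clinical questions and worked examples are all hypothetical; real clinical use would require validated content and a reviewed deployment.

```python
def build_prompt(question, examples=None, chain_of_thought=False):
    """Assemble a prompt string for a clinical question (illustrative only).

    examples: optional list of (question, answer) pairs -> few-shot prompting.
    chain_of_thought: append a cue eliciting step-by-step reasoning.
    """
    parts = ["You are a clinical decision-support assistant."]
    if examples:
        # Few-shot: show worked examples before the target question so the
        # model can infer the expected format and level of detail.
        for ex_q, ex_a in examples:
            parts.append(f"Q: {ex_q}\nA: {ex_a}")
    parts.append(f"Q: {question}")
    if chain_of_thought:
        # Chain-of-thought: ask for intermediate reasoning before the answer.
        parts.append("A: Let's reason step by step before giving the answer.")
    else:
        parts.append("A:")
    return "\n\n".join(parts)


# Zero-shot: the task alone, with no examples.
zero_shot = build_prompt(
    "List first-line treatments for stage 1 hypertension."
)

# Few-shot: one hypothetical worked example guides the output format.
few_shot = build_prompt(
    "List first-line treatments for stage 1 hypertension.",
    examples=[
        ("List first-line treatments for type 2 diabetes.",
         "Lifestyle modification; metformin unless contraindicated."),
    ],
)

# Chain-of-thought: elicit intermediate reasoning for a multistep question.
cot = build_prompt(
    "A patient on warfarin reports new-onset melena. What are the next steps?",
    chain_of_thought=True,
)
```

The same target question yields three different inputs; in practice, the choice among them trades prompt length against the amount of guidance the model receives.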



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC-BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.