Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Dec 8, 2023
Open Peer Review Period: Dec 8, 2023 - Feb 2, 2024
Date Accepted: Feb 24, 2024

The final, peer-reviewed published version of this preprint can be found here:

Sivarajkumar S, Kelley M, Samolyk-Mazzanti A, Visweswaran S, Wang Y

An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study

JMIR Med Inform 2024;12:e55318

DOI: 10.2196/55318

PMID: 38587879

PMCID: 11036183

An Empirical Evaluation of In-Context Learning Strategies Through Prompt Engineering for Large Language Models in Zero-Shot Clinical Natural Language Processing

Sonish Sivarajkumar, Mark Kelley, Alyssa Samolyk-Mazzanti, Shyam Visweswaran, Yanshan Wang

ABSTRACT

Background:

Large language models (LLMs) have shown remarkable capabilities in natural language processing (NLP), especially in domains where labeled data are scarce or expensive, such as the clinical domain. However, to unlock the clinical knowledge embedded in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, an art and science that requires understanding the strengths and weaknesses of different LLMs and of different prompt engineering approaches.
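(For readers unfamiliar with in-context learning, the following is a minimal sketch of a zero-shot prompt for one of the tasks studied here, Clinical Sense Disambiguation. The template wording is an illustrative assumption, not the study's actual prompt, and `complete` is a hypothetical stand-in for any LLM completion API.)

    # Minimal sketch of a zero-shot prompt for clinical sense disambiguation.
    # The template is illustrative, not the exact prompt used in the study;
    # `complete` stands in for any LLM text-completion API.

    def build_zero_shot_prompt(note: str, abbreviation: str) -> str:
        return (
            "You are a clinical NLP assistant.\n"
            f"Clinical note: {note}\n"
            f"Question: In this note, what does the abbreviation "
            f"'{abbreviation}' stand for? Answer with the expanded term only."
        )

    prompt = build_zero_shot_prompt(
        note="Pt c/o SOB on exertion; started on a beta blocker.",
        abbreviation="SOB",
    )
    # answer = complete(prompt)  # hypothetical call; e.g., "shortness of breath"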

Objective:

The objective of this study was to assess the effectiveness of various prompt engineering techniques, including two newly introduced types, heuristic and ensemble prompts, for zero-shot and few-shot clinical information extraction using pretrained language models.
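(To illustrate the zero-shot versus few-shot distinction, the sketch below shows how a few-shot prompt extends a zero-shot one with worked examples. The sentences and wording are invented for exposition, not taken from the study.)

    # Hypothetical illustration of zero-shot vs. few-shot prompting for
    # coreference resolution; the example sentences are invented.

    task = "In the sentence below, what does the pronoun refer to?"

    zero_shot = (
        f"{task}\n"
        "Sentence: The patient saw Dr. Lee, and she prescribed an antibiotic.\n"
        "Answer:"
    )

    few_shot = (
        f"{task}\n"
        "Sentence: The nurse checked the IV line because it was leaking.\n"
        "Answer: the IV line\n\n"
        f"{task}\n"
        "Sentence: The patient saw Dr. Lee, and she prescribed an antibiotic.\n"
        "Answer:"
    )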

Methods:

This comprehensive experimental study evaluated different prompt types (simple prefix, simple cloze, chain-of-thought, anticipatory, heuristic, and ensemble) across five clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence Extraction, Coreference Resolution, Medication Status Extraction, and Medication Attribute Extraction. The performance of these prompts was assessed using three state-of-the-art language models: GPT-3.5, Bard (PaLM 2), and LLaMA-2. The study contrasted zero-shot with few-shot prompting and explored the effectiveness of ensemble approaches.
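(As a rough illustration of how the evaluated prompt types differ in form, the sketch below contrasts simplified prefix, cloze, and chain-of-thought templates for a Medication Status Extraction question; the wording is hypothetical, not the study's actual prompts.)

    # Simplified, hypothetical examples of three of the evaluated prompt
    # styles for a medication status question; not the study's templates.

    sentence = "Patient discontinued metformin due to GI upset."

    prefix_prompt = (
        f"{sentence}\n"
        "What is the status of the medication metformin "
        "(active, discontinued, or planned)?"
    )

    cloze_prompt = f"{sentence}\nThe status of metformin is ___."

    chain_of_thought_prompt = (
        f"{sentence}\n"
        "Let's think step by step: first identify the medication mentioned, "
        "then look for words indicating whether it was started, continued, "
        "or stopped, and finally state its status."
    )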

Results:

The study revealed that task-specific prompt tailoring is vital for the high performance of LLMs in zero-shot clinical NLP. In Clinical Sense Disambiguation, GPT-3.5 achieved an accuracy of 0.96 with heuristic prompts; in Biomedical Evidence Extraction, it achieved 0.94. Heuristic prompts, alongside chain-of-thought prompts, were highly effective across tasks. Few-shot prompting improved performance in complex scenarios, and ensemble approaches capitalized on the strengths of multiple prompts. GPT-3.5 consistently outperformed Bard and LLaMA-2 across tasks and prompt types.
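(One way to picture the ensemble approach is as a majority vote over the answers produced by different prompt styles. The sketch below assumes each prompt variant has already been run against the model and its answer normalized; the voting rule is an illustrative assumption, not necessarily the study's exact aggregation method.)

    from collections import Counter

    # Illustrative ensemble by majority vote over answers from different
    # prompt styles; assumes each variant has already been run and its
    # answer normalized to a label.

    def ensemble_vote(answers: list[str]) -> str:
        """Return the most common answer across prompt variants."""
        counts = Counter(a.strip().lower() for a in answers)
        return counts.most_common(1)[0][0]

    # e.g., answers from prefix, cloze, chain-of-thought, and heuristic prompts
    print(ensemble_vote(["discontinued", "Discontinued", "stopped", "discontinued"]))
    # -> "discontinued"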

Conclusions:

This study provides a rigorous evaluation of prompt engineering methodologies and introduces innovative techniques for clinical information extraction, demonstrating the potential of in-context learning in the clinical domain. These findings offer clear guidelines for future prompt-based clinical NLP research and make it easier for non-NLP experts to engage in clinical NLP advancements. To the best of our knowledge, this is one of the first works to empirically evaluate different prompt engineering approaches for clinical NLP in the era of generative AI, and we hope that it will inspire and inform future research in this area.


Citation

Please cite as:

Sivarajkumar S, Kelley M, Samolyk-Mazzanti A, Visweswaran S, Wang Y

An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study

JMIR Med Inform 2024;12:e55318

DOI: 10.2196/55318

PMID: 38587879

PMCID: 11036183


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.