
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Nov 15, 2023
Date Accepted: Feb 14, 2024

The final, peer-reviewed published version of this preprint can be found here:

An Entity Extraction Pipeline for Medical Text Records Using Large Language Models: Analytical Study

Wang W, Ma Y, Bi W, Lv H, Li Y

J Med Internet Res 2024;26:e54580

DOI: 10.2196/54580

PMID: 38551633

PMCID: 11015372

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

An Unsupervised Entity Extraction Approach for Medical Text Records Leveraging Large Language Model

  • Wei Wang; 
  • Yinyao Ma; 
  • Wenshuai Bi; 
  • Hanlin Lv; 
  • Yuxiang Li

ABSTRACT

Background:

Extracting valuable information from clinical text data is critical in disease progression studies. Traditional methods are often unable to cope with the complexity and volume of such data. The emergence of Large Language Models (LLMs) has opened new avenues, but they are challenged by critical issues such as data security and feature hallucination.

Objective:

The primary objective of this study was to use a modular LLM approach to extract features from clinical text data efficiently and accurately, addressing the specific challenges of data security and feature hallucination and improving on the limitations of traditional methods.

Methods:

In this study, we introduced a modular LLM approach to extract features from patient admission records. The process was divided into distinct steps: concept extraction, aggregation, question generation, corpus extraction, and Q&A scale extraction. Our method was evaluated on a dataset comprising 25,709 pregnancy cases from the People's Hospital of Guangxi Zhuang Autonomous Region, China, utilizing two low-parameter LLMs, Qwen-14B-Chat (QWEN) and Baichuan2-13B-Chat (BAICHUAN).
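The five pipeline steps listed above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the step functions and the `query_llm` helper are placeholder names standing in for the paper's prompt-based calls to a locally hosted model such as QWEN or BAICHUAN.

```python
# Hypothetical sketch of the modular extraction pipeline described above.
# query_llm stands in for a call to a locally hosted LLM (keeping data on-premises).

def query_llm(prompt: str) -> str:
    """Placeholder for a local LLM call such as QWEN or BAICHUAN."""
    return f"[LLM response to: {prompt[:40]}]"

def extract_concepts(record: str) -> list[str]:
    """Step 1 (concept extraction): pull candidate clinical concepts from one record."""
    return query_llm(f"List the clinical concepts in: {record}").split(";")

def aggregate(concepts_per_record: list[list[str]]) -> list[str]:
    """Step 2 (aggregation): merge per-record concepts into a deduplicated feature set."""
    return sorted({c.strip() for cs in concepts_per_record for c in cs if c.strip()})

def generate_questions(features: list[str]) -> dict[str, str]:
    """Step 3 (question generation): turn each aggregated feature into a direct question."""
    return {f: f"What is the patient's {f}?" for f in features}

def extract_answers(record: str, questions: dict[str, str]) -> dict[str, str]:
    """Steps 4-5 (corpus and Q&A extraction): query the record with each question."""
    return {f: query_llm(f"{q}\nRecord: {record}") for f, q in questions.items()}

records = ["G2P1, 28 years old; admitted for routine delivery"]
features = aggregate([extract_concepts(r) for r in records])
structured = [extract_answers(r, generate_questions(features)) for r in records]
```

Breaking the task into these small prompts is what lets low-parameter local models handle it: each step asks the model a narrow question instead of requesting the full structured record in one pass.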

Results:

The approach achieved high precision in feature extraction, with QWEN and BAICHUAN reaching average accuracies of 95.52% and 95.86%, respectively. The models demonstrated low null ratios (<0.21%) and varied time consumption. We also evaluated the INT4-quantized version of QWEN (QWEN (INT4)) on a consumer-grade GPU, which achieved even better performance (97.28% accuracy and a 0% null ratio).

Conclusions:

This study demonstrates the effectiveness of a modular LLM approach for extracting features from clinical text data with high accuracy and efficiency. By breaking the extraction process into manageable components, this approach offers a promising solution for textual feature extraction from patient documentation.



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.