
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Jun 11, 2024
Open Peer Review Period: Jun 24, 2024 - Aug 19, 2024
Date Accepted: Nov 19, 2024

The final, peer-reviewed published version of this preprint can be found here:

Autonomous International Classification of Diseases Coding Using Pretrained Language Models and Advanced Prompt Learning Techniques: Evaluation of an Automated Analysis System Using Medical Text

Zhuang Y, Zhang J, Li X, Liu C, Yu Y, Dong W, He K


JMIR Med Inform 2025;13:e63020

DOI: 10.2196/63020

PMID: 39761555

PMCID: 11747532

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Autonomous International Classification of Diseases Coding Using Pre-Trained Language Models and Advanced Prompt Learning Techniques

  • Yan Zhuang; 
  • Junyan Zhang; 
  • Xiuxing Li; 
  • Chao Liu; 
  • Yue Yu; 
  • Wei Dong; 
  • Kunlun He

ABSTRACT

Background:

Machine learning techniques for real-time ICD coding continue to face significant challenges, owing to small datasets, diverse writing styles, unstructured clinical records, and the need for semi-manual preprocessing.

Objective:

In this study, we developed a fully automatic pipeline from long free text to standard ICD codes that integrates a medical pre-trained BERT, a keyword-filtration BERT, fine-tuning, and task-specific prompt learning with mixed templates and soft verbalizers.

Methods:

We integrated four components into our framework: a medical pre-trained BERT, a keyword-filtration BERT, a fine-tuning phase, and task-specific prompt learning with mixed templates and soft verbalizers. The framework was validated on a multi-center medical dataset for automated ICD coding of 13 common cardiovascular diseases, and its performance was compared against RoBERTa, XLNet, and different BERT-based fine-tuning pipelines. We also evaluated the framework under different prompt learning and fine-tuning settings, and conducted few-shot learning experiments to assess its feasibility and efficacy on small to mid-sized datasets.
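The two prompt-learning components named above can be illustrated with a minimal, framework-agnostic sketch. The template wording, the soft-token markers, the hidden size, and the random projection matrix below are illustrative assumptions, not the authors' implementation; in practice the soft-token embeddings and the verbalizer projection are trained jointly with the language model.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, NUM_CLASSES = 768, 13  # 13 cardiovascular ICD labels, per the abstract


def mixed_template(record: str) -> str:
    # Mixed template: fixed ("hard") prompt text combined with trainable
    # ("soft") tokens. "[SOFT]" marks positions whose embeddings would be
    # learned end to end rather than taken from the vocabulary.
    return f"{record} [SOFT] [SOFT] The diagnosis code is [MASK]."


# Soft verbalizer: instead of mapping [MASK] to hand-picked label words,
# a trainable projection maps the [MASK] hidden state directly to class
# logits. W here is random; in training it is learned.
W = rng.normal(scale=0.02, size=(HIDDEN, NUM_CLASSES))


def soft_verbalizer_logits(mask_hidden_state: np.ndarray) -> np.ndarray:
    return mask_hidden_state @ W


h = rng.normal(size=(HIDDEN,))   # stand-in for a BERT [MASK] embedding
logits = soft_verbalizer_logits(h)
pred = int(np.argmax(logits))    # predicted ICD class index
```

Libraries such as OpenPrompt package these pieces (mixed templates, soft verbalizers, a classification head over the masked position) behind one API; the sketch only shows the data flow.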

Results:

Compared to traditional pre-training and fine-tuning pipelines, our approach achieved a significantly higher micro-F1 score of 0.838 and a macro-AUC of 0.958. Among different prompt learning setups, the mixed template and soft verbalizer combination yielded the best performance. Few-shot experiments indicated that performance stabilized and peaked at 500 shots.
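For readers unfamiliar with the reported metrics, the sketch below shows how micro-F1 and macro-AUC are conventionally computed for a multi-label coding task. The data are toy values for illustration only, not the study's results or evaluation code.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

# Toy predictions: 4 records, 3 ICD labels (multi-label, so a record
# may carry several codes at once).
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [0, 1, 0]])
y_score = np.array([[0.9, 0.2, 0.1],
                    [0.3, 0.8, 0.7],
                    [0.6, 0.1, 0.9],
                    [0.2, 0.4, 0.6]])
y_pred = (y_score >= 0.5).astype(int)  # threshold scores into hard labels

# Micro-F1 pools true/false positives and negatives across ALL labels,
# so frequent codes weigh more heavily.
micro_f1 = f1_score(y_true, y_pred, average="micro")

# Macro-AUC averages the per-label ROC AUC, weighting every code equally
# regardless of how rare it is.
macro_auc = roc_auc_score(y_true, y_score, average="macro")
```

The choice of both metrics is complementary: micro-F1 reflects overall coding accuracy, while macro-AUC guards against a model that ignores rare codes.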

Conclusions:

These findings underscore the effectiveness of combining prompt learning with fine-tuning for subtasks within pre-trained language models in medical practice. Our real-time ICD coding pipeline distills detailed medical free text into standardized labels, with potential applications in clinical decision-making.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.