Accepted for/Published in: JMIR Formative Research
Date Submitted: May 1, 2024
Date Accepted: Nov 24, 2024
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
An Assistant for ICD-10 Coding: Leveraging GPT-4 and RoBERTa for Term Extraction and Code Description Analysis
ABSTRACT
Background:
The International Classification of Diseases (ICD), developed by the World Health Organization (WHO), standardizes the coding of health conditions to support healthcare policy, research, and billing. While AI-based automation of this coding is promising, it still underperforms human coders in accuracy and lacks the explainability needed for adoption in medical settings.
Objective:
This study explores the potential of large language models (LLMs) to assist medical coders with International Classification of Diseases, 10th Revision (ICD-10) coding. It aims to augment human coding by first identifying lead terms and then applying retrieval-augmented generation (RAG)-based methods for computer-assisted coding.
Methods:
The explainability dataset from the CodiEsp challenge (CodiEsp-X) was used, featuring 1000 Spanish clinical cases annotated with ICD-10 codes. From CodiEsp-X, a new dataset was created using GPT-4, in which the full textual evidence annotations were replaced with lead term annotations. Phase 1 consisted of fine-tuning a named entity recognition (NER) RoBERTa transformer model for lead term extraction. In Phase 2, ICD codes were assigned to the identified lead terms using GPT-4 together with ICD code descriptions, a retrieval-augmented generation (RAG) approach.
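The abstract does not include the Phase 1 preprocessing details; as a rough illustration, converting lead term span annotations into token-level BIO labels for NER fine-tuning can be sketched as follows. The whitespace tokenizer, the `B-LEAD`/`I-LEAD` tag names, and the toy example are assumptions for illustration, not the authors' pipeline (which would use the RoBERTa subword tokenizer):

```python
# Hypothetical sketch: turn character-span lead-term annotations into BIO
# labels for token-classification (NER) fine-tuning. Tokenization here is a
# simple whitespace split; a real pipeline would align subword tokens instead.

def bio_labels(text, spans):
    """spans: list of (start, end) character offsets of annotated lead terms."""
    labels = []
    pos = 0
    for token in text.split():
        start = text.index(token, pos)   # locate token in the original text
        end = start + len(token)
        pos = end
        tag = "O"
        for s, e in spans:
            if start >= s and end <= e:  # token lies inside an annotated span
                tag = "B-LEAD" if start == s else "I-LEAD"
                break
        labels.append((token, tag))
    return labels

example = "Patient admitted with acute myocardial infarction yesterday"
# "acute myocardial infarction" occupies character offsets 22..49
print(bio_labels(example, [(22, 49)]))
```

The resulting (token, tag) pairs can then be fed to a standard token-classification fine-tuning loop.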
Results:
The fine-tuned RoBERTa model achieved an overall F1 score of 0.80 for ICD lead term extraction on the new CodiEsp-X-lead dataset in Phase 1. In Phase 2, the GPT-4-generated code descriptions improved recall from 55.1% to 82.3% for procedure code lookups in a code description database. However, relying solely on GPT-4 prompting and code descriptions to assign the correct ICD-10 code to identified lead terms resulted in poor performance on the CodiEsp-X task, with an F1 score of 0.305.
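The recall figures above measure whether the correct code appears among the candidates retrieved from the code description database. A minimal sketch of such a lookup and its recall metric is shown below; the lexical-overlap scoring, the toy code descriptions, and all function names are assumptions for illustration, not the paper's retrieval method:

```python
# Hypothetical sketch of the Phase 2 lookup step: retrieve candidate ICD-10
# codes for a lead term by token overlap with code descriptions, then check
# whether the gold code appears among the top-k candidates (recall@k).

def top_k_codes(lead_term, descriptions, k=3):
    query = set(lead_term.lower().split())
    scored = [
        (len(query & set(desc.lower().split())), code)
        for code, desc in descriptions.items()
    ]
    scored.sort(key=lambda t: (-t[0], t[1]))  # best overlap first, ties by code
    return [code for _, code in scored[:k]]

def recall_at_k(cases, descriptions, k=3):
    hits = sum(gold in top_k_codes(term, descriptions, k) for term, gold in cases)
    return hits / len(cases)

# Toy description database (real ICD-10 code titles, tiny sample).
icd_descriptions = {
    "J18.9": "pneumonia unspecified organism",
    "I21.9": "acute myocardial infarction unspecified",
    "E11.9": "type 2 diabetes mellitus without complications",
}
cases = [("acute myocardial infarction", "I21.9"), ("pneumonia", "J18.9")]
print(recall_at_k(cases, icd_descriptions))  # → 1.0 on this toy set
```

In the paper's setting, enriching the description database with GPT-4-generated paraphrases would give the query more surface forms to match against, which is consistent with the reported recall improvement.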
Conclusions:
While fine-tuning on the training data might have improved ICD coding performance, it was intentionally omitted to prioritize generalizability.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.