
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: May 16, 2023
Date Accepted: Oct 3, 2023

The final, peer-reviewed published version of this preprint can be found here:

Extracting Clinical Information From Japanese Radiology Reports Using a 2-Stage Deep Learning Approach: Algorithm Development and Validation

Sugimoto K, Wada S, Konishi S, Okada K, Manabe S, Matsumura Y, Takeda T


JMIR Med Inform 2023;11:e49041

DOI: 10.2196/49041

PMID: 37991979

PMCID: 10686535

Structuring Japanese Radiology Reports: Extracting Clinical Information Using a Two-stage Deep Learning Approach

  • Kento Sugimoto; 
  • Shoya Wada; 
  • Shozo Konishi; 
  • Katsuki Okada; 
  • Shiro Manabe; 
  • Yasushi Matsumura; 
  • Toshihiro Takeda

ABSTRACT

Background:

Radiology reports are usually written in a free-text format, which makes it challenging to reuse the clinical information they contain.

Objective:

The aim of this study was to develop an end-to-end deep learning system that extracts clinical information from radiology reports and converts it into a structured format for secondary use.

Methods:

Our system mainly consists of two deep learning modules: entity extraction and relation extraction. A state-of-the-art deep learning model was applied to each module. We trained and evaluated the models using 1,040 in-house chest and abdomen computed tomography (CT) reports annotated by medical experts, and we also evaluated the performance of the entire pipeline of our system. In addition, the ratio of annotated entities in the reports was measured to validate the coverage of the clinical information with our information model.
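The two-stage design described above can be illustrated schematically: a first stage tags clinical entities in the free text, and a second stage links those entities with relations to produce a structured record. The sketch below is a minimal, hypothetical stand-in for the deep learning models described in the paper; the function names, labels, and keyword-matching stubs are illustrative only.

```python
# Schematic 2-stage pipeline: entity extraction, then relation classification.
# The rule-based stubs below are hypothetical placeholders for the paper's
# deep learning models; only the overall pipeline structure is illustrated.

def extract_entities(report: str) -> list[dict]:
    """Stage 1: tag clinical entities (here: naive keyword matching)."""
    vocab = {"nodule": "Observation", "lung": "Anatomy"}
    entities = []
    for word, label in vocab.items():
        start = report.find(word)
        if start != -1:
            entities.append({"text": word, "label": label, "start": start})
    return entities

def classify_relations(entities: list[dict]) -> list[tuple]:
    """Stage 2: link entity pairs (here: each Observation to each Anatomy)."""
    relations = []
    for obs in (e for e in entities if e["label"] == "Observation"):
        for anat in (e for e in entities if e["label"] == "Anatomy"):
            relations.append((obs["text"], "located_in", anat["text"]))
    return relations

def structure_report(report: str) -> dict:
    """End-to-end pipeline: free-text report in, structured record out."""
    entities = extract_entities(report)
    return {"entities": entities, "relations": classify_relations(entities)}

result = structure_report("A nodule is seen in the right lung.")
```

Chaining the two stages this way is also why end-to-end performance is evaluated separately: errors made in entity extraction propagate into relation classification.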

Results:

The micro F1-scores of our best-performing model for entity extraction and relation classification were 96.1% and 97.4%, respectively. The micro F1-score of the end-to-end system, which is a measure of the performance of the entire pipeline of our system, was 91.9%. Our system showed encouraging results in the conversion of free-text radiology reports into a structured format. The coverage of clinical information in the reports was 96.2%.
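The micro F1-score reported above pools true positives, false positives, and false negatives across all classes before computing precision and recall, so frequent entity classes carry more weight than rare ones. A minimal sketch of the metric, using illustrative counts rather than the paper's data:

```python
# Micro-averaged F1: aggregate TP/FP/FN over all classes, then compute
# precision, recall, and their harmonic mean. Counts are illustrative.

def micro_f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 960 correctly extracted entities, 40 spurious, 38 missed
score = micro_f1(tp=960, fp=40, fn=38)  # ~0.961
```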

Conclusions:

Our end-to-end deep learning system can accurately and comprehensively extract clinical information from chest and abdomen CT reports.




© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.