
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: May 22, 2019
Date Accepted: Oct 22, 2019

The final, peer-reviewed published version of this preprint can be found here:

Efficient Reuse of Natural Language Processing Models for Phenotype-Mention Identification in Free-text Electronic Medical Records: A Phenotype Embedding Approach

Wu H, Hodgson K, Dyson S, Morley KI, Ibrahim ZM, Iqbal E, Stewart R, Dobson RJ, Sudlow C

JMIR Med Inform 2019;7(4):e14782

DOI: 10.2196/14782

PMID: 31845899

PMCID: 6938594

Efficiently Reusing Natural Language Processing Models for Phenotype Identification in Free-text Electronic Medical Records: Methodological Study

  • Honghan Wu; 
  • Karen Hodgson; 
  • Sue Dyson; 
  • Katherine I Morley; 
  • Zina M Ibrahim; 
  • Ehtesham Iqbal; 
  • Robert Stewart; 
  • Richard JB Dobson; 
  • Cathie Sudlow

ABSTRACT

Background:

Considerable effort has gone into using automated approaches, such as natural language processing (NLP), to mine or extract data from free-text medical records and construct comprehensive patient profiles for delivering better health care. Reusing NLP models in new settings, however, remains cumbersome: it requires iterative validation and/or retraining on new data to achieve convergent results.

Objective:

The aim of this work is to minimise the effort involved in reusing NLP models on free-text medical records.

Methods:

We formally define and analyse the model adaptation problem in phenotype identification tasks. We identify "duplicate waste" and "imbalance waste", which collectively impede efficient model reuse. We propose a concept-embedding-based approach to minimise these sources of waste without the need for labelled data from new settings.
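The core idea of selecting a reusable model by comparing language-pattern landscapes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each pre-trained model's corpus and the new task are each summarised as a single phenotype-embedding vector, and the model whose embedding is most similar (by cosine similarity) to the new task's is selected for reuse. The model names and toy vectors are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pick_best_model(task_embedding, model_embeddings):
    """Return the name of the pre-trained model whose phenotype embedding
    is closest (cosine similarity) to the new task's embedding."""
    return max(model_embeddings,
               key=lambda name: cosine(task_embedding, model_embeddings[name]))

# Toy 4-dimensional embeddings; real phenotype embeddings would be
# learned from mention contexts in each corpus.
models = {
    "stroke_model":   np.array([0.9, 0.1, 0.0, 0.0]),
    "dementia_model": np.array([0.1, 0.9, 0.1, 0.0]),
}
new_task = np.array([0.8, 0.2, 0.1, 0.0])
best = pick_best_model(new_task, models)  # "stroke_model"
```

In this sketch, a high similarity score suggests the selected model can be applied directly, while a low score would flag the task as needing validation or retraining for novel language patterns.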

Results:

We conduct experiments on data from a large mental health registry to reuse NLP models in four phenotype identification tasks. The proposed approach can choose the best model for a new task, identifying up to 76% of phenotype mentions without the need for validation and model retraining, and with very good performance (93-97% accuracy). It can also provide guidance for validating and retraining the selected model for novel language patterns in new tasks, saving around 80% of the effort required in “blind” model-adaptation approaches.

Conclusions:

Adapting pre-trained NLP models for new tasks can be more efficient and effective if the language pattern landscapes of old settings and new settings can be made explicit and comparable. Our experiments show that the phenotype embedding approach is an effective way to model language patterns for phenotype identification tasks and that its use can guide efficient NLP model reuse.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.