Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Aug 10, 2020
Date Accepted: Nov 16, 2020
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Enhancing BERT-based clinical semantic textual similarity model via character-level and entity-level representations
ABSTRACT
Background:
With the widespread adoption of electronic health records (EHRs), the quality of healthcare has improved. However, EHRs have also introduced problems, such as the growing use of copy-and-paste and templates, which lowers the quality of their content. To help minimize data redundancy across documents, Harvard Medical School and Mayo Clinic organized a national natural language processing (NLP) clinical challenge (n2c2) on clinical semantic textual similarity (ClinicalSTS) in 2019. The task of this challenge is to compute the semantic similarity between clinical text snippets.
Objective:
We aim to investigate novel methods to model ClinicalSTS and analyze the results.
Methods:
We propose a semantically enhanced text matching model for the 2019 n2c2/OHNLP challenge on ClinicalSTS. The model includes three representation modules that encode clinical text snippet pairs at different levels: 1) a character-level representation module based on a convolutional neural network (CNN) to tackle the out-of-vocabulary (OOV) problem in NLP; 2) a sentence-level representation module that adopts the pre-trained language model BERT to encode clinical text snippet pairs; and 3) an entity-level representation module to model clinical entity information in the snippets. For the entity-level representation, we compare two methods: one encodes entities by the sequence of entity type labels corresponding to the text snippet (called entity-I), while the other encodes entities by their representations in MeSH (Medical Subject Headings), a knowledge graph (KG) in the medical domain (called entity-II).
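The three-level fusion described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the character-CNN uses random untrained weights, the sentence encoder is a stand-in for BERT's pooled output, and the entity encoder follows the entity-I idea (a bag of entity-type labels). All function names, dimensions, and the final cosine-similarity scorer are assumptions for illustration.

```python
# Hypothetical sketch: fuse character-, sentence-, and entity-level
# representations of a clinical snippet pair and score their similarity.
import numpy as np

rng = np.random.default_rng(0)
CHAR_VOCAB = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
CHAR_EMB = rng.standard_normal((len(CHAR_VOCAB), 16))  # char embedding table
CONV_W = rng.standard_normal((3, 16, 32))              # width-3 conv filters

def char_cnn_encode(text):
    """Character level: embed chars, apply a 1-D convolution, max-pool."""
    ids = [CHAR_VOCAB[c] for c in text.lower() if c in CHAR_VOCAB]
    emb = CHAR_EMB[ids]                                   # (len, 16)
    windows = np.stack([emb[i:i + 3] for i in range(len(ids) - 2)])
    feats = np.einsum("nwd,wdf->nf", windows, CONV_W)     # (n_windows, 32)
    return feats.max(axis=0)                              # max-pool -> (32,)

def sentence_encode(text):
    """Sentence level: stand-in for a BERT sentence vector (char counts)."""
    v = np.zeros(len(CHAR_VOCAB))
    for c in text.lower():
        if c in CHAR_VOCAB:
            v[CHAR_VOCAB[c]] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def entity_encode(entity_types, n_types=4):
    """Entity level (entity-I style): bag of entity-type labels,
    e.g. 0 = drug, 1 = problem (hypothetical label inventory)."""
    v = np.zeros(n_types)
    for t in entity_types:
        v[t] += 1.0
    return v

def pair_similarity(a, b, ents_a, ents_b):
    """Concatenate the three levels per snippet, then cosine similarity."""
    ra = np.concatenate([char_cnn_encode(a), sentence_encode(a),
                         entity_encode(ents_a)])
    rb = np.concatenate([char_cnn_encode(b), sentence_encode(b),
                         entity_encode(ents_b)])
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

sim = pair_similarity("take one tablet daily",
                      "take two tablets daily", [0], [0])
```

In a trained system, the concatenated pair representation would feed a regression head optimized against the gold similarity scores rather than a raw cosine.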
Results:
We conducted experiments on the ClinicalSTS corpus of the 2019 n2c2/OHNLP challenge to evaluate model performance. Using only BERT to encode text snippet pairs, the model achieves a Pearson correlation coefficient (PCC) of 0.848. When the character-level representation or the entity-level representation is added individually, the PCC increases to 0.857 and 0.854 (entity-I)/0.859 (entity-II), respectively. When both the character-level and entity-level representations are added, the PCC further increases to 0.861 (entity-I) and 0.868 (entity-II).
Conclusions:
Experimental results show that both character-level and entity-level information can effectively enhance the BERT-based STS model.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.