Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jun 21, 2020
Date Accepted: Jan 31, 2021
Date Submitted to PubMed: Feb 5, 2021
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
An Intrinsic Evaluation of Learned COVID-19 Concepts using Open-Source Word Embedding Sources
ABSTRACT
Background:
Scientists are developing new computational methods and prediction models to better understand COVID-19 prevalence, treatment efficacy, and patient outcomes in clinical settings. These efforts could be improved by leveraging documented, COVID-19-related symptoms, findings, and disorders from clinical text sources in the electronic health record. Word embeddings can identify terms related to these clinical concepts from both the biomedical and non-biomedical domains and are being shared with the open-source community. However, it is unclear how useful openly available word embeddings are for developing lexicons of COVID-19-related concepts.
Objective:
Given an initial lexicon of COVID-19-related terms, we aimed to characterize the terms returned by similarity across several open-source word embeddings and to determine common semantic and syntactic patterns between the queried COVID-19 terms and the returned terms, specific to each word embedding source.
Methods:
We compared seven openly available word embedding sources. Using a series of COVID-19-related terms for associated symptoms, findings, and disorders, we conducted an inter-annotator agreement study to determine how accurately the most semantically similar returned terms could be classified by three annotators according to semantic type (negated, synonym, symptom/sign, disease/disorder, hyponym, hypernym, qualifier, anatomical location, therapeutic, or other). We also conducted a qualitative study of the queried COVID-19 terms and their returned terms to identify patterns useful for constructing lexicons.
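The core query against each embedding source is a nearest-neighbor lookup by cosine similarity. A minimal sketch of that lookup, using a toy vocabulary and made-up 4-dimensional vectors (the actual embedding sources, terms, and dimensions in the study differ):

```python
import numpy as np

# Toy embedding table standing in for an open-source word embedding source;
# terms and vectors are illustrative only, not taken from any real model.
vocab = ["cough", "dry cough", "fever", "fatigue", "ventilator"]
vectors = np.array([
    [0.9, 0.1, 0.0, 0.2],
    [0.8, 0.2, 0.1, 0.2],
    [0.1, 0.9, 0.1, 0.0],
    [0.2, 0.7, 0.3, 0.1],
    [0.0, 0.1, 0.9, 0.5],
])

def most_similar(query, k=3):
    """Return the k vocabulary terms with highest cosine similarity to query."""
    q = vectors[vocab.index(query)]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)  # descending similarity
    return [(vocab[i], float(sims[i])) for i in order if vocab[i] != query][:k]

print(most_similar("cough"))  # "dry cough" ranks first in this toy table
```

In the study, the returned term lists produced by queries like this are what the three annotators classified by semantic type.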
Results:
We observed high pairwise inter-annotator agreement (Cohen's kappa) for symptoms (0.86 to 0.99), findings (0.93 to 0.99), and disorders (0.93 to 0.99). Word embedding sources built on characters tend to return more lexical variants and synonyms; in contrast, embeddings built on tokens more often return a variety of semantic types. Word embedding sources queried with an adjective phrase rather than a single term (e.g., dry cough vs. cough; muscle pain vs. pain) are more likely to return qualifiers of the same semantic type (e.g., “dry” returns consistency qualifiers such as “wet” and “runny”).
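Pairwise Cohen's kappa, as reported above, corrects raw agreement between two annotators for agreement expected by chance. A small self-contained sketch with hypothetical semantic-type labels (the label sequences below are invented for illustration, not data from the study):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' categorical labels: (po - pe) / (1 - pe)."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical semantic-type labels for ten returned terms.
ann1 = ["synonym", "symptom", "symptom", "qualifier", "synonym",
        "disorder", "symptom", "other", "synonym", "symptom"]
ann2 = ["synonym", "symptom", "disorder", "qualifier", "synonym",
        "disorder", "symptom", "other", "synonym", "symptom"]
print(round(cohens_kappa(ann1, ann2), 2))  # 0.87: one disagreement in ten labels
```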
Conclusions:
Word embeddings are a valuable technology for learning related terms, including synonyms. When leveraging openly available word embedding sources, the choices made in constructing the embeddings can significantly influence the terms returned.