Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Aug 15, 2024
Date Accepted: Nov 30, 2024
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Classifying Unstructured Text in Electronic Health Records for Mental Health Prediction Models using a Large Language Model
ABSTRACT
Background:
Prediction models have demonstrated utility across a range of applications in medicine, including the use of electronic health record (EHR) data to identify hospital readmission and mortality risk. Large language models (LLMs) can transform unstructured EHR text into structured features, which can then be integrated into statistical prediction models, ensuring that the results are both clinically meaningful and interpretable.
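To make this concrete, the sketch below shows one way LLM-derived term labels could be turned into binary features for an interpretable statistical model. The data, feature names, and model choice are illustrative assumptions, not the study's pipeline.

```python
# Illustrative sketch only: turning LLM-derived term labels into binary
# features for a downstream statistical model. All names and values here
# are hypothetical, not the study's data or pipeline.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Suppose an LLM has flagged which term categories appear in each record.
records = pd.DataFrame({
    "has_mood_disorder_term":  [1, 0, 1, 0],
    "has_cardiovascular_term": [0, 1, 1, 0],
    "readmitted_within_30d":   [1, 0, 1, 0],  # outcome label
})

X = records.drop(columns="readmitted_within_30d")
y = records["readmitted_within_30d"]

# An interpretable statistical model over the LLM-derived features:
# each coefficient maps to a named, clinically meaningful feature.
model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0])))
```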
Objective:
This study aims to compare the classification decisions made by clinical experts with those generated by a state-of-the-art LLM, using terms extracted from a large EHR dataset of individuals with mental health disorders seen in emergency departments.
Methods:
Using a dataset from the EHR systems of more than 50 healthcare provider organizations in the United States from 2016 to 2021, we extracted all clinical terms that appeared in at least 1,000 records of individuals admitted to the emergency department for a mental health-related problem, from a source population of over six million emergency department episodes. Two experienced mental health clinicians (one medically trained psychiatrist and one clinical psychologist) reached consensus on the classification of EHR terms and diagnostic codes into categories. We evaluated an LLM’s agreement with clinical judgment across three classification tasks: (1) classifying terms as “mental health” or “physical health,” (2) classifying mental health terms into one of 42 prespecified categories, and (3) classifying physical health terms into one of 19 prespecified broad categories.
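A minimal sketch of how such a classification task could be issued to an LLM programmatically is shown below. It assumes an OpenAI-style chat completion API (openai Python package, v1.x); the model name, prompt wording, and category labels are placeholders, as the abstract does not specify them.

```python
# Illustrative sketch of task (1): asking an LLM to label one EHR term.
# Assumes an OPENAI_API_KEY in the environment; model name and prompt
# wording are placeholder assumptions, not the study's actual setup.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Classify the following clinical term from an electronic health record "
    'as exactly one of: "mental health" or "physical health". '
    "Reply with the label only.\n\nTerm: {term}"
)

def classify_term(term: str, model: str = "gpt-4") -> str:
    """Return the LLM's label for a single clinical term."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(term=term)}],
        temperature=0,  # favor reproducible labels for classification
    )
    return response.choices[0].message.content.strip().lower()

# e.g., classify_term("panic attack") would be expected to return "mental health"
```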
Results:
There was high agreement between the LLM and clinical experts when categorizing 4,553 terms as “mental health” or “physical health” (κ = 0.77). There was considerable variability in LLM-clinician agreement on classification of physical health terms (average κ = 0.66, range 0.34-0.86) and mental health terms (average κ = 0.61, range 0-1).
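For reference, Cohen's κ corrects the observed agreement p_o for chance agreement p_e via κ = (p_o - p_e) / (1 - p_e). The snippet below shows how such agreement could be computed for paired labels; the labels are hypothetical, not the study's data.

```python
# Hypothetical illustration of quantifying LLM-clinician agreement with
# Cohen's kappa; these labels are made up, not the study's data.
from sklearn.metrics import cohen_kappa_score

clinician = ["mental health", "physical health", "mental health",
             "physical health", "mental health"]
llm       = ["mental health", "physical health", "physical health",
             "physical health", "mental health"]

# Agreement on 4 of 5 labels, corrected for chance -> kappa = 0.62 here.
print(f"kappa = {cohen_kappa_score(clinician, llm):.2f}")
```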
Conclusions:
The LLM displayed high agreement with clinical experts when classifying EHR terms into certain mental health or physical health categories. However, agreement with clinical experts varied considerably within both mental health and physical health category sets. Importantly, LLMs offer an alternative to manual human coding, with great potential to create interpretable features for prediction models.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.