Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jun 6, 2023
Date Accepted: Aug 20, 2024

The final, peer-reviewed published version of this preprint can be found here:

Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study

Lalor JP, Levy DA, Jordan HS, Smirnova JK, Yu H

J Med Internet Res 2024;26:e49704

DOI: 10.2196/49704

PMID: 39405109

PMCID: 11522659

Evaluating Expert–Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: An Observational Study

  • John P Lalor; 
  • David A Levy; 
  • Harmon S Jordan; 
  • Jenni Kim Smirnova; 
  • Hong Yu

ABSTRACT

Background:

Studies have shown that patients have difficulty understanding medical jargon in electronic health record (EHR) notes. In creating the NoteAid dictionary of medical jargon for patients, a panel of medical experts selected terms they perceived as needing definitions for patients.

Objective:

To determine whether experts and laypeople agree on what constitutes jargon.

Methods:

Using an observational study design, we evaluated agreement between 6 medical experts and 270 untrained laypeople (crowdsource workers) in identifying jargon terms in 20 sentences from EHR notes, which together contained 325 potential jargon terms.

Results:

There was good agreement among medical experts (Fleiss’ kappa = 0.781, 95% CI: 0.753–0.809) and fair agreement among laypeople (Fleiss’ kappa = 0.590, 95% CI: 0.589–0.591). The medical experts had high sensitivity (91.7%, 95% CI: 90.1–93.3%) and specificity (88.2%, 95% CI: 86.0–90.5%) in identifying jargon terms as determined by the laypeople. The proportion of terms marked as jargon by different demographic groups among the laypeople ranged from 17.7% (95% CI: 10.7–24.7%) to 30.9% (95% CI: 28.1–33.8%).
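The agreement statistic reported above, Fleiss' kappa, measures chance-corrected agreement among a fixed number of raters. As a minimal sketch (with hypothetical toy ratings, not the study's data), the computation can be written as:

```python
def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning item i to category j.
    Every row must sum to the same number of raters n."""
    N = len(counts)                 # number of rated items
    n = sum(counts[0])              # raters per item
    total = N * n                   # total ratings given
    # Mean per-item agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # Chance agreement P_e from marginal category proportions
    k = len(counts[0])
    p = [sum(row[j] for row in counts) / total for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 6 raters label 4 terms as [jargon, not jargon]
ratings = [[6, 0], [0, 6], [5, 1], [2, 4]]
print(round(fleiss_kappa(ratings), 3))  # prints 0.564
```

Values near 1 indicate near-perfect agreement beyond chance; values near 0 indicate agreement no better than chance.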

Conclusions:

We showed that medical experts can accurately identify the jargon terms whose annotation would be useful for laypeople.



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.