
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Dec 13, 2017
Open Peer Review Period: Dec 13, 2017 - Apr 26, 2018
Date Accepted: Jun 28, 2018

The final, peer-reviewed published version of this preprint can be found here:

Varghese J, Sandmann S, Dugas M

Web-Based Information Infrastructure Increases the Interrater Reliability of Medical Coders: Quasi-Experimental Study

J Med Internet Res 2018;20(10):e274

DOI: 10.2196/jmir.9644

PMID: 30322834

PMCID: 6231825

Web-Based Information Infrastructure Increases the Interrater Reliability of Medical Coders: Quasi-Experimental Study

  • Julian Varghese
  • Sarah Sandmann
  • Martin Dugas

ABSTRACT

Background:

Medical coding is essential for standardized communication and integration of clinical data. The Unified Medical Language System (UMLS), maintained by the National Library of Medicine, is the largest clinical terminology system available to medical coders and natural language processing tools. However, the abundance of ambiguous codes leads to low rates of uniform coding among different coders.

Objective:

The objective of our study was to measure the uniformity of coding among different medical experts in terms of interrater reliability and to analyze the effect of an expert-curated, Web-based code suggestion system on that reliability.

Methods:

We conducted a quasi-experimental study in which 6 medical experts coded 602 medical items from structured quality assurance forms or free-text eligibility criteria of 20 different clinical trials. The medical item content was selected on the basis of mortality-leading diseases according to World Health Organization data. The intervention comprised using a semiautomatic code suggestion tool that is linked to a European information infrastructure providing a large medical text corpus of >300,000 medical form items with expert-assigned semantic codes. Krippendorff alpha (Kalpha) with bootstrap analysis was used for the interrater reliability analysis, and coding times were measured before and after the intervention.
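The paper does not publish its analysis code, but the reliability statistic it names is well defined. A minimal sketch of nominal Krippendorff's alpha with a percentile-bootstrap confidence interval might look like the following in Python; the function names and data layout are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(data):
    """Krippendorff's alpha for nominal codes.

    data: list of units; each unit is the list of codes assigned by
    the coders (use None for a missing rating). Units with fewer than
    two ratings are skipped, as in Krippendorff's formulation.
    """
    coincidence = Counter()
    for unit in data:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue
        # Each ordered pair of ratings within a unit contributes
        # 1/(m-1) to the coincidence matrix.
        for c, k in permutations(values, 2):
            coincidence[(c, k)] += 1.0 / (m - 1)
    totals = Counter()
    for (c, _k), w in coincidence.items():
        totals[c] += w
    n = sum(totals.values())
    # Observed and expected disagreement under the nominal metric.
    d_o = sum(w for (c, k), w in coincidence.items() if c != k) / n
    d_e = sum(totals[c] * totals[k]
              for c in totals for k in totals if c != k) / (n * (n - 1))
    if d_e == 0:  # every rating falls in a single category
        return 1.0
    return 1.0 - d_o / d_e

def bootstrap_ci(data, n_boot=2000, seed=1):
    """Percentile 95% CI by resampling units with replacement."""
    rng = random.Random(seed)
    stats = sorted(
        krippendorff_alpha_nominal([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    )
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]
```

For example, two coders rating four items as `[[1, 1], [1, 1], [2, 2], [1, 2]]` yield alpha = 8/15 ≈ 0.53. Note that the published study resampled over 602 items per condition; this sketch only illustrates the mechanics.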

Results:

The intervention improved interrater reliability in structured quality assurance form items (from Kalpha=0.50, 95% CI 0.43-0.57 to Kalpha=0.62, 95% CI 0.55-0.69) and free-text eligibility criteria (from Kalpha=0.19, 95% CI 0.14-0.24 to Kalpha=0.43, 95% CI 0.37-0.50) while preserving or slightly reducing the mean coding time per item for all 6 coders. Regardless of the intervention, precoordinated and structured items were associated with significantly higher interrater reliability, and the proportion of items that were precoordinated increased significantly after the intervention (eligibility criteria: OR 4.92, 95% CI 2.78-8.72; quality assurance: OR 1.96, 95% CI 1.19-3.25).
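The odds ratios above compare how often items were precoordinated before versus after the intervention. The abstract does not state how the intervals were derived; a common choice is the Wald interval on the log odds ratio, sketched here with hypothetical 2x2 counts (not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI.

    a, b: precoordinated / not precoordinated after intervention
    c, d: precoordinated / not precoordinated before intervention
    (labels are illustrative; any exposed/unexposed split works)
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the four cell counts.
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)
```

For instance, `odds_ratio_ci(20, 10, 5, 10)` gives OR 4.0 with a CI spanning it; with the study's much larger item counts the intervals would be correspondingly narrower.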

Conclusions:

The Web-based code suggestion mechanism improved interrater reliability toward moderate or even substantial intercoder agreement. Precoordination and the use of structured versus free-text data elements are key drivers of higher interrater reliability.


Per the author's request the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license upon publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.