Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Dec 10, 2020
Date Accepted: May 24, 2021

The final, peer-reviewed published version of this preprint can be found here:

Recursive Partitioning vs Computerized Adaptive Testing to Reduce the Burden of Health Assessments in Cleft Lip and/or Palate: Comparative Simulation Study

Harrison CJ, Sidey-Gibbons CJ, Klassen AF, Wong Riff KWY, Furniss D, Swan MC, Rodrigues JN


J Med Internet Res 2021;23(7):e26412

DOI: 10.2196/26412

PMID: 34328443

PMCID: 8367147

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Recursive partitioning vs computerized adaptive testing to reduce the burden of health assessment in cleft lip and/or palate

  • Conrad J. Harrison; 
  • Chris J. Sidey-Gibbons; 
  • Anne F. Klassen; 
  • Karen W. Y. Wong Riff; 
  • Dominic Furniss; 
  • Marc C. Swan; 
  • Jeremy N. Rodrigues

ABSTRACT

Background:

Computerized adaptive testing (CAT) has been shown to deliver short, accurate, and personalized versions of the CLEFT-Q patient-reported outcome measure (PROM) for children and young adults born with a cleft lip and/or palate. Decision trees may be able to integrate clinician-reported data (e.g., age, gender, cleft type, and planned treatments) to make these assessments even shorter and/or more accurate.

Objective:

We aimed to create decision tree models that incorporate clinician-reported data into adaptive CLEFT-Q assessments and to compare their accuracy with that of traditional CAT models.

Methods:

We used relevant clinician-reported data and patient-reported item responses from the CLEFT-Q field test to train and test decision tree models using recursive partitioning. We compared the prediction accuracy of the decision trees with that of CAT assessments of similar length, using participant scores from the full-length questionnaire as the ground truth. Accuracy was assessed with the Pearson correlation coefficient between predicted and ground truth scores, mean absolute error, root mean squared error, and a two-tailed Wilcoxon signed-rank test comparing absolute errors.
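The evaluation pipeline described above can be sketched in Python. This is a hypothetical illustration, not the authors' code: the data here are synthetic stand-ins for CLEFT-Q item responses and clinician-reported covariates, and scikit-learn's DecisionTreeRegressor stands in for the recursive partitioning implementation used in the study.

```python
# Hypothetical sketch of the described evaluation, using synthetic data and
# scikit-learn's recursive-partitioning implementation (DecisionTreeRegressor).
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 10 polytomous item responses (1-4) and 2 categorical
# clinician-reported covariates (e.g., coded cleft type and planned treatment).
n = 500
items = rng.integers(1, 5, size=(n, 10))
clinician = rng.integers(0, 3, size=(n, 2))
X = np.hstack([items, clinician])

# Full-length scale score serves as the ground truth, as in the study design.
ground_truth = items.sum(axis=1).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, ground_truth, random_state=0)

# Limit depth so the tree consults roughly as many variables per respondent
# as a short CAT administers items.
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)

r, _ = stats.pearsonr(pred, y_te)             # correlation with ground truth
mae = np.mean(np.abs(pred - y_te))            # mean absolute error
rmse = np.sqrt(np.mean((pred - y_te) ** 2))   # root mean squared error

# Two-tailed Wilcoxon signed-rank test on paired absolute errors; here a
# mean-score baseline stands in for the CAT comparator of similar length.
baseline_err = np.abs(y_tr.mean() - y_te)
tree_err = np.abs(pred - y_te)
_, p = stats.wilcoxon(tree_err, baseline_err)

print(f"r={r:.2f} MAE={mae:.2f} RMSE={rmse:.2f} p={p:.3g}")
```

The same four quantities (Pearson r, MAE, RMSE, and the Wilcoxon p value) correspond to the accuracy measures listed above; in the study these were computed against each comparator CAT rather than a mean-score baseline.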

Results:

Decision trees demonstrated poorer accuracy than the CAT comparators and generally split the data on item responses rather than on clinician-reported data.

Conclusions:

When predicting CLEFT-Q scores, individual item responses are generally more informative than clinician-reported data. Decision trees that make binary splits are at risk of underfitting polytomous PROM scale data and demonstrated poorer performance than CATs in this study.
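The underfitting risk noted above can be made concrete with a toy example (hypothetical, not from the paper): a single binary split on a 4-category ordinal item can only separate respondents into two groups, collapsing the ordinal information that a CAT's item-response-theory model retains.

```python
# Hypothetical illustration: one binary split on a polytomous (4-level)
# item yields only two distinct score predictions, collapsing categories.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

item = np.array([1, 2, 3, 4] * 25).reshape(-1, 1)  # ordinal item responses
score = item.ravel().astype(float) * 10.0          # score rises with each level

stump = DecisionTreeRegressor(max_depth=1).fit(item, score)
print(np.unique(stump.predict(item)))  # 2 distinct predictions for 4 levels
```

A deeper tree can recover the remaining levels, but only by spending further splits on the same item, which is part of why the trees in this study needed item-level splits to compete with CAT.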



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.