
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Oct 29, 2019
Open Peer Review Period: Oct 29, 2019 - Dec 24, 2019
Date Accepted: Jun 11, 2020

The final, peer-reviewed published version of this preprint can be found here:

Backx R, Skirrow C, Dente P, Barnett JH, Cormack FK

Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study

J Med Internet Res 2020;22(8):e16792

DOI: 10.2196/16792

PMID: 32749999

PMCID: 7435628

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Bringing Home Cognitive Assessment: Initial Validation of Unsupervised Web-based Cognitive Testing on the Cambridge Neuropsychological Test Automated Battery (CANTAB) using a within-subjects counterbalanced design

  • Rosa Backx; 
  • Caroline Skirrow; 
  • Pasquale Dente; 
  • Jennifer H Barnett; 
  • Francesca K Cormack

ABSTRACT

Background:

Computerised assessments already confer advantages for deriving accurate and reliable measures of cognitive function, including test standardisation, accuracy of response recordings and automated scoring. Web-based cognitive assessment could improve accessibility and flexibility of research and clinical assessment, widen participation and promote research recruitment whilst simultaneously reducing costs. However, differences between lab-based and unsupervised cognitive assessment may influence task performance. Validation is required to establish reliability, equivalency and agreement with respect to gold-standard lab-based assessments.

Objective:

The current study validates an unsupervised web-based version of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study tests: 1) reliability, the correlation between measurements across participants, 2) equivalence, the extent to which test results in different settings produce similar, or by contrast, different overall results, and 3) agreement, by quantifying acceptable limits to bias and differences between the different measurement environments.

Methods:

Fifty-one healthy adults (32 women, 19 men; mean age 37 years) completed two testing sessions on average one week apart. Assessments included equivalent tests of emotion recognition (Emotion Recognition Task: ERT), visual recognition (Pattern Recognition Memory: PRM), episodic memory (Paired Associate Learning: PAL), working memory and spatial planning (Spatial Working Memory: SWM; One-Touch Stockings of Cambridge: OTS), and sustained attention (Rapid Visual Information Processing: RVP). Participants were randomly allocated to one of two groups, either assessed in-person first (n=33) or using web-based assessment first (n=18). Performance measures (errors, correct trials, response sensitivity), and median reaction times were extracted. Analyses included intra-class correlations (ICC) to examine reliability, linear mixed models and Bayesian paired samples t-tests to test for equivalence, and Bland-Altman plots to examine agreement.
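The two core analyses named above can be sketched briefly. The following is an illustrative Python sketch, not the authors' analysis code: it computes a two-way random-effects, absolute-agreement ICC(2,1) and Bland-Altman bias with 95% limits of agreement. The data below are invented for demonstration and are not the study's data; the exact ICC form used by the authors is not stated in the abstract, so ICC(2,1) is an assumption here.

```python
import numpy as np

def icc_2_1(scores):
    # Two-way random-effects, absolute-agreement ICC(2,1).
    # scores: (n_subjects, k_sessions) array, one column per setting.
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session (setting) means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def bland_altman_limits(a, b):
    # Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD).
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    spread = 1.96 * d.std(ddof=1)
    return bias, bias - spread, bias + spread

# Hypothetical scores for 6 subjects, lab vs web (illustration only)
lab = np.array([10.0, 12.0, 9.0, 14.0, 11.0, 13.0])
web = np.array([11.0, 12.5, 9.5, 13.0, 11.5, 14.0])

icc = icc_2_1(np.column_stack([lab, web]))
bias, lo, hi = bland_altman_limits(web, lab)
```

A positive bias here would indicate systematically higher scores in the web-based setting; agreement is judged by whether the limits of agreement are acceptably narrow for the measure in question, which is exactly why a high ICC alone does not establish equivalence.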

Results:

Intra-class correlation coefficients ranged from 0.23-0.67, with high correlations in three performance measures (from PAL, SWM and RVP tasks, ≥0.60). High intra-class correlations were also seen for reaction time measures from two tasks (PRM and ERT tasks, ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance measures did not differ between assessment modalities, and generally showed satisfactory agreement.

Conclusions:

Our results support the use of CANTAB performance measures (errors, correct trials, response sensitivity) in unsupervised web-based assessments. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variation in home computer hardware. Results underline the importance of examining more than one index to ascertain validity, since high correlations can be present in the context of consistent, systematic differences which are a product of differences between measurement environments. Further work is now needed to validate web-based assessments in clinical populations, and in larger samples to improve sensitivity for detecting subtler differences between test settings.



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.