Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Feb 27, 2019
Open Peer Review Period: Feb 27, 2019 - Mar 6, 2019
Date Accepted: Jan 22, 2020

The final, peer-reviewed published version of this preprint can be found here:

Massive Open Online Course Evaluation Methods: Systematic Review

Alturkistani A, Lam C, Foley K, Stenfors T, Blum E, Van Velthoven MH, Meinert E

Massive Open Online Course Evaluation Methods: Systematic Review

J Med Internet Res 2020;22(4):e13851

DOI: 10.2196/13851

PMID: 32338618

PMCID: 7215503

Massive Open Online Course (MOOC) Evaluation Methods: A Systematic Review

  • Abrar Alturkistani; 
  • Ching Lam; 
  • Kimberley Foley; 
  • Terese Stenfors; 
  • Elizabeth Blum; 
  • Michelle Helena Van Velthoven; 
  • Edward Meinert

ABSTRACT

Background:

Massive open online courses (MOOCs) have the potential for broad educational impact because of the large number of learners who undertake these courses. Despite their reach, little is known about which methods are used to evaluate these courses.

Objective:

This review aims to identify current MOOC evaluation methods to inform future study designs.

Methods:

We systematically searched the following databases for studies published from January 2008 to October 2018: (1) Scopus, (2) Education Resources Information Center (ERIC), (3) IEEE Xplore, (4) MEDLINE/PubMed, (5) Web of Science, (6) British Education Index, and (7) the Google Scholar search engine. Two reviewers independently screened the titles and abstracts of the studies. Published studies in English that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, and the data collection and data analysis methods were analyzed quantitatively and qualitatively. The quality of the included studies was appraised using the Cochrane Collaboration risk of bias tool for randomized controlled trials and the National Institutes of Health (NIH) National Heart, Lung, and Blood Institute quality assessment tools for observational cohort studies and for before-after (pre-post) studies with no control group.

Results:

The initial search yielded 3275 studies, of which 33 eligible studies were included in this review. Most studies had a cross-sectional design and evaluated one version of a MOOC. We found that studies were mostly motivated by learner-focused, teaching-focused, or platform-focused concerns. The most commonly used data collection methods were surveys, learning management system data, and quiz grades; the most commonly used data analysis methods were descriptive and inferential statistics. The methods for evaluating the outcomes of these courses were diverse and unstructured. Most cross-sectional studies received a low quality rating, whereas randomized controlled trials and quasi-experimental studies received higher quality ratings.

Conclusions:

MOOC evaluation data collection and data analysis methods should be chosen carefully based on the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced by using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods. Clinical Trial: N/A



Per the author's request, the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.