
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Feb 27, 2019
Open Peer Review Period: Feb 27, 2019 - Mar 6, 2019
Date Accepted: Jan 22, 2020

The final, peer-reviewed published version of this preprint can be found here:

Massive Open Online Course Evaluation Methods: Systematic Review

Alturkistani A, Lam C, Foley K, Stenfors T, Blum ER, Van Velthoven MH, Meinert E

Massive Open Online Course Evaluation Methods: Systematic Review

J Med Internet Res 2020;22(4):e13851

DOI: 10.2196/13851

PMID: 32338618

PMCID: 7215503

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Massive Open Online Course Evaluation Methods: Systematic Review

  • Abrar Alturkistani; 
  • Ching Lam; 
  • Kimberley Foley; 
  • Terese Stenfors; 
  • Elizabeth R Blum; 
  • Michelle Helena Van Velthoven; 
  • Edward Meinert

Background:

Massive open online courses (MOOCs) have the potential to make a broad educational impact because they reach large numbers of learners. Despite this reach, little is known about which methods are used to evaluate these courses.

Objective:

The aim of this review was to identify current MOOC evaluation methods to inform future study designs.

Methods:

We systematically searched the following databases for studies published from January 2008 to October 2018: (1) Scopus, (2) Education Resources Information Center, (3) IEEE (Institute of Electrical and Electronics Engineers) Xplore, (4) PubMed, (5) Web of Science, (6) British Education Index, and (7) the Google Scholar search engine. Two reviewers independently screened the abstracts and titles of the studies. Published studies in the English language that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, data collection methods, and data analysis methods were quantitatively and qualitatively analyzed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomized controlled trials (RCTs) and the National Institutes of Health-National Heart, Lung, and Blood Institute quality assessment tool for cohort observational studies and for before-after (pre-post) studies with no control group.

Results:

The initial search resulted in 3275 studies, and 33 eligible studies were included in this review. In total, 16 studies used a quantitative study design, 11 used a qualitative design, and 6 used a mixed methods study design. In all, 16 studies evaluated learner characteristics and behavior, and 20 studies evaluated learning outcomes and experiences. A total of 12 studies used 1 data source, 11 used 2 data sources, 7 used 3 data sources, 4 used 4 data sources, and 1 used 5 data sources. Overall, 5 studies used more than 3 data sources in their evaluation. In terms of the data analysis methods, quantitative methods were most prominent, with descriptive and inferential statistics being the 2 most frequently used methods. In all, 26 studies with a cross-sectional design had a low-quality assessment, whereas RCTs and quasi-experimental studies received a high-quality assessment.

Conclusions:

MOOC evaluation data collection and data analysis methods should be chosen carefully on the basis of the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced by using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods.

International Registered Report:

RR2-10.2196/12087



Per the author's request the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.