
Accepted for/Published in: JMIR Medical Education

Date Submitted: May 6, 2018
Open Peer Review Period: May 9, 2018 - Jul 4, 2018
Date Accepted: Jan 26, 2019

The final, peer-reviewed published version of this preprint can be found here:

Data Collection Approaches to Enable Evaluation of a Massive Open Online Course About Data Science for Continuing Education in Health Care: Case Study

Alturkistani A, Majeed A, Car J, Brindley D, Wells G, Meinert E

Data Collection Approaches to Enable Evaluation of a Massive Open Online Course About Data Science for Continuing Education in Health Care: Case Study

JMIR Med Educ 2019;5(1):e10982

DOI: 10.2196/10982

PMID: 30938683

PMCID: 6465971

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Data Collection Approaches to Enable Evaluation of a Massive Open Online Course About Data Science for Continuing Education in Health Care: Case Study

  • Abrar Alturkistani; 
  • Azeem Majeed; 
  • Josip Car; 
  • David Brindley; 
  • Glenn Wells; 
  • Edward Meinert

Background:

This study presents learner perceptions of a pilot massive open online course (MOOC).

Objective:

The objective of this study was to explore data collection approaches, based on semistructured interviews and the Kirkpatrick evaluation model, to help inform future MOOC evaluations.

Methods:

A total of 191 learners joined 2 course runs of a limited trial of the MOOC. Of these, 7 learners volunteered to be interviewed for the study. The study design drew on semistructured interviews with 2 of these learners, which were transcribed and analyzed using Braun and Clarke’s method for thematic coding. This limited participant set was used to identify how the Kirkpatrick evaluation model could be used to evaluate further implementations of the course at scale.

Results:

The study identified several themes that could be used for further analysis. The themes and subthemes include learner background (educational, professional, and topic significance), MOOC learning (learning achievement and MOOC application), and MOOC features (MOOC positives, MOOC negatives, and networking). There were insufficient data points to perform a Kirkpatrick evaluation.

Conclusions:

Semistructured interviews for MOOC evaluation can provide a valuable in-depth analysis of learners’ experience of the course. However, a Kirkpatrick evaluation requires sufficient data sources to allow data triangulation. For example, data from precourse and postcourse surveys, quizzes, and test results could be used to improve the evaluation methodology.



Per the author's request the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.