Accepted for/Published in: JMIR Medical Education

Date Submitted: May 6, 2018
Open Peer Review Period: May 9, 2018 - Jul 4, 2018
Date Accepted: Jan 26, 2019

The final, peer-reviewed published version of this preprint can be found here:

Alturkistani A, Majeed A, Car J, Brindley D, Wells G, Meinert E

Data Collection Approaches to Enable Evaluation of a Massive Open Online Course About Data Science for Continuing Education in Health Care: Case Study

JMIR Med Educ 2019;5(1):e10982

DOI: 10.2196/10982

PMID: 30938683

PMCID: 6465971

Data collection approaches to enable evaluation of a Massive Open Online Course (MOOC) about data science for continuing education in healthcare

  • Abrar Alturkistani; 
  • Azeem Majeed; 
  • Josip Car; 
  • David Brindley; 
  • Glenn Wells; 
  • Edward Meinert

ABSTRACT

Background:

This paper presents learner perceptions of a pilot Massive Open Online Course (MOOC).

Objective:

The aim of this study was to explore data collection approaches, based on semi-structured interviews and the Kirkpatrick evaluation model, to help inform future MOOC evaluations.

Methods:

A total of 191 learners joined two course runs of a limited trial of the MOOC. Seven learners volunteered to be interviewed for the study. The study design drew on semi-structured interviews with 2 learners, which were transcribed and analysed using Braun and Clarke's method for thematic coding. This limited participant set was used to identify how the Kirkpatrick evaluation model could be used to evaluate further implementations of the course at scale.

Results:

The study identified several themes that could be used for further analysis. The themes and subthemes included: learner background (educational, professional, topic significance), MOOC learning (learning achievement, MOOC application), and MOOC features (MOOC positives, MOOC negatives, networking). There were insufficient data points to perform a Kirkpatrick evaluation.

Conclusions:

Semi-structured interviews for MOOC evaluation can provide a valuable in-depth analysis of learners' experience of a course. However, completing a Kirkpatrick evaluation requires sufficient data sources to allow for data triangulation; for example, data from pre-course and post-course surveys, quizzes, and/or test results could be used to improve the evaluation methodology. Clinical Trial: The evaluation received ethical approval from the Imperial College London Education Ethics Review Process (EERP) (EERP1617-030).


Per the author's request the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.