Accepted for/Published in: JMIR Medical Education
Date Submitted: Dec 13, 2018
Open Peer Review Period: Dec 17, 2018 - Jan 10, 2019
Date Accepted: Jan 30, 2019
How do we evaluate postgraduate medical e-learning? A systematic review.
ABSTRACT
Background:
E-learning is expanding rapidly in postgraduate medical education, yet it tends to be evaluated only on its primary outcome or learning aim. The effect of e-learning, however, also depends on its instructional design. An overview of the methods currently used to evaluate e-learning design is therefore needed, so that gaps in the literature can be identified and the next steps for evaluating postgraduate medical e-learning can be defined.
Objective:
To identify the outcomes and methods used to evaluate postgraduate medical e-learning and to compare those methods and outcomes with each other.
Methods:
We performed a systematic literature review of the Web of Science, PubMed, ERIC and CINAHL databases. Studies with postgraduate participants that evaluated any form of e-learning were included. Studies without an evaluation outcome (for example, an e-learning description only) were excluded.
Results:
The initial search identified 5,973 articles, of which 442 were included in our analysis. The study types were trials, prospective cohorts, case reports and reviews. The primary outcomes of the included studies were knowledge, skills and attitude. Twelve instruments evaluated a specific primary outcome, such as laparoscopic skills or training-related stress. The secondary outcomes mainly concerned satisfaction, motivation, efficiency and usefulness. We found 13 e-learning design evaluation methods in 19 studies (4%). These methods evaluated usability, motivational characteristics and the use of learning styles, or were based on instructional design theories, such as Gagne's instructional design, the Heidelberg inventory, Kern's curriculum development steps and a scale based on cognitive load theory. Finally, two instruments attempted to evaluate several aspects of a design, based on the experience of creating e-learning.
Conclusions:
Evaluating the effect of e-learning design is complicated and diverse: there are many ways to do so, and probably many correct ones. The current literature shows, however, that there is still no consensus on which indicators to evaluate. There is a great need for an evaluation tool that is properly constructed, validated and tested. Only then can the effects of e-learning be compared and can the authors of e-learning keep improving their product.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.