Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Aug 6, 2021
Date Accepted: Dec 4, 2021
State-of-the-art of Dashboards on Clinical Indicator Data to support Reflection on Practice: A Scoping Review
ABSTRACT
Background:
There is increasing interest in using routinely collected electronic health data to support reflective practice and long-term professional learning. Studies have evaluated the impact of dashboards on clinician decision-making, task completion time, user satisfaction, and adherence to clinical guidelines.
Objective:
This scoping review summarizes the literature on dashboards based on patient administrative, medical, and surgical data that are intended to support clinicians' reflective practice.
Methods:
A scoping review was conducted using the Arksey and O'Malley framework. Five electronic databases (MEDLINE, EMBASE, Scopus, ACM Digital Library, Web of Science) were searched to identify studies meeting the inclusion criteria. Study selection and characterization were performed by two independent reviewers. One reviewer extracted the data, which were analyzed descriptively to map the available evidence.
Results:
A total of 18 dashboards from eight countries were assessed. The dashboards were designed for performance improvement (n=10), quality and safety initiatives (n=6), and management and operations (n=4). Data visualizations were primarily designed for team use (n=12) rather than for individual clinicians (n=4). Evaluation methods included asking clinicians directly (n=11), observing user behavior through clinical indicator and usage log data (n=14), and usability testing (n=4). The studies reported high scores on standard usability questionnaires, favorable survey results, and positive interview feedback. Improvements in the underlying clinical indicators were observed in seven of nine studies, while two studies reported no significant changes in performance.
Conclusions:
This scoping review maps the current landscape of literature on dashboards based on routinely collected clinical indicator data. While there were common data visualization techniques and clinical indicators used across studies, there was diversity in the design of the dashboards and their evaluation. There was a lack of detail in design processes documented for reproducibility. We identified a lack of interface features to support clinicians to make sense of and reflect on their performance data for long-term professional learning.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.