Accepted for/Published in: JMIR Public Health and Surveillance
Date Submitted: Sep 23, 2021
Date Accepted: Nov 30, 2021
Date Submitted to PubMed: Apr 18, 2022
Interactive versus static decision support tools for COVID-19: An experimental comparison
ABSTRACT
Background:
During the COVID-19 pandemic, medical laypersons with symptoms indicative of a COVID-19 infection commonly seek guidance on whether and where to seek medical care. Numerous web-based decision support tools (DSTs) have been developed, by both public and commercial stakeholders, to assist their decision-making. Though most of the DSTs' underlying algorithms are similar, simple decision trees, their mode of presentation differs: some DSTs present a static flowchart, while others are designed as a conversational agent, guiding the user through the decision tree's nodes step by step in an interactive manner.
Objective:
To investigate whether interactive DSTs provide greater decision support than non-interactive (ie, static) flowcharts.
Methods:
We developed mock interfaces for two DSTs (one static, one interactive), mimicking patient-facing, freely available DSTs for COVID-19 related self-assessment. Their underlying algorithm was identical and based on the Centers for Disease Control and Prevention's guidelines. We recruited adult US residents online. Participants appraised the appropriate social and care-seeking behavior for seven fictitious descriptions of patients (case vignettes). Participants in the experimental groups received either the static or the interactive mock DST as support, while the control group appraised the case vignettes unsupported. We determined participants' accuracy, decision certainty (after deciding), and mental effort to measure the quality of decision support. Participants' ratings of the DSTs' usefulness, ease of use, trustworthiness, and their intention to use the tools in the future served as measures to analyze differences in participants' perception of the tools. We used ANOVAs and t-tests to assess statistical significance.
Results:
Our survey yielded 196 responses. The mean number of correct assessments was higher in the experimental groups (interactive DST group: M=11.71, SD=2.37; static DST group: M=11.45, SD=2.48) than in the control group (M=10.17, SD=2.00; F(2,193)=8.6, p<.001). Decisional certainty was significantly higher in the experimental groups (interactive DST group: M=80.7%, SD=14.1%; static DST group: M=80.5%, SD=15.8%) compared to the control group (M=65.8%, SD=20.8%; F(2,193)=15.7, p<.001). Differences in mental effort among the three study groups were not significant. Effect sizes of differences between the two experimental groups were small and non-significant for all three measures of quality of decision support and for most measures of users' perception of the DSTs.
Conclusions:
When the decision space is limited, as is the case in common COVID-19 self-assessment DSTs, static flowcharts might prove as beneficial in enhancing decision quality as interactive tools. Given that static flowcharts reveal the underlying decision algorithm more transparently and require less effort to develop, they might prove more efficient in providing guidance to the public. Further research should validate our findings on different use cases, elaborate on the trade-off between transparency and convenience in DSTs, and investigate whether subgroups of users benefit more from one type of user interface than the other.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.