Accepted for/Published in: JMIR Serious Games
Date Submitted: Dec 11, 2020
Date Accepted: May 20, 2021
Date Submitted to PubMed: Aug 12, 2021
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Making and Evaluation of Digital Games Used for the Assessment of Attention: A Systematic Review
ABSTRACT
Background:
Serious games are now widely used in many contexts, including psychological research and clinical practice. One area of growing interest is cognitive assessment, which seeks to measure cognitive functions such as memory, attention, and perception. Measuring these functions at both the population and individual level can inform research and reveal health issues. Attention is a particularly important function to assess, as an accurate measure of attention can help diagnose many common disorders, such as attention-deficit/hyperactivity disorder and dementia. Using games to assess attention poses unique problems: games inherently manipulate attention through elements such as sound effects, graphics, and rewards, and research on adding game elements to assessments (ie, gamification) has shown mixed results.
Objective:
The process for developing cognitive tasks is robust, with high psychometric standards that must be met before these tasks are used for assessment. Games offer more diverse approaches to assessment, but there is no standard for how they should be developed. To better understand the field, our objective was to answer the following question: How are digital games used for the cognitive assessment of attention made and measured?
Methods:
After an initial database search that returned 44,172 papers, we conducted a systematic review of 62 papers that use a digital game to measure cognitive functions relating to attention.
Results:
Across the studies in our review, we found three approaches to making assessment games: gamifying cognitive tasks, creating custom games based on theories of cognition, and exploring the potential assessment properties of commercial games. Regarding how the assessment properties of these games were measured (eg, how accurately they assess attention), we found three approaches: comparison with a traditional cognitive task, comparison with a clinical diagnosis, and comparison with knowledge of cognition. However, the majority of studies in our review did not evaluate the game's own properties (eg, whether participants enjoyed the game).
Conclusions:
Our review provides an overview of how games used for the assessment of attention are developed and evaluated. We further identified three barriers to advancing the field and recommend best practices to address these barriers. Our review can act as a resource to help guide the field towards the more standardized approaches and rigorous evaluation required for the widespread adoption of assessment games.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.