Accepted for/Published in: JMIR Research Protocols
Date Submitted: Mar 7, 2023
Open Peer Review Period: Mar 7, 2023 - Apr 24, 2023
Date Accepted: May 31, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Risk of Bias Assessment for Diversity Groups in Community-Based Primary Health Care Artificial Intelligence Systems: A Rapid Review Protocol
ABSTRACT
Background:
Current literature identifies several potential benefits of artificial intelligence (AI) in community-based primary health care (CBPHC). However, there is a lack of understanding of how the risk of bias is considered in the development of CBPHC AI-based algorithms and to what extent they perpetuate or introduce potential biases toward groups that could be considered vulnerable based on their characteristics (e.g., age, sex, gender identity, sexual orientation, race, ethnicity, religion, physical ability, socioeconomic status (SES), etc.). To the best of our knowledge, no reviews are currently available that identify the relevant methods to assess the risk of bias in CBPHC algorithms. There is also a lack of overview of mitigation interventions and of the diversity groups in which they are considered.
Objective:
To identify 1) relevant methods (e.g., frameworks, tools, checklists, etc.) to assess the risk of bias toward diversity groups in the development and/or deployment of algorithms in CBPHC and 2) mitigation interventions deployed to promote equity, diversity, and inclusion (EDI) in these algorithms.
Methods:
Rapid review of the literature published in the last 5 years in four databases (PubMed, CINAHL, Web of Science, and PsycINFO). Two reviewers will independently screen the titles and abstracts and the full text of the identified records. We will include all studies on methods developed and/or tested to assess the risk of bias in algorithms that can be relevant in CBPHC settings. Data extraction will use a validated extraction grid. We will present results using structured narrative summaries.
Results:
In November 2022, an information specialist developed a specific search strategy, based on the main concepts of our primary review question, for the most relevant databases (PubMed, CINAHL, Web of Science, and PsycINFO), covering the last 5 years. We completed the search in December 2022, and 1022 sources were identified. We planned to start the screening in February 2023 and to complete the review by June 2023.
Conclusions:
This review will provide a comprehensive description of the bias risk assessment methods used in CBPHC algorithms. This knowledge could be useful to researchers and other CBPHC stakeholders in identifying potential sources of bias in algorithm development and, eventually, reducing or eliminating them. Clinical Trial: N/A
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.