
Accepted for/Published in: JMIR Research Protocols

Date Submitted: Mar 7, 2023
Open Peer Review Period: Mar 7, 2023 - Apr 24, 2023
Date Accepted: May 31, 2023

The final, peer-reviewed published version of this preprint can be found here:

Risk of Bias Mitigation for Vulnerable and Diverse Groups in Community-Based Primary Health Care Artificial Intelligence Models: Protocol for a Rapid Review

Sasseville M, Ouellet S, Rhéaume C, Couture V, Després P, Paquette JS, Gentelet K, Darmon D, Bergeron F, Gagnon MP

Risk of Bias Mitigation for Vulnerable and Diverse Groups in Community-Based Primary Health Care Artificial Intelligence Models: Protocol for a Rapid Review

JMIR Res Protoc 2023;12:e46684

DOI: 10.2196/46684

PMID: 37358896

PMCID: 10337340

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Risk of Bias Assessment for Diversity Groups in Community-Based Primary Health Care Artificial Intelligence Systems: A Rapid Review Protocol

  • Maxime Sasseville; 
  • Steven Ouellet; 
  • Caroline Rhéaume; 
  • Vincent Couture; 
  • Philippe Després; 
  • Jean-Sébastien Paquette; 
  • Karine Gentelet; 
  • David Darmon; 
  • Frédéric Bergeron; 
  • Marie-Pierre Gagnon

ABSTRACT

Background:

Current literature identifies several potential benefits of artificial intelligence (AI) in community-based primary health care (CBPHC). However, little is known about how the risk of bias is considered in the development of CBPHC AI-based algorithms, or to what extent these algorithms perpetuate or introduce biases toward groups that could be considered vulnerable because of their characteristics (e.g., age, sex, gender identity, sexual orientation, race, ethnicity, religion, physical ability, and socioeconomic status [SES]). To the best of our knowledge, no reviews are currently available that identify relevant methods to assess the risk of bias in CBPHC algorithms, and there is no overview of mitigation interventions or of the groups in which they are considered.
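The protocol itself does not prescribe any particular bias metric; the reviews it will identify are expected to cover many. As an illustrative sketch only, one common way such assessments quantify group-level bias in a predictive model is the demographic parity difference: the gap in positive-prediction rates between two groups. The function name and toy data below are hypothetical, not from the protocol.

```python
def demographic_parity_difference(y_pred, groups, group_a, group_b):
    """Difference in positive-prediction rates between two groups.

    y_pred: iterable of 0/1 model predictions.
    groups: iterable of group labels, aligned with y_pred.
    """
    def positive_rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)

    return positive_rate(group_a) - positive_rate(group_b)


# Toy example: a model flags 3 of 4 patients in group "A"
# but only 1 of 4 in group "B" -- a 0.5 parity gap.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups, "A", "B"))  # 0.5
```

A value of 0 would indicate identical positive-prediction rates across the two groups; larger absolute values indicate a larger disparity.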

Objective:

To identify 1) relevant methods (e.g., frameworks, tools, and checklists) to assess the risk of bias toward diversity groups in the development and/or deployment of algorithms in CBPHC and 2) mitigation interventions deployed to promote and increase equity, diversity, and inclusion (EDI) in these algorithms.
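The mitigation interventions the review will catalogue are left open by the protocol. Purely as a hedged example of one well-known pre-processing technique, the sketch below implements Kamiran-and-Calders-style reweighing, which assigns each training sample a weight that equalizes the joint distribution of group membership and outcome; under-represented (group, label) combinations are up-weighted. The function name and data are hypothetical.

```python
from collections import Counter


def reweighing_weights(groups, labels):
    """Per-sample weights w(g, y) = P(g) * P(y) / P(g, y), so that the
    weighted data has statistically independent group and label."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]


# Toy data: group "A" has mostly positive outcomes, group "B" mostly negative;
# reweighing up-weights the rarer (A, 0) and (B, 1) combinations.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
print(reweighing_weights(groups, labels))
```

These weights would then be passed to a learner that supports per-sample weighting, so the fitted model no longer sees an association between group membership and outcome frequency.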

Methods:

We will conduct a rapid review of the literature published in the last 5 years in four databases (PubMed, CINAHL, Web of Science, and PsycInfo). Two reviewers will independently screen the titles and abstracts and then the full text of the identified records. We will include all studies on methods developed and/or tested to assess the risk of bias in algorithms relevant to CBPHC settings. Data will be extracted using a validated extraction grid, and results will be presented as structured narrative summaries.

Results:

In November 2022, an information specialist developed a search strategy based on the main concepts of our primary review question, covering the most relevant databases (PubMed, CINAHL, Web of Science, and PsycInfo) and limited to the last 5 years. We completed the search in December 2022, identifying 1022 sources. We planned to start screening in February 2023 and to complete the review by June 2023.

Conclusions:

This review will provide a comprehensive description of the risk-of-bias assessments used for CBPHC algorithms. This knowledge could help researchers and other CBPHC stakeholders identify potential sources of bias in algorithm development and, eventually, reduce or eliminate them. Clinical Trial: N/A



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.