Human-Centered Design to Address Biases in Artificial Intelligence
You Chen;
Ellen Wright Clayton;
Laurie Lovett Novak;
Shilo Anders;
Bradley Malin
ABSTRACT
Artificial intelligence (AI) promises to help health organizations deliver equitable care to their patients and optimize administrative processes. However, the complex lifecycle of AI can introduce biases that exacerbate health disparities and inequities. As AI applications take on more central roles in biomedical research and healthcare, it is crucial to determine how best to maximize their benefits while minimizing their risks to patients and healthcare systems. One way to accomplish this is by involving a diverse group of stakeholders in the development and implementation of AI in healthcare. This perspective highlights the dual impact of AI on health disparities and inequities, potential biases in each stage of the AI design, development, and deployment lifecycle, and tools for identifying and mitigating these biases, and finally illustrates how human-centered AI (HCAI) can be applied to recognize and address these biases.
Citation
Please cite as:
Chen Y, Clayton EW, Novak LL, Anders S, Malin B
Human-Centered Design to Address Biases in Artificial Intelligence