Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jul 22, 2022
Date Accepted: Apr 20, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
How Fair is Artificial Intelligence in Healthcare? – An Online Survey
ABSTRACT
Background:
Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications that aim to improve the diagnosis, treatment, and prevention of diseases. While the need for transparency and the reduction of bias in data and algorithm development have been addressed in past studies, little is known about the active measures taken in current AI development.
Objective:
This study’s objective was to survey AI specialists in healthcare to investigate developers’ perceptions of bias in AI algorithms for healthcare applications.
Methods:
An online survey was provided in both German and English, comprising a maximum of 41 questions using branching logic within the REDCap® web application. Only responses from participants with experience in the field of medical AI applications were included in the analysis. Demographic data, technical expertise, perception of fairness, and knowledge of biases in AI were analyzed, and variations by gender, age, and work environment were assessed.
Results:
A total of 151 AI specialists completed the online survey. The median age was 30 years (IQR 26-39), and 67% of respondents were male. Five percent had never heard of biases in AI before; roughly one third rated their development as fair (31%, 47/151) or moderately fair (34%, 51/151). Twelve percent (18/151) reported their AI to be barely fair and 1% (2/151) not fair at all. Cited reasons for biases were a lack of fair data (68%, 90/132), of guidelines and recommendations (49%, 65/132), or of knowledge (45%, 60/132). We found a significant difference in bias perception by work environment (p=.020): 5% of respondents working in industry, compared with 25% of respondents working clinically, rated their AI developments as not fair at all or barely fair.
Conclusions:
This study highlights that knowledge and guidelines on preventive measures against bias, as well as on generating fair data with the help of the FAIR principles, must be further disseminated to establish fair AI healthcare applications. The difference in fairness perception between AI developers from industry and those from clinical environments needs further investigation.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.