Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 27, 2023
Date Accepted: Sep 30, 2023
“You’re not allowed to fail with people”: A qualitative analysis of interviews with developers of machine learning for health care
ABSTRACT
Background:
Machine learning predictive analytics (MLPA) are increasingly used in health care to reduce costs and improve efficacy, but they also have the potential to harm patients and erode trust in health care. Academic and regulatory leaders have proposed a variety of principles and guidelines for evaluating the safety of ML-based software in the health care context, but accepted practices do not yet exist. Meanwhile, regulation appears to be shifting toward process-based paradigms that rely heavily on self-regulation. Yet little research has examined how MLPA developers themselves perceive potential harms, even though their role will be essential in overcoming the “principles-to-practice” gap.
Objective:
The objective of this study was to understand how developers of MLPA products for health care perceive the potential harms of those products and how they respond to recognized harms.
Methods:
We interviewed 40 individuals who were developing MLPA tools for health care at 15 US-based organizations, including data scientists, software engineers, and individuals in mid- and high-level management roles. These 15 organizations were selected to represent a range of organizational types and sizes from among 106 organizations we had previously identified. In interviews, we asked participants about their perspectives on the potential harms of their work, the factors that influence these harms, and their role in mitigating them. We used standard qualitative analysis of transcribed interviews to identify themes in the data.
Results:
We found that MLPA developers recognized a range of potential harms of MLPA to individuals, to social groups, and to the health care system. They also identified drivers of these harms, some related to inherent characteristics of ML and others specific to the health care and commercial contexts in which the products are developed. Developers described strategies to respond to these drivers and potentially mitigate the harms; however, their recognition of their own responsibility to address potential harms varied widely.
Conclusions:
Even though MLPA developers recognized that their products can harm patients, the public, and even health systems, robust systems to assess and assure the safety of these tools do not exist. Our findings suggest that, to the extent that new oversight paradigms rely on self-regulation, they will face serious challenges when harms are driven by features inherent to health care and the business environment. Furthermore, effective self-regulation will require MLPA developers to accept responsibility for safety and efficacy and to know how to act on that responsibility. Our results suggest that, at the very least, substantial education will be necessary to bridge the “principles-to-practice” gap.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.