Accepted for/Published in: JMIR Formative Research
Date Submitted: Mar 16, 2025
Open Peer Review Period: Mar 16, 2025 - May 11, 2025
Date Accepted: Sep 24, 2025
Use first, Trust later? Exploring How Healthcare Providers View the Gaps Between AI’s Regulation and its Implementation
ABSTRACT
As Artificial Intelligence (AI) transforms healthcare, aligning implementation with evolving management strategies is critical. However, limited research explores the link between the specific nature of AI regulation in healthcare and the management of its deployment. FDA and EC regulatory frameworks typically focus on pre-market approval and validation, yet largely fail to address the need for continuous monitoring and re-validation of AI models post-market. As AI models are exposed to new data in clinical settings, their performance may degrade or drift over time, necessitating ongoing oversight. This often means that healthcare providers must step into a zone of regulatory uncertainty and develop local protocols for quality assurance and recalibration. This study explores how the specific nature of guidelines for AI in healthcare creates an experimental space in which healthcare managers and expert users (radiologists and other physicians) configure a usable framework for AI implementation during an innovative, early-adoption phase.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license upon publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.