Currently submitted to: Journal of Medical Internet Research
Date Submitted: Mar 17, 2026
Open Peer Review Period: Apr 16, 2026 - Jun 11, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
When Learning Cycles Turn Vicious: A Governance Model for AI-Enabled Learning Health Systems
ABSTRACT
The shared commitments trust framework codified by the National Academy of Medicine and the AI-enabled learning health system framework proposed by Ko et al. together provide the normative foundation and operational architecture that AI-driven clinical learning requires. Neither, however, specifies the governance mechanisms needed when AI-mediated learning produces harm rather than improvement. This paper identifies three governance gaps that expose AI-enabled learning health systems to compounding failure: the absence of operational controls that translate shared commitments into enforceable requirements, an accountability vacuum in which no designated actor bears responsibility at each stage of the AI learning lifecycle, and the lack of a failure detection mechanism capable of identifying when learning cycles become vicious rather than virtuous. To address these gaps, the paper proposes an integrated governance model comprising three interdependent layers. A control layer maps each shared commitment to auditable requirements with defined metrics and responsible actors. An accountability layer assigns explicit responsibility across five stages of the AI learning lifecycle with quantitative escalation triggers. A failure detection layer monitors six trust decay indicators and activates a circuit breaker mechanism when predefined thresholds are breached, enabling institutional intervention before harm compounds at machine speed. The model is offered as a practical complement to existing frameworks, providing health system leaders, policymakers, and researchers with the governance infrastructure required for safe and trustworthy AI-enabled learning at scale.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.