
Accepted for/Published in: JMIR AI

Date Submitted: Jul 2, 2025
Date Accepted: Dec 11, 2025

The final, peer-reviewed published version of this preprint can be found here:

Bottomly D, Barnes B, Mavuwa K, Lee N, Roth H, Chen C, McWeeney S

A Pragmatic Framework for Federated Learning Risk and Governance in Academic Medical Centers

JMIR AI 2026;5:e80022

DOI: 10.2196/80022

PMID: 41810822

PMCID: 12977002

A Pragmatic Framework for Federated Learning Risk and Governance in Academic Medical Centers

  • Daniel Bottomly; 
  • Bridget Barnes; 
  • Kuli Mavuwa; 
  • Nikki Lee; 
  • Holger Roth; 
  • Chester Chen; 
  • Shannon McWeeney

ABSTRACT

With the rapid development of artificial intelligence (AI), particularly large language models, there is growing interest in adopting AI approaches within academic medical centers (AMCs). However, the vast amounts of data required for AI and the sensitive nature of medical information pose significant challenges to developing high-performing models at individual institutions. Furthermore, recent changes in government funding priorities may result in the decentralization of biomedical data repositories, risking significant barriers to effective data sharing and robust model development. This has generated strong interest in federated learning (FL), which enables collaborative model training without transferring data between institutions, thereby enhancing the protection of proprietary and sensitive information. Nonetheless, the complexity of FL introduces additional risks, as the distributed nature of training can expose new vulnerabilities related to both the data and the models themselves. These risks often present uncharted territory for AI governance, security, and privacy leadership. Using the common federated learning framework NVIDIA FLARE, we examine the inherent risks associated with platform-defined roles and with privacy and security configurations, and we identify key artifacts essential for a comprehensive risk-management approach. We introduce a risk framework for FL designed to assist AMC leaders in security, privacy, IT, and AI governance in effectively managing these emerging challenges. Finally, we propose platform-agnostic guidance for AI governance committees focused on model certification and evaluation.
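To illustrate the core FL idea the abstract describes — each institution trains locally and only model parameters, never patient-level data, cross the network — here is a minimal federated averaging (FedAvg) sketch. This is a hypothetical toy example with made-up data and a one-parameter model; it does not represent the NVIDIA FLARE workflow or the framework proposed in the paper.

```python
# Toy federated averaging (FedAvg) sketch: two hypothetical sites fit
# a shared 1-D least-squares model y = w*x. Only the parameter list
# (not the sites' raw data) is exchanged with the aggregating server.

def local_update(weights, site_data, lr=0.1):
    """One gradient-descent step on a site's private data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in site_data) / len(site_data)
    return [w - lr * grad]

def fed_avg(site_updates):
    """Server-side aggregation: average each parameter across sites."""
    n = len(site_updates)
    return [sum(u[i] for u in site_updates) / n
            for i in range(len(site_updates[0]))]

# Hypothetical private datasets at two sites, both consistent with w = 2.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]

weights = [0.0]  # initial global model
for _ in range(50):
    updates = [local_update(weights, d) for d in (site_a, site_b)]
    weights = fed_avg(updates)  # only parameters leave each site

print(round(weights[0], 2))  # converges toward 2.0
```

The sketch also hints at why FL introduces new risks: the exchanged parameters are themselves derived from sensitive data, which is why the paper's attention to platform roles and privacy/security configuration matters.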




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.