
Accepted for/Published in: JMIR AI

Date Submitted: Dec 12, 2024
Date Accepted: Oct 11, 2025

The final, peer-reviewed published version of this preprint can be found here:

Tunduny T, Shibwabo B. Explainable AI Approaches in Federated Learning: Systematic Review. JMIR AI 2026;5:e69985. DOI: 10.2196/69985. PMID: 41632959. PMCID: 12914235.

Explainable Artificial Intelligence (XAI) Approaches in Federated Learning (FL): A Systematic Review

  • Titus Tunduny; 
  • Bernard Shibwabo

ABSTRACT

Background:

Artificial intelligence (AI) has recently experienced a rebirth with the growth of generative AI systems such as ChatGPT and Bard. These systems are trained with billions of parameters and have made AI widely accessible and understandable to different user groups. Widespread adoption of AI has created a need to understand how machine learning (ML) models operate in order to build trust in them. Explaining how these models generate their results remains a major challenge that explainable AI seeks to address. Federated learning (FL) grew out of the need for privacy-preserving AI: machine learning models are trained in a decentralized manner while still sharing model parameters with a global model.
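The parameter-sharing scheme described above can be illustrated with federated averaging (FedAvg), the canonical aggregation rule for FL. The sketch below is illustrative only, not a method from any reviewed study: two hypothetical clients fit a linear model on private data, and the server sees only weight vectors, never raw data.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear model.
    Only the updated weights leave the client, never the raw data."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client parameters, weighted by sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients holding private, noiseless data from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))  # converges toward the true weights [2, -1]
```

The point of the sketch is the information flow: each round, clients download the global weights, train locally, and upload only parameters, which the server aggregates.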

Objective:

This study sought to examine the extent of development of explainable AI within the federated learning environment, with respect to the main contributions made, the types of federated learning used, the sectors of application, the models used, the research methods applied, and the databases from which studies were sourced.

Methods:

A systematic review was undertaken across eight electronic databases: Web of Science Core Collection, Scopus, PubMed, ACM Digital Library, IEEE Xplore, Mendeley, BASE Search, and Google Scholar.

Results:

The review of 15 studies revealed that research in explainable federated learning was growing steadily, although it was concentrated in Europe and Asia. The key drivers of federated learning adoption were data privacy and limited training data. Horizontal federated learning remained the preferred approach to federated machine learning, and post-hoc explainability techniques were favored.
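Post-hoc explainability techniques explain a model after training, using only its inputs and outputs, rather than constraining its architecture. As a minimal illustrative sketch (not a method drawn from the reviewed studies), permutation importance applied to an already-trained global model: shuffle one feature at a time and measure how much prediction error grows.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Post-hoc explanation: how much does shuffling one feature hurt the model?
    Needs only a predict function, so it applies to any trained model."""
    rng = np.random.default_rng(seed)
    baseline = np.mean((predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(np.mean((predict(Xp) - y) ** 2) - baseline)
        importances.append(np.mean(drops))
    return np.array(importances)

# Toy data: the target depends on feature 0 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0]
model = lambda X: X @ np.array([3.0, 0.0])  # stand-in for a trained global model

imp = permutation_importance(model, X, y)
print(imp[0] > imp[1])  # feature 0 matters; feature 1 does not
```

Because the technique treats the model as a black box, it can be run at the server on the aggregated model without accessing any client's raw data, which is one reason post-hoc methods pair naturally with federated settings.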

Conclusions:

There is potential for the development of novel approaches and the improvement of existing approaches in the explainable federated learning field, especially for critical application areas.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.