Explainable Artificial Intelligence (XAI) Approaches in Federated Learning (FL): A Systematic Review
ABSTRACT
Background:
Artificial Intelligence (AI) has recently experienced a resurgence with the growth of generative AI systems such as ChatGPT and Bard. These systems are trained with billions of parameters and have made AI widely accessible and comprehensible to different user groups. Widespread adoption of AI has created a need to understand how Machine Learning (ML) models operate in order to build trust in them. Understanding how these models generate their results remains a major challenge that Explainable AI seeks to address. Federated learning (FL) grew out of the need for privacy-preserving AI, using machine learning models that are trained in a decentralized manner but still share model parameters with a global model.
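The parameter-sharing scheme described above can be illustrated with a minimal federated-averaging (FedAvg-style) sketch. This is not taken from any study in the review; all function names, the toy local-update rule, and the client data are hypothetical placeholders meant only to show that clients exchange model parameters, never raw data.

```python
# Minimal sketch of FedAvg-style aggregation (illustrative only).
# Each client trains locally on its private data; only model weights
# are sent to the server, which averages them into a global model.

def local_update(weights, data, lr=0.1):
    """Hypothetical one-step local training: nudge each weight
    toward the mean of the client's private data."""
    target = sum(data) / len(data)
    return [w - lr * (w - target) for w in weights]

def federated_average(client_weights):
    """Server-side aggregation: element-wise mean of client models."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Three clients, each holding data that never leaves the client.
global_model = [0.0, 0.0]
client_data = [[1.0, 3.0], [2.0, 4.0], [0.0, 6.0]]

for _ in range(5):  # communication rounds
    updates = [local_update(list(global_model), d) for d in client_data]
    global_model = federated_average(updates)
```

In this sketch the server only ever sees weight vectors, which is the privacy argument the abstract refers to; real deployments add secure aggregation and many local epochs per round.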
Objective:
This study sought to examine the extent of development of explainable AI within federated learning environments, with respect to the main contributions made, the types of federated learning used, the sectors of application, the models employed, the research methods applied, and the databases from which studies were sourced.
Methods:
A systematic review was undertaken across eight electronic databases, namely Web of Science Core Collection, Scopus, PubMed, ACM Digital Library, IEEE Xplore, Mendeley, BASE Search, and Google Scholar.
Results:
The review of 15 studies revealed that research in explainable federated learning is growing steadily, though it remains concentrated in Europe and Asia. The key drivers of federated learning adoption were data privacy and limited training data. Horizontal federated learning remains the preferred approach to federated machine learning, while post-hoc techniques were the preferred means of explainability.
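Post-hoc explainability, noted above as the preferred approach, means explaining an already-trained model from the outside rather than building interpretability into it. A common post-hoc technique is permutation feature importance, sketched below; the stand-in model and data are hypothetical and not drawn from the reviewed studies.

```python
# Illustrative sketch of a post-hoc explainability technique:
# permutation feature importance on an already-trained model.
import random

def model_predict(x):
    """Stand-in trained model: depends strongly on feature 0,
    weakly on feature 1, and ignores feature 2."""
    return 2.0 * x[0] + 0.5 * x[1]

def permutation_importance(predict, X, y, feature, trials=20, seed=0):
    """Error increase when one feature's column is shuffled:
    a larger increase means the feature mattered more."""
    rng = random.Random(seed)
    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    base = mse([predict(x) for x in X])
    total = 0.0
    for _ in range(trials):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        Xp = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
        total += mse([predict(x) for x in Xp])
    return total / trials - base

# Deterministic toy dataset labelled by the stand-in model itself.
X = [[float(i), float((i * 7) % 5), float(i % 2)] for i in range(30)]
y = [model_predict(x) for x in X]
importances = [permutation_importance(model_predict, X, y, f) for f in range(3)]
```

Because the technique only queries `predict`, it treats the model as a black box, which is what makes post-hoc methods attractive in federated settings where the global model's internals may not be inspectable.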
Conclusions:
There is potential for the development of novel approaches and the improvement of existing approaches in the explainable federated learning field, especially for critical application areas.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.