Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 5, 2025
Date Accepted: Jun 2, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Improving Explainability and Integrability of Medical Artificial Intelligence to Promote Healthcare Professional Acceptance and Usage: A Mixed Systematic Review
ABSTRACT
Background:
The integration of artificial intelligence (AI) in healthcare holds significant potential, yet its acceptance by healthcare providers (HCPs) is essential for successful implementation. Understanding HCPs’ perspectives on the explainability and integrability of medical AI is crucial, as these factors influence their willingness to adopt and effectively use such technologies.
Objective:
To improve the acceptance and use of medical artificial intelligence (AI), this study explores, from a user perspective, health care providers' (HCPs) understanding of the explainability and integrability of medical AI.
Methods:
A mixed systematic review was performed. We conducted a comprehensive search in PubMed, Web of Science, Scopus, IEEE Xplore, ACM Digital Library, and arXiv for studies published between 2014 and 2024. Studies concerning the explainability or integrability of medical AI were included. Study quality was assessed using the JBI Critical Appraisal Checklist and the MMAT, with only medium- or high-quality studies included. Qualitative data were analyzed via thematic analysis, while quantitative findings were synthesized narratively.
Results:
A total of 11,888 articles were retrieved, and 22 were included in the review. All included studies were published from 2020 onward, and most were conducted in developed countries (n = 18). The majority (n = 17) were qualitative, and 5 used quantitative or mixed methods. A conceptual framework of the explainability and integrability of medical AI was identified from HCPs' perspective. For explainability, HCPs were mostly concerned with post-processing explainability, especially local explainability (e.g., feature relevance) and its visualization tools, as determinants of acceptance and adoption. In terms of integrability, HCPs highlighted workflow integration, namely, limited disruption of existing workflows, system compatibility, and ease of use, as the main contributors to their adoption of such new technologies.
Conclusions:
Future AI system designs should prioritize HCPs' needs by enhancing explainability and integrability to promote acceptance and use among healthcare professionals.