
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Mar 5, 2025
Date Accepted: Jun 2, 2025

The final, peer-reviewed published version of this preprint can be found here:

Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review

Liu Y, Liu C, Zheng J, Xu C, Wang D

Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review

J Med Internet Res 2025;27:e73374

DOI: 10.2196/73374

PMID: 40773743

PMCID: 12371287

Improving Explainability and Integrability of Medical Artificial Intelligence to Promote Healthcare Professional Acceptance and Usage: A Mixed Systematic Review

  • Yushu Liu; 
  • Chenxi Liu; 
  • Jianing Zheng; 
  • Chang Xu; 
  • Dan Wang

ABSTRACT

Background:

The integration of artificial intelligence (AI) in healthcare holds significant potential, yet its acceptance by healthcare providers (HCPs) is essential for successful implementation. Understanding HCPs’ perspectives on the explainability and integrability of medical AI is crucial, as these factors influence their willingness to adopt and effectively use such technologies.

Objective:

To improve the acceptance and use of medical artificial intelligence (AI), this study explores, from a user perspective, health care providers' (HCPs') understanding of the explainability and integrability of medical AI.

Methods:

A mixed systematic review was performed. We conducted a comprehensive search of PubMed, Web of Science, Scopus, IEEE Xplore, ACM Digital Library, and arXiv for studies published between 2014 and 2024. Studies concerning the explainability or integrability of medical AI were included. Study quality was assessed using the JBI Critical Appraisal Checklist and the Mixed Methods Appraisal Tool (MMAT), and only medium- or high-quality studies were included. Qualitative data were analyzed via thematic analysis, while quantitative findings were synthesized narratively.

Results:

A total of 11,888 articles were retrieved, and 22 were included in the review. All included studies were published from 2020 onward, and most were conducted in developed countries (n=18). The majority (n=17) were qualitative, and 5 used quantitative or mixed methods. A conceptual framework of the explainability and integrability of medical AI was identified from HCPs' perspective. For explainability, HCPs were mostly concerned with post-processing explainability, especially local explainability (e.g., feature relevance), and regarded its visualization tools as determinants of acceptance and adoption. For integrability, HCPs highlighted workflow integration, namely limited disruption of existing workflows, system compatibility, and ease of use, as the main contributors to their adoption of such new technologies.

Conclusions:

Future AI system designs should prioritize HCPs' needs by enhancing explainability and integrability to promote acceptance and use among health care professionals.


