
Accepted for/Published in: JMIR AI

Date Submitted: Jul 18, 2025
Open Peer Review Period: Jul 21, 2025 - Sep 15, 2025
Date Accepted: Feb 7, 2026
Date Submitted to PubMed: Feb 10, 2026

The final, peer-reviewed published version of this preprint can be found here:

AI in Point-of-Care Imaging for Clinical Decision Support: Systematic Review of Diagnostic Accuracy, Task-Shifting, and Explainability

Wadie P, Zakher B, Elgazzar K, Alsbakhi A, Alhejaily AMG

JMIR AI 2026;5:e80928

DOI: 10.2196/80928

PMID: 41665551

Artificial Intelligence in Point-of-Care Imaging for Clinical Decision Support: Systematic Review of Diagnostic Accuracy, Task-Shifting, and Explainability

  • Peter Wadie; 
  • Bishoy Zakher; 
  • Khalid Elgazzar; 
  • Abdulhamid Alsbakhi; 
  • Abdul-Mohsen G. Alhejaily

ABSTRACT

Background:

Artificial intelligence (AI) integrated with point-of-care (POC) imaging has emerged as a promising approach to expand access to diagnostic capabilities in settings with limited specialist access. However, no systematic review has comprehensively evaluated AI-assisted clinical decision support across multiple POC imaging modalities, assessed explainability implementation, or quantified clinical impact evidence gaps.

Objective:

To systematically identify, evaluate, and synthesize evidence on AI-based clinical decision support systems that use point-of-care imaging for diagnosis, with particular attention to task-shifting potential, explainability implementation, and clinical outcome evidence.

Methods:

We searched PubMed, Scopus, IEEE Xplore, and Web of Science (January 2018 to November 2025). We included studies evaluating AI/machine learning systems applied to POC-capable imaging modalities in POC clinical settings with clinical decision support outputs. Two reviewers independently screened studies, extracted data across 15 domains, and assessed methodological quality using the QUADAS-2 tool. We developed novel frameworks to evaluate explainability implementation (XAI cascade) and clinical impact evidence (impact pyramid). Owing to substantial heterogeneity in the data, we performed a narrative synthesis.

Results:

Of 2,113 records identified, 20 studies met inclusion criteria, encompassing approximately 78,296 patients across 15 countries. Studies evaluated tuberculosis (n=5), breast cancer (n=3), deep vein thrombosis (n=2), and nine other conditions using ultrasound (n=7, 35%), chest X-ray (n=5, 25%), photography-based and colposcopic imaging (n=3, 15%), fundus photography (n=2, 10%), microscopy (n=2, 10%), and dermoscopy (n=1, 5%). Median sensitivity was 92% (IQR 85.7-98.0%), and median specificity was 90.6% (IQR 70.0-95.7%). Task-shifting was demonstrated in 65% of studies (n=13), with non-specialists achieving specialist-level performance after a median of 1 hour of training. The XAI implementation cascade revealed critical gaps: 15 of 20 studies (75%) did not mention explainability, 2 studies (10%) provided explanations to users, and no studies (0%) evaluated whether clinicians understood the explanations or whether XAI influenced their decisions. The clinical impact pyramid showed 3 studies (15%) reported technical accuracy only, 13 studies (65%) reported process outcomes, 4 studies (20%) documented clinical actions, and no studies (0%) measured patient outcomes. Methodological quality was concerning, as 14 of 20 studies (70%) were at high or very high risk of bias, with verification bias (n=14, 70%) and selection bias (n=10, 50%) being the most common. The overall certainty of evidence was very low (GRADE ⊕◯◯◯), primarily due to risk of bias, heterogeneity, and imprecision.

Conclusions:

AI-assisted POC imaging demonstrates promising diagnostic accuracy and enables meaningful task-shifting with remarkably minimal training requirements. However, the field exhibits critical evidence gaps, including a complete absence of patient outcome measurement, inadequate evaluation of explainability, regulatory misalignment, and a lack of cross-context validation despite claims of global applicability. These gaps must be addressed through implementation research with patient outcome endpoints, rigorous XAI evaluation, and multi-context validation studies before widespread clinical adoption. Review limitations include restriction to English-language publications, exclusion of grey literature, and substantial heterogeneity that precluded meta-analysis. Clinical Trial: This review was not registered.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.