Currently submitted to: JMIR Human Factors
Date Submitted: Mar 17, 2026
Open Peer Review Period: Mar 31, 2026 - May 26, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Decision Support That Respects Human Judgment: Designing for Trust, Transparency, and Accountability in Imaging Quality Control
ABSTRACT
Background:
Imaging quality control (QC) is a safety‑critical task in computed tomography (CT) operations, requiring technologists to assess image artifacts, protocol adherence, and acquisition quality under time pressure. Artificial intelligence (AI)–based decision support systems are increasingly introduced to assist these tasks; however, poorly designed systems can degrade trust, increase cognitive workload, and undermine human accountability. In regulated healthcare environments, decision support must not only improve performance but also preserve human judgment and responsibility.
Objective:
This study aimed to evaluate how human‑centered design features (specifically confidence bands, concise reason codes, and frictionless override controls) influence trust calibration, cognitive workload, and task performance when using AI‑assisted decision support for CT imaging quality control.
Methods:
A mixed‑methods field study was conducted at a single CT imaging site. Practicing CT technologists performed routine imaging QC tasks under two counterbalanced conditions: (1) a baseline workflow without AI assistance and (2) an AI‑assisted workflow using concept‑true decision support prototypes. The AI‑assisted interface presented recommendations with qualitative confidence bands, brief reason codes, and one‑click accept or override controls with audit logging. Quantitative measures included time‑on‑task, error and near‑miss rates, the System Usability Scale (SUS), and NASA Task Load Index (NASA‑TLX). Trust and reliance were assessed using Likert‑scale items and observed override behavior. Semi‑structured interviews were conducted to capture qualitative insights.
Results:
AI‑assisted decision support was associated with reduced time‑on‑task and lower error and near‑miss rates compared to the baseline workflow when transparent explanations and confidence indicators were provided. Cognitive workload, particularly mental and temporal demand, was lower in the AI‑assisted condition. Override behavior varied systematically with system confidence, indicating calibrated trust rather than blind reliance on automation. Qualitative findings highlighted the importance of visible reasoning, explicit uncertainty, and frictionless human control in supporting professional judgment.
Conclusions:
AI‑based decision support can improve CT imaging quality control without undermining human judgment when it is designed for trust, transparency, and accountability. Interfaces that make uncertainty explicit, explain recommendations, and preserve easy human override support calibrated reliance and reduce cognitive load. These findings offer practical human‑factors design guidance for deploying decision support in regulated healthcare operations.
Clinical Trial: Not applicable (non‑interventional field study).
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.