Currently submitted to: JMIR AI

Date Submitted: Mar 19, 2026
Open Peer Review Period: Mar 27, 2026 - May 22, 2026
(currently open for review)

Warning: This is an author submission that has not been peer-reviewed or edited. Unless they show as "accepted," preprints should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

What Kind of Claims Are Transparency, Explainability, and Interpretability?: A Definitional Taxonomy for Health AI

  • Tessa Ringer

ABSTRACT

The terms transparency, explainability, and interpretability are ubiquitous in the health AI literature yet remain poorly and inconsistently defined, with fewer than 20% of the papers that use them offering meaningful definitions. This paper argues that the imprecision matters — these are normative concepts, not mere technical descriptors, and their under-theorisation leaves the field vulnerable to regulatory frameworks imposed by those who understand neither the science nor the stakes. Drawing on W.B. Gallie's concept of essential contestability, and on pragmatic models of agreement from Rawls, Lukes, and Mouffe, the paper proposes that convergence on fixed definitions is neither likely nor necessary; what is needed instead is clarity about what kind of claim each term represents. The paper advances a tripartite taxonomy. Transparency-claims are structural: they concern the accessibility and visibility of a system's internals, independent of whether any observer understands what they see. Explainability-claims are relational: they concern the successful epistemic mediation between system output and a cognitively situated human observer, and are irreducibly dependent on the properties of that observer. Interpretability-claims are functional: they concern cognitive simulability, demanding that a human observer be able to mentally re-derive the model's reasoning, achieving epistemic closure rather than mere comprehension. Each category is developed through a survey of competing theoretical approaches — from post-hoc additive frameworks and counterfactual explanations to concept bottleneck models and Rashomon set analysis — and defended against plausible objections that would collapse the distinctions. The paper does not offer definitions of the three concepts but rather a meta-framework for classifying the epistemic work each performs, with the aim of enabling more disciplined and productive disagreement about their content.


Citation

Please cite as:

Ringer T

What Kind of Claims Are Transparency, Explainability, and Interpretability?: A Definitional Taxonomy for Health AI

JMIR Preprints. 19/03/2026:95742

DOI: 10.2196/preprints.95742

URL: https://preprints.jmir.org/preprint/95742


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license upon publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.