Currently submitted to: JMIR Preprints
Date Submitted: Mar 16, 2026
Open Peer Review Period: Mar 16, 2026 - Mar 1, 2027
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
A Framework for Trustworthy Healthcare AI: A Model-Type-Aware Minimum Evaluation and Reporting Standard
ABSTRACT
Background:
Artificial intelligence (AI) is rapidly reshaping healthcare by supporting earlier diagnosis, assisting clinical decision-making, and improving operational efficiency. However, most systems remain deployed within human-in-the-loop workflows, and hospitals lack a standardized framework to evaluate fairness, reliability, accuracy, and real-world safety. Prior failures illustrate how ambiguous objectives and unvalidated proxy targets can produce inequitable outcomes and erode clinical trust.
Objective:
This paper proposes a unified, model-type-aware minimum evaluation and reporting standard that can assess both traditional classification models and generative large language models (LLMs) through transparent reporting of performance metrics, subgroup fairness analyses, and hallucination detection.
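As a minimal sketch of the subgroup fairness reporting the standard calls for, the snippet below computes per-subgroup AUROC alongside the overall figure, so disparities are visible rather than hidden behind a single headline number. The synthetic data, variable names, and subgroup labels are hypothetical placeholders, not the paper's evaluation dataset.

```python
# Minimal sketch of subgroup fairness reporting (illustrative only).
# The data, columns (y_true, y_score, group), and subgroups are hypothetical;
# a real evaluation would use the model's held-out clinical test set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)                                 # ground-truth labels
y_score = np.clip(y_true * 0.3 + rng.random(n) * 0.7, 0, 1)    # model risk scores
group = rng.choice(["A", "B", "C"], n)                         # demographic subgroup

overall = roc_auc_score(y_true, y_score)
print(f"overall AUROC: {overall:.3f}")
for g in np.unique(group):
    mask = group == g
    auc = roc_auc_score(y_true[mask], y_score[mask])
    # Report each subgroup's AUROC and its gap from the overall figure,
    # so reviewers and purchasers can compare disparities across vendors.
    print(f"subgroup {g}: AUROC={auc:.3f}, gap={auc - overall:+.3f}")
```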
Methods:
We developed the framework by synthesizing recurring, documented failure modes of healthcare AI with widely used regulatory and risk-management concepts, iteratively mapping risks to concrete evidence artifacts that developers can produce, evaluators can audit, and purchasers can compare across vendors.
Results:
The resulting standard comprises three layers: universal disclosures applicable to all healthcare AI systems (U1–U5), minimum evaluation requirements for clinical ML models (C1–C6), and minimum evaluation requirements for LLM/RAG systems (G1–G6), supported by lifecycle governance expectations for post-deployment monitoring, versioning, and rollback.
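To make the three-layer structure concrete, the sketch below shows one way the disclosures could be captured as a machine-readable evidence checklist for cross-vendor comparison. The layer names and item IDs (U1–U5, C1–C6, G1–G6) come from the standard itself; the class names, fields, and coverage metric are illustrative assumptions, since the abstract does not enumerate the individual items.

```python
# Illustrative evidence checklist for the three-layer standard (assumption:
# the abstract defines only the item IDs, so per-item details are placeholders).
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    item_id: str            # e.g. "U1" or "G3"
    satisfied: bool = False
    artifact_uri: str = ""  # link to the evidence artifact the vendor supplies

@dataclass
class EvaluationReport:
    universal: list[EvidenceItem] = field(default_factory=list)    # U1-U5
    clinical_ml: list[EvidenceItem] = field(default_factory=list)  # C1-C6
    generative: list[EvidenceItem] = field(default_factory=list)   # G1-G6

    def coverage(self) -> float:
        # Fraction of checklist items with evidence attached.
        items = self.universal + self.clinical_ml + self.generative
        return sum(i.satisfied for i in items) / len(items) if items else 0.0

report = EvaluationReport(
    universal=[EvidenceItem(f"U{i}") for i in range(1, 6)],
    clinical_ml=[EvidenceItem(f"C{i}") for i in range(1, 7)],
    generative=[EvidenceItem(f"G{i}") for i in range(1, 7)],
)
report.universal[0].satisfied = True  # e.g., mark U1 as evidenced
print(f"checklist coverage: {report.coverage():.0%}")
```

A structure like this would let purchasers audit the same artifact set across vendors, which is the comparability the standard aims at.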
Conclusions:
Current FDA pathways provide a foundation but remain insufficient for governing continual-learning systems and generative models in clinical workflows. We propose that the FDA extend these mechanisms to require mandatory disclosure of training data provenance and standardized benchmarks for clinical safety and relevance. Establishing such a framework is crucial to ensuring that advances in AI deliver safe and trustworthy healthcare.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.