Currently submitted to: JMIR AI
Date Submitted: Mar 19, 2026
Open Peer Review Period: Mar 30, 2026 - May 25, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Confidential Zero-Trust Framework for Securing Generative AI in Healthcare on Google Cloud: An Architectural Blueprint
ABSTRACT
Background:
The integration of Generative Artificial Intelligence (GenAI) in healthcare is impeded by significant security challenges that traditional frameworks do not address, specifically the “data-in-use” gap, in which sensitive patient data and proprietary AI models are exposed during active processing.
Objective:
To propose the Confidential Zero-Trust Framework (CZF), a novel security paradigm designed to address the data-in-use gap for GenAI healthcare workloads.
Methods:
We analyzed the healthcare threat landscape, regulatory requirements (such as HIPAA and the GDPR), and the failure modes of traditional security architectures. Based on this analysis, we developed a multi-tiered architectural blueprint that combines Zero-Trust Architecture, for granular access control, with the hardware-enforced data isolation of Confidential Computing.
Results:
We detailed a blueprint for implementing the CZF on Google Cloud. The CZF provides a defense-in-depth architecture in which data remains encrypted while in use within a hardware-based Trusted Execution Environment (TEE). The framework’s use of remote attestation offers cryptographic proof of workload integrity, transforming compliance into a verifiable technical fact and enabling secure, multi-party collaborations previously blocked by security and intellectual property concerns.
Conclusions:
By closing the data-in-use gap and enforcing Zero-Trust principles, the CZF provides a robust and verifiable framework that establishes the necessary foundation of trust to enable the responsible adoption of transformative AI technologies in healthcare.
Clinical Trial: n/a
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.