Currently submitted to: JMIR Preprints
Date Submitted: Apr 22, 2026
Open Peer Review Period: Apr 22, 2026 - Jun 22, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Silent Risk: Security Vulnerabilities in AI-Enabled Pharmacovigilance Systems
ABSTRACT
The integration of large language models and agentic artificial intelligence (AI) into pharmacovigilance (PV) workflows introduces adversarial vulnerabilities that conventional IT security and standard GxP validation frameworks do not adequately address. Unlike infrastructure breaches, attacks on a model's analytical judgment can alter seriousness assessments, suppress safety signals, or distort aggregate reports while case records remain intact and operational metrics appear normal. This paper uses structured threat modelling and war-gaming to map nine adversarial attack classes onto PV intake, signal detection, and regulatory submission workflows: prompt injection, data poisoning, adversarial text and multimodal manipulation, supply-chain compromise, model extraction, context-window manipulation, model inversion and membership inference, jailbreaking, and denial of service. Scenarios are grounded in adversarial machine learning research and assessed for structural plausibility rather than presented as confirmed PV incidents. The paper argues that PV faces distinctive systemic amplifiers: mandatory Individual Case Safety Report (ICSR) submission and data exchange between marketing authorization holders (MAHs) may propagate manipulated content across organizations; GxP validation of version-locked systems can delay remediation; and agentic architectures can convert local model failures into executed actions across safety databases, follow-up communications, and submission workflows. A three-tier defense architecture is proposed, spanning procedural controls, adversarial testing integrated into validation, deterministic enforcement for agentic workflows, and frontier measures such as AI red teaming and federated anomaly detection.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.