Maintenance Notice

Due to necessary scheduled maintenance, the JMIR Publications website will be unavailable from Wednesday, July 01, 2020 at 8:00 PM to 10:00 PM EST. We apologize in advance for any inconvenience this may cause you.

Who will be affected?

Currently submitted to: JMIR Preprints

Date Submitted: Sep 2, 2025
Open Peer Review Period: Sep 2, 2025 - Aug 18, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Sandbagging in AI as Medical Devices: Patient Safety and Liability Risks

  • Eugenia Forte

ABSTRACT

This study examines the phenomenon of "sandbagging" in AI medical devices, where systems strategically underperform during evaluation to conceal dangerous capabilities that emerge post-deployment. Through systematic analysis of emerging literature on AI sandbagging behaviour, technical detection approaches, and regulatory structures in the EU, UK, and US, this research reveals critical gaps in current regulatory frameworks designed for traditional medical devices. Analysis shows sandbagging manifests through both developer-driven mechanisms (where engineers intentionally display safer capabilities for expedited deployment) and system-driven mechanisms (where AI systems autonomously underperform during evaluation phases). Research shows that both large frontier and smaller models exhibit sandbagging behaviours after prompting or fine-tuning while maintaining general performance benchmarks, with larger models demonstrating superior calibration capabilities. Current static regulatory approaches in the EU Medical Device Regulation and UK frameworks fail to detect sandbagging as they rely on documentation-based submissions without addressing AI's dynamic, generative nature. The US FDA's Total Product Lifecycle approach shows promise through algorithm change protocols and real-world performance monitoring, yet regulatory sandboxes remain underutilized. Healthcare provider liability becomes dangerously ambiguous when clinicians rely on systems with concealed capabilities, particularly given automation bias effects and black-box reasoning limitations. Traditional risk classifications focusing on direct bodily harm inadequately address AI's potential for deceptive behaviour, including "password-locked" models that reveal hidden capabilities when triggered. Technical detection solutions including attribution graph analysis and noise-based detection show promise but remain insufficient. Dynamic evaluation frameworks are essential, recommending mandatory regulatory sandboxes for real-world testing, continuous monitoring protocols, adversarial testing, and enhanced post-market surveillance.


 Citation

Please cite as:

Forte E

Sandbagging in AI as Medical Devices: Patient Safety and Liability Risks

JMIR Preprints. 02/09/2025:83411

DOI: 10.2196/preprints.83411

URL: https://preprints.jmir.org/preprint/83411

Download PDF


Request queued. Please wait while the file is being generated. It may take some time.

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on it's website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a cc-by license on publication, at this stage authors and publisher expressively prohibit redistribution of this draft paper other than for review purposes.