Cross-Sectional Evaluation of Medical Disinformation Safeguards in Consumer-Facing Large Language Model Platforms
Natansh D Modi; Cyril A Alex; Abdulhalim A Awaty; Bradley D Menz; Stephen M Bacchi; Kacper T Gradon; Jessica M Logan; Andrew Rowland; Lisa Kalisch Ellet; Ross A McKinnon; Michael D Wiese; Michael J Sorich; Ashley M Hopkins
ABSTRACT
Consumer-facing LLM platforms are increasingly used for health information, yet their safeguards can be probed to generate persuasive health disinformation, particularly via indirect, narrative-style prompts. In a 90-prompt audit spanning six health topics and six platforms (November 27–30, 2025), ChatGPT and Claude produced no disinformation, while Copilot, Meta AI, Grok, and Gemini showed substantial vulnerabilities under prompt obfuscation, highlighting the need for continuous safeguard evaluation.
Citation
Please cite as:
Modi ND, Alex CA, Awaty AA, Menz BD, Bacchi SM, Gradon KT, Logan JM, Rowland A, Kalisch Ellet L, McKinnon RA, Wiese MD, Sorich MJ, Hopkins AM
Cross-Sectional Evaluation of Medical Disinformation Safeguards in Consumer-Facing Large Language Model Platforms