Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 24, 2023
Open Peer Review Period: Mar 23, 2023 - May 18, 2023
Date Accepted: Nov 20, 2023
Security Implications of Artificial Intelligence Chatbots in Healthcare
ABSTRACT
Artificial intelligence (AI) chatbots such as ChatGPT are computer programs that use AI and natural language processing to understand questions and generate natural, fluid, dialogue-like responses. ChatGPT, an AI chatbot created by OpenAI, has rapidly become a widely used tool on the internet. AI chatbots have the potential to improve patient care and public health, but most AI models require massive amounts of people's data for training and improvement. This growing use of chatbots introduces data security issues that must be addressed yet remain understudied. The purpose of this article is to highlight the security problems of AI chatbots and to propose guidelines to help protect personal health information. The article explores the impact of using ChatGPT in healthcare, identifies the security risks of ChatGPT, proposes security safeguards to mitigate these risks, and concludes by discussing the policy implications of using AI chatbots in healthcare.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.