Accepted for/Published in: Journal of Participatory Medicine
Date Submitted: Dec 2, 2024
Date Accepted: Jun 2, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Patient Perspective on Artificial Intelligence in Healthcare: Insights for Diagnostic Communication and Tool Implementation
ABSTRACT
Background:
Artificial intelligence (AI) is rapidly transforming healthcare, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication.
Objective:
To examine patient perspectives on AI use in healthcare, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools.
Methods:
A co-design focus group workshop was conducted with 17 participants (patients and family members) aged 18-80. The session included interactive activities, discussions, and guideline development exploring five AI scenarios: (1) Patient Portal Messaging, (2) Radiological Imaging, (3) Ambient Digital Scribe, (4) Virtual Human Telehealth Call, and (5) Clinical Decision Support for HIV Testing. Thematic analysis was used to analyze transcripts and facilitator notes.
Results:
Participants reported varying comfort levels with AI applications: higher comfort with AI tools involving less direct patient interaction, such as ambient digital scribes and radiology image readers, and lower comfort with those involving more direct interaction, such as virtual human telehealth calls. Five key themes regarding patient perspectives on AI emerged: (1) Concerns Around Model Development and Validation, (2) Concerns Around AI Systems for Patients and Providers, (3) Expectations Around Disclosure of AI Usage, (4) Excitement and Opportunities for AI to Better Address Patient Needs, and (5) Patient Concerns Around Data Protection, Privacy, and Security. Participants emphasized the importance of transparency in AI development and validation, preferred AI as a supplementary tool rather than a replacement for human clinicians, and stressed the need for clear communication about AI's role in their care. They also highlighted the potential for AI to enhance patient understanding and engagement while expressing concerns about data security and privacy.
Conclusions:
This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in healthcare. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, healthcare organizations, and policymakers to ensure responsible and patient-centered AI deployment in healthcare.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.