Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 10, 2025
Open Peer Review Period: Mar 11, 2025 - May 6, 2025
Date Accepted: May 2, 2025
Stakeholder Perspectives on Trustworthy Artificial Intelligence for Parkinson’s Disease Management Using a Co-creation Approach: A Qualitative Study
ABSTRACT
Background:
Parkinson’s Disease (PD) is the fastest-growing neurodegenerative disorder in the world, with prevalence expected to exceed 12 million by 2040, posing significant healthcare and societal challenges. Artificial intelligence (AI) systems and wearable sensors hold potential for PD diagnosis, personalized symptom monitoring, and progression prediction. Nonetheless, ethical AI adoption requires adherence to several core principles, including user trust, transparency, fairness, and human oversight.
Objective:
This study aimed to gather and analyze the perspectives of key stakeholders, including individuals with PD, healthcare professionals, AI technical experts, and bioethical experts, to inform the design of trustworthy AI-based digital solutions for PD diagnosis and management within the AI-PROGNOSIS European project.
Methods:
An exploratory qualitative approach, based on two datasets constructed from co-creation workshops, engaged key stakeholders with diverse expertise, ensuring a broad range of perspectives and enriching the thematic analysis. A total of 23 participants took part in the co-creation workshops, including 11 people with PD, six healthcare professionals, three AI technical experts, one bioethics expert, and three facilitators. Using a semi-structured guide, the discussions centered on trust, fairness, explainability, autonomy, and the psychological impact of AI in PD care.
Results:
Thematic analysis of the co-creation workshop transcripts identified five main themes, each explored through corresponding subthemes. AI Trust and Security (Theme 1) focused on data safety and the accuracy and reliability of AI systems. AI Transparency and Education (Theme 2) emphasized the need for educational initiatives and the importance of transparency and explainability of AI technologies. AI Bias (Theme 3) addressed issues of bias and fairness and the need to ensure equitable access to AI-driven healthcare solutions. Human Oversight (Theme 4) stressed the significance of AI-human collaboration and the essential role of human review in AI processes. Lastly, AI Psychological Impact (Theme 5) examined the emotional impact of AI on patients and how AI is perceived in the context of PD care.
Conclusions:
Our findings underline the importance of implementing robust security measures, developing transparent and explainable AI models, reinforcing bias mitigation strategies and equitable access to treatment, integrating human oversight, and considering the psychological impact of AI-assisted healthcare. These insights provide actionable guidance for developing trustworthy and effective AI-driven digital solutions for PD diagnosis and management. Clinical Trial: N/A
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.