Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Balancing Hope and Harm: A Qualitative Exploration of Ethical Aspects of using AI in Parkinson’s Disease
ABSTRACT
Background:
As Parkinson’s disease (PD) rates increase, so does interest in finding new technological solutions for PD management. Despite substantial efforts to explore potential applications of AI in PD management, research on the perspectives of people with PD (PwP) regarding AI remains limited.
Objective:
To explore ethical considerations of AI in PD management from the perspective of PwP.
Methods:
A qualitative triangulation of 13 interviews and two focus groups with an expert panel of PwP from six European countries was carried out using abductive thematic analysis. The biomedical ethical principles conceptualized by Beauchamp and Childress guided the analysis. Participants varied in diagnosis, disease experience, and technological background. A researcher with PD was involved from start to finish, providing valuable insights into data collection and analysis.
Results:
Participants were optimistic that AI could enhance autonomy and beneficence through personalized, actionable insights for PwP and their healthcare professionals, but concerns arose over patient involvement, model accuracy and privacy, ethical injustices, and psychological impact. Risk prediction, prognosis, and medication response prediction were viewed differently in terms of potential value and ethical considerations, with risk prediction perceived as the most ethically complex. To uphold autonomy, participants considered it important that AI insights be accessible to patients and that sensitive insights be communicated by a healthcare professional who recognizes individual differences in desiring and responding to AI predictions.
Conclusions:
While PwP felt AI could personalize (self-)care and increase autonomy, concerns about psychological harm and widening inequalities highlight the importance of ethical safeguards. Our findings underscore the need for AI integration that prioritizes individual needs, active involvement of PwP in the development, implementation, and interpretation of predictive AI, and guidelines that assist healthcare professionals and prevent patient harm. Different implementation approaches and precautions should be adopted for risk, progression, and medication response prediction.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.