Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jun 30, 2025
Date Accepted: Nov 25, 2025
The Ethics of Leveraging Routinely Collected Patient Data for AI Development: A Mixed Methods Study
ABSTRACT
Background:
Routinely collected patient data offers great potential for medical research and the development of artificial intelligence (AI) tools. However, because this data is primarily gathered for clinical care rather than research, it often lacks the quality needed for AI training, raising both methodological and ethical concerns. While previous studies have reviewed the ethical implications of both routinely collected patient data and AI separately, their intersection, where AI is applied to such data, remains largely unexplored.
Objective:
This paper addresses the ethical challenges that arise at the intersection of routinely collected patient data and AI development, an area that has received limited attention despite its growing significance.
Methods:
This study used a mixed-methods approach, combining a scoping literature review with a systematic search and two stakeholder workshops conducted as part of the LEAPfROG project. The workshops followed the ‘Guidance Ethics Approach’ and focused on the ethical implications of using routinely collected patient data in AI-driven research, with a case study on an AI-based clinical decision-support system for detecting drug-induced acute kidney injury.
Results:
Findings from the literature and stakeholder discussions point to the risk of decontextualization when routinely collected patient data is reused for AI-driven research, which can result in harm and reduced clinical relevance. The concept of clinical tropism, identified in the literature, describes how AI systems may reinforce existing clinical norms rather than generate new insights. Stakeholders expressed related concerns about generic model outputs and emphasized the ethical importance of clearly defining and agreeing on the purpose of data reuse to foster trust among patients, clinicians, and researchers.
Conclusions:
We argue that responsible AI development requires explicit attention to how routinely collected patient data is interpreted, mobilized, and governed in practice. Rather than relying on top-down regulation or fixed ethical principles, we advocate for stakeholder-centered and stakeholder-led approaches. These approaches should involve those most affected by the specific use case at hand, including patients, clinicians, and data custodians, in shaping the purpose, risks, and benefits of data use for AI development.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.