Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jun 24, 2025
Open Peer Review Period: Jun 24, 2025 - Aug 19, 2025
Date Accepted: Dec 29, 2025

The final, peer-reviewed published version of this preprint can be found here:

Fantus S, Li J, Wang T, Tang L

Ethical Knowledge, Challenges, and Institutional Strategies Among Medical AI Developers and Researchers: Focus Group Study

J Med Internet Res 2026;28:e79613

DOI: 10.2196/79613

PMID: 41604668

PMCID: 12895146

From Awareness to Action: Exploring Ethical Knowledge, Challenges, and Institutional Strategies Among Medical AI Developers and Researchers

  • Sophia Fantus; 
  • Jinxu Li; 
  • Tianci Wang; 
  • Lu Tang

ABSTRACT

Background:

As artificial intelligence (AI) becomes increasingly embedded in clinical decision-making and preventive care, it is urgent to address ethical concerns such as bias, privacy, and transparency to protect clinician and patient populations. Although prior research has examined the perspectives of medical AI stakeholders, including clinicians, patients, and health system leaders, far less is known about how medical AI developers and researchers understand and engage with ethical challenges as they develop AI tools. This gap is consequential because developers’ ethical awareness, decision-making, and institutional environments influence how AI tools are conceptualized and deployed in practice. Thus, it is essential to understand how developers perceive these issues and what supports they identify as necessary for ethical AI development.

Objective:

The objectives of the study were twofold: (1) to examine medical AI developers’ and researchers’ knowledge, attitudes, and experiences with AI ethics; and (2) to identify recommendations to enhance and strengthen interpersonal and institutional ethics-focused training and support.

Methods:

We conducted two semi-structured focus groups (60-90 minutes each) in 2024 with 13 AI developers and researchers affiliated with five U.S.-based academic institutions. Participants' work spanned a wide variety of medical AI applications, including Alzheimer's disease prediction, clinical imaging, electronic health records analysis, digital health, counseling and behavioral health, and genotype–phenotype modeling. Focus groups were conducted via Microsoft Teams and were recorded and transcribed verbatim. We applied conventional qualitative content analysis to inductively identify emerging concepts, categories, and themes. Three researchers coded the transcripts independently, and consensus was reached through iterative team meetings.

Results:

The analysis identified four key themes. (1) AI ethics knowledge acquisition: Participants reported learning about ethics informally through peer-reviewed literature, reviewer feedback, social media, and mentorship rather than through structured training. (2) Ethical encounters: Participants described recurring ethical challenges related to data bias, patient privacy, generative AI use, commercialization pressures, and a tendency for research environments to prioritize model accuracy over ethical reflection. (3) Reflections on ethical implications: Participants expressed concern about downstream effects on patient care, clinician autonomy, and model generalizability, noting that rapid technological innovation outpaces regulatory and evaluative processes. (4) Strategies to mitigate ethical concerns: Recommendations included clearer institutional guidelines, ethics checklists, interdisciplinary collaboration, multi-institutional data sharing, enhanced IRB support, and the inclusion of bioethicists as members of the AI research team.

Conclusions:

Medical AI developers and researchers recognize significant ethical challenges in their work but lack structured training, resources, and institutional mechanisms to address them. The findings of this study underscore the need for institutions to embed ethics into research processes through practical tools, mentorship, and interdisciplinary partnerships. Strengthening these supports is essential to preparing the next generation of developers to design and deploy ethical AI in healthcare.



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.