
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jan 13, 2025
Date Accepted: Apr 21, 2025

The final, peer-reviewed published version of this preprint can be found here:

Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop

Goisauf M, Cano Abadía M, Akyüz K, Bobowicz M, Buyx A, Colussi I, Fritzsche MC, Lekadir K, Marttinen P, Mayrhofer MT, Meszaros J

J Med Internet Res 2025;27:e71236

DOI: 10.2196/71236

PMID: 40455564

PMCID: 12171647

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop

  • Melanie Goisauf
  • Mónica Cano Abadía
  • Kaya Akyüz
  • Maciej Bobowicz
  • Alena Buyx
  • Ilaria Colussi
  • Marie-Christine Fritzsche
  • Karim Lekadir
  • Pekka Marttinen
  • Michaela Th Mayrhofer
  • Janos Meszaros

ABSTRACT

Trustworthy AI has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles—such as fairness, robustness, and explainability—as essential components of trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to fostering AI systems that are both ethically sound and socially responsible. This article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in healthcare. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions in order to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical concepts related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI lifecycle emerged as crucial, supporting a human-centered, multi-stakeholder framework for trustworthy AI implementation. Findings emphasize that trustworthiness in medical AI depends on providing meaningful, user-oriented information and on balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. Trustworthiness was shown to be a relational process, involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated based on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the importance of developing AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated healthcare. Findings highlight that building AI trustworthiness in healthcare requires a human-centered, multi-stakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI lifecycle, from conception and technical development to clinical validation and real-world deployment.



Per the author's request, the PDF is not available.