Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jun 25, 2023
Date Accepted: Apr 26, 2024

The final, peer-reviewed published version of this preprint can be found here:

Trust but Verify: Lessons Learned for the Application of AI to Case-Based Clinical Decision-Making From Postmarketing Drug Safety Assessment at the US Food and Drug Administration

Ball R, Talal AH, Dang O, Muñoz M, Markatou M

Trust but Verify: Lessons Learned for the Application of AI to Case-Based Clinical Decision-Making From Postmarketing Drug Safety Assessment at the US Food and Drug Administration

J Med Internet Res 2024;26:e50274

DOI: 10.2196/50274

PMID: 38842929

PMCID: 11190620

Trust but Verify: Lessons learned for application of artificial intelligence to case-based clinical decision making from post-marketing drug safety assessment at the US Food and Drug Administration

  • Robert Ball; 
  • Andrew H. Talal; 
  • Oanh Dang; 
  • Monica Muñoz; 
  • Marianthi Markatou

ABSTRACT

Adverse drug reactions (ADRs) are a common cause of morbidity in health care. The US Food and Drug Administration (FDA) evaluates reports of adverse events (AEs) after submission to the FDA Adverse Event Reporting System (FAERS) as part of its surveillance activities. Over the past decade, the FDA has explored the application of artificial intelligence (AI) to evaluate these reports to improve the efficiency and scientific rigor of the process. A gap remains, however, between AI algorithm development and deployment. We apply Diffusion of Innovations theory to help explain why certain algorithms for evaluating AEs at the FDA were accepted by safety reviewers and others were not. Two key lessons stand out. First, the trustworthiness of an AI algorithm is the main determinant of its acceptance by human experts. Second, the process by which clinicians decide from case reports whether a drug is likely to cause an adverse event is not well defined beyond general principles. This makes the development of high-performing, transparent, and explainable AI algorithms challenging, leading to a lack of trust by safety reviewers. Even accounting for the introduction of large language models, the pharmacovigilance community needs an improved understanding of causal inference and of the cognitive framework for determining the causal relationship between a drug and an adverse event. We describe specific future research directions that could facilitate implementation of, and trust in, AI for drug safety applications, including improved methods for measuring and controlling algorithmic uncertainty, computational reproducibility, and clear articulation of a cognitive framework for causal inference in case-based reasoning.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.