Accepted for/Published in: JMIR AI
Date Submitted: Sep 18, 2025
Open Peer Review Period: Sep 18, 2025 - Nov 13, 2025
Date Accepted: Apr 17, 2026
Public expectations for FDA approval of AI-Clinical Decision Support tools: Quantitative study
ABSTRACT
Background:
Regulation of artificial intelligence (AI) has lagged behind the pace of its integration into healthcare. Several AI diagnostic tools for diabetic retinopathy (DR) have already received FDA clearance, making DR a timely and concrete example for exploring public perspectives on regulatory approval. The scope of FDA regulation of AI tools is still being defined, and public attitudes about regulatory oversight, which this paper explores, should inform these discussions. Prior research suggests that comfort, trust, and political orientation shape views on government regulation and emerging technologies, potentially affecting support for oversight of AI in healthcare.
Objective:
This study assessed the perceived importance of FDA approval for AI-supported clinical decision support (AI-CDS) tools, with DR as the use case. We explored how comfort with AI tool developers, trust in data sharing, political affiliation, and demographic characteristics relate to the perceived importance of FDA approval among U.S. adults.
Methods:
A national survey was conducted in 2023 using the NORC AmeriSpeak Panel, a probability-based sample of 1,787 respondents; a subset of 982 participants answered questions about a use case describing an AI tool for identifying DR. Participants rated the importance of FDA approval for such tools on a 4-point Likert scale, and responses were dichotomized into high and low perceived importance. Logistic regression models assessed associations between this outcome and predictors including comfort with AI tool developers, trust in data sharing, political affiliation, and demographic characteristics.
Results:
Among the 982 respondents presented with the DR use case, 67% indicated that FDA approval was "fairly" or "very" important. Factors significantly associated with the outcome ("It is important that the AI tool is approved by the FDA") included higher comfort with using the tool (OR=1.44, P=.006) and comfort with developers from private companies (OR=1.38, P=.008) and hospitals (OR=1.60, P<.001). Trust in responsible data sharing (OR=1.25, P=.013) and higher education (OR=1.64, P=.043) also predicted higher support. Lean/strong Republicans (OR=0.43, P<.001) and Independents (OR=0.63, P=.033) were less likely to view FDA approval as important, as were Black (OR=0.50, P<.001) and Hispanic (OR=0.57, P=.007) respondents compared with White respondents.
Conclusions:
This study offers insight into public attitudes regarding FDA oversight of AI-CDS tools. The findings highlight how comfort, trust, and lower confidence among marginalized communities and some political groups shape the perceived importance of FDA approval, with implications for broader healthcare AI governance. These factors should be considered as health systems work to ensure trustworthy implementation of new AI technologies.
Citation
Per the author's request, the PDF is not available.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.