Accepted for/Published in: JMIR Formative Research
Date Submitted: Dec 7, 2025
Date Accepted: Mar 4, 2026
Date Submitted to PubMed: Mar 5, 2026
Public Perceptions of Artificial Intelligence in Medicine and Implications for Future Medical Education: A Cross-Sectional Survey
ABSTRACT
Background:
The integration of artificial intelligence (AI) into clinical practice is contingent on public trust. This trust often depends on physician oversight, yet a significant gap exists between the need for AI-competent physicians and the current state of medical education. While the perspectives of students and experts on this gap are known, the views of the US general public remain largely unquantified.
Objective:
This study aimed to assess US public perceptions regarding AI in medicine and the corresponding, emergent needs for medical education. We specifically sought to quantify public trust in different diagnostic scenarios, concerns about physician over-reliance on AI, support for mandatory AI education, and priorities for the future focus of medical training.
Methods:
We conducted a cross-sectional, web-based survey of US adults in November 2025. Participants (N=524) were recruited via SurveyMonkey Audience. We calculated descriptive statistics, frequencies (n), proportions (%), and 95% confidence intervals (CIs) for all main survey items.
Results:
A total of 524 participants completed the survey. A majority (62.8%, 329/524; 95% CI 58.6%–66.9%) placed the most trust in a physician's diagnosis based on their expertise alone; only 7.8% (41/524; 95% CI 5.5%–10.1%) trusted an AI-first diagnostic model. Trust was highly contingent on training: 93.9% (492/524) of participants rated formal physician training on AI limitations as "Essential" or "Very important." Widespread concern about physician over-reliance on AI was reported, with 81.1% (425/524) being "Very" or "Extremely concerned." Consequently, 85.2% (446/524) agreed or strongly agreed that training on AI use, ethics, and limitations should be mandatory in medical school. When asked about future educational priorities, 70.2% (368/524; 95% CI 66.3%–74.1%) believed medical education should focus on human-centered skills (eg, empathy, communication) over clinical skills.
Conclusions:
The US public expresses conditional trust in medical AI, strongly preferring physician-led and critically supervised models. These findings reveal a clear public mandate for medical education reform. The public expects future physicians to be mandatorily trained to appraise AI, understand its limitations, and refocus their professional development on the human-centered skills that technology cannot replace. Clinical Trial: -
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.