Accepted for/Published in: JMIR Formative Research
Date Submitted: Aug 15, 2025
Date Accepted: Nov 24, 2025
Date Submitted to PubMed: Dec 19, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Physician Evaluations of Large Language Model-Generated Responses to Medical Questions by Region and Years in Practice: A Preliminary Study
ABSTRACT
Background:
Large language models (LLMs) have demonstrated the ability to generate clinically accurate responses to patient questions, in some cases outperforming physicians. However, little is known about how physician evaluations of such responses vary across geographic regions and by years in clinical practice.
Objective:
Building on prior work, this study compared responses to patient questions generated by two general-purpose LLMs with physician-authored responses in an international sample of physicians, who were asked to rank the responses on accuracy and responsiveness.
Methods:
We conducted a survey to assess physician preferences between artificial intelligence (AI)-generated and human-generated responses to patient questions from the r/AskDocs subreddit. Participants reviewed anonymized answers from ChatGPT-4.0, Meta.AI, and a verified physician, ranking each from best (1) to worst (3). Respondent characteristics were summarized descriptively. The primary outcome was the mean rank of each response type. Sensitivity analyses included pairwise win proportions and visualizations of the full rank distributions.
Results:
Fifty-two physicians completed the survey. Most respondents were male (78.8%), aged 25–34 years (53.8%), and based in North America (48.1%) or Africa (25.0%); over half (53.8%) had less than 5 years of clinical experience. Across all regions, ChatGPT-4.0 and Meta.AI responses were preferred over physician-authored responses: ChatGPT-4.0 ranked highest in Africa, Asia, Asia Pacific, and North America, while Meta.AI was slightly favored in Europe and the Americas. By years in practice, AI-generated responses consistently outperformed physician responses; ChatGPT-4.0 was most preferred among respondents with less than 15 years of experience and showed its greatest advantage in the 10–15 year group.
Conclusions:
In our global sample, most physicians preferred LLM-generated responses over those written by human contributors. However, preferences varied by geographic region and years in clinical practice, suggesting that both cultural and experiential factors shape physician attitudes toward AI. These preliminary findings highlight the need for larger, adequately powered studies to test for statistically significant differences and interactions across subgroups. Such research is essential to inform context-specific strategies for integrating AI into patient-facing communication.
Clinical Trial: N/A
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.