Accepted for/Published in: JMIR Medical Education
Date Submitted: Oct 3, 2025
Date Accepted: Jan 7, 2026
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Understanding Clinicians’ Informational Needs for AI-driven Clinical Decision Support Systems: Qualitative Interview Study
ABSTRACT
Background:
Advancements in Artificial Intelligence (AI) are transforming healthcare, particularly through AI-driven Clinical Decision Support Systems (AI-CDSS) that aid in predicting disease progression and personalizing treatment. Despite their potential, adoption remains limited due to clinician concerns about algorithm misuse, misinterpretation, and lack of transparency.
Objective:
This qualitative study explores clinicians’ informational needs and preferences for understanding and appropriately using AI-CDSS in decision-making. In parallel, it explores AI experts’ perspectives on what information should be communicated to enable safe and appropriate use of AI-CDSS.
Methods:
A qualitative descriptive study was conducted using semi-structured interviews with 16 participants (8 clinicians and 8 AI experts). Discussions focused on experiences with AI, informational needs, and feedback on existing reporting standards, including Model Cards (Mitchell et al., 2019), Model Facts (Sendak et al., 2020), and the TRIPOD-AI checklist (Collins et al., 2015, 2024). The transcripts were analyzed through codebook thematic analysis.
Results:
Four key themes were identified: (i) Clinicians need clear information on training data, including its origin, size, and inclusion/exclusion criteria, to judge model applicability; (ii) Performance metrics must go beyond the area under the receiver operating characteristic curve (AUC) and be clinically relevant to support informed decisions; (iii) Limitations and warnings about inappropriate use should be specific and clearly communicated to prevent misuse; (iv) Information should be presented in layered, customizable formats within existing clinical software, avoiding unnecessary jargon and allowing optional deeper explanations. While each of the reviewed reporting standards offered strengths, none was considered sufficient on its own. Participants recommended a combined, clinician-centered approach to information delivery. Aligning reporting standards with clinical workflows and decision thresholds was considered crucial to bridge the current usability gap.
Conclusions:
To improve AI-CDSS adoption in clinical practice, reporting standards must be designed for better clinician comprehension and usability. Enhancing transparency, particularly regarding training data and performance, can likely help clinicians assess AI-CDSS more effectively. Information should be delivered in an accessible, layered format that fits clinical workflows. Co-creation with clinicians throughout AI-CDSS development was a cross-cutting theme, highlighting its importance in ensuring tools are not only technically sound but also practically usable. Future research should explore how to structure the reporting of performance and validation metrics for clinician understanding and assess the impact of information provision on AI-CDSS adoption.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.