Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jul 15, 2025
Date Accepted: Apr 30, 2026
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
A Multi-Assessment and Multi-Professional Agents Approach for Medical Chatbot Risk Estimation: A Development and Evaluation Study
ABSTRACT
Background:
Assessing chatbot responses across three risk domains (medical, ethical, and legal) is essential for ensuring the safe use of AI in healthcare. While advances in large language models (LLMs) have substantially improved the evaluation of question-answer datasets such as multiple-choice medical exams, existing systems use general-purpose LLMs without applying specialized domain knowledge, rely on standardized instructions without integrating real-world information, and implement ensemble methods such as majority voting that fail to resolve disagreements among agents, resulting in misclassification and difficulty in assessing risk.
Objective:
This study aims to design, develop, and evaluate a synergistic approach for assessing risks associated with chatbot responses using multi-assessment and multi-professional agents.
Methods:
We designed and developed a multi-assessment, multi-professional agent approach consisting of three stages: an Initial Assessment (MA1), in which a single agent internalizes the three professional roles and provides an initial risk estimation; a Verification Assessment (MA2), which incorporates a separate professional agent for each risk domain (medical, ethical, and legal); and a Final Assessment (MA3), which aims to reach a final consensus based on the previous assessments (MA1 and MA2). Each stage uses one LLM. The proposed approach was evaluated against different systems (baseline, enhanced prompt, embedding-based search, and retrieval-augmented generation [RAG]) using metrics such as macro F1-score, joint accuracy, and delta (Δ).
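The three-stage pipeline described above can be sketched in outline. This is a minimal illustration, not the authors' implementation: the `call_llm` stub, role names, prompt wording, and the agreement/adjudication rule in MA3 are all assumptions made for the sake of a runnable example; a real system would query an actual LLM at each stage.

```python
from typing import Dict

DOMAINS = ["medical", "ethical", "legal"]

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for an LLM call. A real system would query a model API
    here; we return a fixed low-risk label so the sketch is runnable."""
    return "low"

def initial_assessment(response: str) -> Dict[str, str]:
    """MA1: one agent internalizes all three professional roles and
    returns an initial risk label for each domain."""
    return {d: call_llm("generalist", f"Rate the {d} risk of: {response}")
            for d in DOMAINS}

def verification_assessment(response: str, ma1: Dict[str, str]) -> Dict[str, str]:
    """MA2: a specialized professional agent per risk domain verifies
    the corresponding MA1 label."""
    return {
        d: call_llm(f"{d}-professional",
                    f"MA1 rated {d} risk as {ma1[d]!r}. Verify for: {response}")
        for d in DOMAINS
    }

def final_assessment(response: str,
                     ma1: Dict[str, str],
                     ma2: Dict[str, str]) -> Dict[str, str]:
    """MA3: a final agent reconciles MA1 and MA2 into a consensus label
    instead of taking a simple majority vote."""
    final = {}
    for d in DOMAINS:
        if ma1[d] == ma2[d]:
            final[d] = ma1[d]              # agreement: accept the label
        else:
            final[d] = call_llm(           # disagreement: adjudicate
                "arbiter",
                f"Resolve {d} risk for {response!r}: MA1={ma1[d]}, MA2={ma2[d]}")
    return final

response = "You can safely double your medication dose."
ma1 = initial_assessment(response)
ma2 = verification_assessment(response, ma1)
print(final_assessment(response, ma1, ma2))
```

The key design point this sketch mirrors is that MA3 resolves disagreements by reasoning over both prior assessments rather than by majority voting, which the Background identifies as a weakness of existing systems.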
Results:
The proposed approach demonstrates a significant improvement over existing systems in assessing the risk of chatbot responses, with a 0.25 increase in the ethical risk domain and a 0.10 increase in the legal risk domain. This indicates that the proposed approach, when applied in systems with external knowledge, helps improve risk estimation. The medical domain remains challenging but shows a slight improvement of 0.07.
Conclusions:
A multi-assessment and multi-professional agent approach is effective for estimating the risk of chatbot responses. These findings highlight the potential of the approach and motivate the development of a specialized LLM for more robust and contextually grounded risk estimation.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.