Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jul 15, 2025
Date Accepted: Apr 30, 2026
A Multi-Assessment and Multi-Professional Agents Approach for Medical Chatbot Risk Estimation: A Development and Evaluation Study
ABSTRACT
Background:
Assessing chatbot responses across three risk domains (medical, ethical, and legal) is essential to ensuring the safe use of AI in healthcare. While large language models (LLMs) have shown significant improvements in evaluating question-answer datasets through multiple-choice medical exams, existing systems use general-purpose LLMs without applying specialized domain knowledge, rely on standardized instructions without integrating real-world information, and implement ensemble methods such as majority voting that fail to resolve disagreement among agents, resulting in misclassification and difficulty in assessing risks.
Objective:
This study aims to design, develop, and evaluate a synergistic approach for assessing risks associated with chatbot responses using multi-assessment (MA) and multi-professional agents (MPA).
Methods:
We designed and developed an approach consisting of MA and MPA. The Initial Assessment (MA1) internalizes three professional roles in a single LLM and provides an initial risk estimate; the Verification Assessment (MA2) incorporates multi-professional agents, that is, role-based LLM agents specialized for each risk domain (medical, ethical, legal); and the Final Assessment (MA3) uses a single LLM to reach a final consensus based on the previous assessments (MA1 and MA2). We evaluated the proposed approach on the MedNLP-Chat corpus (N=226; 100 train, 126 test) under four configurations: baseline, enhanced prompt, embedding-based search, and retrieval-augmented generation (RAG). We used macro F1-score and joint accuracy as primary metrics of system performance, with confidence intervals (CIs) and paired macro F1-score differences (Δ) as supporting metrics to assess the approach's effectiveness.
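The three-stage flow described above can be sketched as follows. This is a minimal illustration under stated assumptions: all names (`estimate_risks`, `DOMAINS`, the callables) are hypothetical, and the paper's actual prompts and model backends are not shown.

```python
# Illustrative sketch of the MA1 -> MA2 -> MA3 pipeline (hypothetical
# names; not the authors' implementation).

DOMAINS = ("medical", "ethical", "legal")

def estimate_risks(qa_pair, initial_llm, domain_agents, final_llm):
    """Run the three assessment stages on one question-answer pair.

    initial_llm:   callable(qa) -> {domain: bool}
                   (MA1: one LLM internalizing three professional roles)
    domain_agents: {domain: callable(qa, initial_label) -> bool}
                   (MA2: one specialized role-based agent per risk domain)
    final_llm:     callable(qa, ma1, ma2) -> {domain: bool}
                   (MA3: one LLM reconciling MA1 and MA2 into a consensus)
    """
    ma1 = initial_llm(qa_pair)                   # initial risk estimate
    ma2 = {d: domain_agents[d](qa_pair, ma1[d])  # specialist verification
           for d in DOMAINS}
    return final_llm(qa_pair, ma1, ma2)          # final consensus
```

In a RAG configuration, each callable would additionally receive retrieved external knowledge; that detail is omitted here for brevity.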
Results:
Compared with the baseline system, the MA-MPA framework integrated with RAG achieved a paired macro F1-score improvement of +0.037, reaching a macro F1-score of 0.800 across the medical, ethical, and legal risk domains. The MA approach improved the macro F1-score particularly from the initial to the verification assessment (MA1→MA2), with gains ranging from +0.176 to +0.214 across systems. In the verification assessment (MA2), the MPA approach with the RAG system gained additional paired macro F1-score improvements over non-RAG systems, including an improvement of +0.054 relative to the enhanced prompt. In contrast, gains in joint accuracy relative to the baseline were not statistically significant, and gains relative to the enhanced prompt were small. Overall, RAG achieved a joint accuracy of 76 correct predictions across all risk domains out of 126 QA pairs (60.3%), with notable improvements in the ethical (+0.252) and legal (+0.096) risk domains and a slight increase in the medical domain (+0.070), indicating that integrating the synergistic approach (MA and MPA) with external knowledge improves risk estimation primarily in the ethical and legal risk domains.
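One plausible reading of the two primary metrics can be sketched as follows: macro F1 averages a per-domain (positive-class) F1 across the three risk domains, and joint accuracy counts a QA pair as correct only when all three domain labels match the gold labels. Function names and the exact averaging scheme are assumptions, not taken from the paper's code.

```python
# Hypothetical sketch of the evaluation metrics (illustrative only).

DOMAINS = ("medical", "ethical", "legal")

def f1(preds, golds):
    """Positive-class F1 over parallel lists of booleans."""
    tp = sum(p and g for p, g in zip(preds, golds))
    fp = sum(p and not g for p, g in zip(preds, golds))
    fn = sum(g and not p for p, g in zip(preds, golds))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(preds, golds):
    """preds/golds: lists of {domain: bool}, one dict per QA pair."""
    return sum(f1([p[d] for p in preds], [g[d] for g in golds])
               for d in DOMAINS) / len(DOMAINS)

def joint_accuracy(preds, golds):
    """Share of QA pairs with all three domains predicted correctly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)
```

Under this definition, 76 fully correct pairs out of 126 gives a joint accuracy of 76/126 ≈ 0.603, matching the reported 60.3%.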
Conclusions:
The multi-assessment and multi-professional agent approach is effective for risk estimation in chatbot responses. These findings highlight the potential of the approach and motivate the development of a specialized LLM for more robust and contextually grounded risk estimation.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.