Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Feb 25, 2024
Date Accepted: May 4, 2024
MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large Language Models in Medicine
ABSTRACT
Background:
Large language models (LLMs) have made substantial progress in natural language processing tasks and show potential for clinical applications. Despite these capabilities, LLMs in the medical domain are prone to generating hallucinations (responses that are not fully reliable). Such hallucinations pose significant safety risks and can threaten patients' physical safety. It is therefore essential to evaluate LLMs in the medical domain systematically so that this risk can be identified and mitigated.
Objective:
We developed a comprehensive evaluation system, MedGPTEval, composed of criteria, medical datasets in Chinese, and publicly available benchmarks.
Methods:
First, a set of candidate evaluation criteria was designed based on a comprehensive literature review. Second, these candidate criteria were refined using a Delphi method by 5 experts in medicine and engineering. Third, 3 clinical experts designed a set of medical datasets for interacting with LLMs. Finally, benchmarking experiments were conducted on the datasets, and the responses generated by LLM-based chatbots were recorded for blind evaluation by 5 licensed medical experts, as sketched below. The resulting evaluation criteria cover medical professional capabilities, social comprehensive capabilities, contextual capabilities, and computational robustness, with 16 detailed indicators. The medical datasets include 27 medical dialogues and 7 case reports in Chinese. Three chatbots were evaluated: ChatGPT, by OpenAI; ERNIE Bot, by Baidu, Inc.; and Doctor PuJiang (Dr. PJ), by Shanghai Artificial Intelligence Laboratory.
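To make the evaluation workflow concrete, the following is a minimal sketch of how a blind-evaluation record and score aggregation could be organized. The field names, the four dimension labels, and the 1-5 rating scale are illustrative assumptions, not the published MedGPTEval protocol.

```python
# Minimal sketch of a blind-evaluation record and aggregation step.
# Field names, the assumed 1-5 rating scale, and the dimension labels
# are illustrative; they are not taken from the MedGPTEval protocol.
from dataclasses import dataclass, field
from statistics import mean

DIMENSIONS = [
    "medical_professional",  # medical professional capabilities
    "social_comprehensive",  # social comprehensive capabilities
    "contextual",            # contextual capabilities
    "robustness",            # computational robustness
]

@dataclass
class BlindRating:
    rater_id: str             # anonymized ID of one of the 5 medical experts
    scores: dict              # dimension label -> score (assumed 1-5 scale)

@dataclass
class EvaluationItem:
    item_id: str              # e.g., one of the 27 dialogues or 7 case reports
    chatbot: str              # model identity, hidden from raters during scoring
    response: str             # the chatbot's recorded response
    ratings: list = field(default_factory=list)

def aggregate(item: EvaluationItem) -> dict:
    """Average each dimension's score across all blinded raters."""
    return {
        dim: mean(r.scores[dim] for r in item.ratings)
        for dim in DIMENSIONS
    }
```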
Results:
Dr. PJ outperformed ChatGPT and ERNIE Bot in the multiple-turn medical dialogue and case report scenarios. Dr. PJ also outperformed ChatGPT in both the semantic consistency rate and the complete error rate, indicating better robustness. However, Dr. PJ scored slightly lower than ChatGPT in medical professional capabilities in the multiple-turn dialogue scenario.
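The abstract does not define the two robustness metrics. A plausible reading, sketched below as an assumption rather than the paper's definition, is that the semantic consistency rate is the fraction of responses that remain semantically consistent when the same query is repeated or paraphrased, and the complete error rate is the fraction of responses judged entirely incorrect.

```python
# Hedged sketch of the two robustness metrics named in the results.
# These definitions are assumptions; the abstract does not specify them.

def semantic_consistency_rate(consistent: list) -> float:
    """consistent[i] is True if response i stayed semantically consistent
    under a repeated or paraphrased version of the same query."""
    return sum(consistent) / len(consistent)

def complete_error_rate(erroneous: list) -> float:
    """erroneous[i] is True if response i was judged completely incorrect."""
    return sum(erroneous) / len(erroneous)
```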
Conclusions:
MedGPTEval provides comprehensive criteria for evaluating LLM-based chatbots in the medical domain, open-source datasets, and benchmarks assessing 3 LLMs. Experimental results demonstrate that Dr. PJ outperforms ChatGPT and ERNIE Bot in both social and professional contexts. The assessment system can be readily adopted by researchers in this community to extend the open-source dataset.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.