Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Feb 25, 2024
Date Accepted: May 4, 2024

The final, peer-reviewed published version of this preprint can be found here:

Data Set and Benchmark (MedGPTEval) to Evaluate Responses From Large Language Models in Medicine: Evaluation Development and Validation

Xu J, Lu L, Yang S, Liang B, Peng X, Pang J, Ding J, Shi X, Yang L, Song H, Li K, Sun X, Zhang S

Data Set and Benchmark (MedGPTEval) to Evaluate Responses From Large Language Models in Medicine: Evaluation Development and Validation

JMIR Med Inform 2024;12:e57674

DOI: 10.2196/57674

PMID: 38952020

PMCID: 11225096

MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large Language Models in Medicine

  • Jie Xu; 
  • Lu Lu; 
  • Sen Yang; 
  • Bilin Liang; 
  • Xinwei Peng; 
  • Jiali Pang; 
  • Jinru Ding; 
  • Xiaoming Shi; 
  • Lingrui Yang; 
  • Huan Song; 
  • Kang Li; 
  • Xin Sun; 
  • Shaoting Zhang

ABSTRACT

Background:

Large language models (LLMs) have made great progress on natural language processing tasks and have demonstrated potential for use in clinical applications. Despite these capabilities, LLMs in the medical domain are prone to generating hallucinations (responses that are not fully reliable). Hallucinations in LLM responses create significant safety risks and can threaten patients' physical safety. To detect and prevent this risk, it is therefore essential to evaluate LLMs in the medical domain and to build a systematic evaluation framework.

Objective:

We developed a comprehensive evaluation system, MedGPTEval, composed of criteria, medical datasets in Chinese, and publicly available benchmarks.

Methods:

First, a set of evaluation criteria was designed based on a comprehensive literature review. Second, the candidate criteria were optimized using a Delphi method by 5 experts in medicine and engineering. Third, 3 clinical experts designed a set of medical datasets for interacting with LLMs. Finally, benchmarking experiments were conducted on the datasets. The responses generated by chatbots based on LLMs were recorded for blind evaluation by 5 licensed medical experts. The resulting evaluation criteria cover medical professional capabilities, social comprehensive capabilities, contextual capabilities, and computational robustness, with 16 detailed indicators. The medical datasets include 27 medical dialogues and 7 case reports in Chinese. Three chatbots were evaluated: ChatGPT, by OpenAI; ERNIE Bot, by Baidu, Inc; and Doctor PuJiang (Dr. PJ), by Shanghai Artificial Intelligence Laboratory.
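The blind-evaluation step above can be sketched in code. This is an illustrative assumption, not the authors' implementation: the per-category indicator split, the rating scale, and the averaging scheme are all hypothetical; only the 4 category names, the total of 16 indicators, and the 5 blinded experts come from the abstract.

```python
# Hypothetical sketch of aggregating blinded expert ratings.
# Category names and the 16-indicator total follow the abstract;
# the per-category split and the scoring scale are assumptions.
from statistics import mean

CATEGORIES = {
    "medical_professional": 5,
    "social_comprehensive": 4,
    "contextual": 4,
    "computational_robustness": 3,
}  # assumed split; sums to the 16 detailed indicators

def aggregate_blind_scores(ratings):
    """ratings maps each indicator to the scores given by the 5 blinded
    experts; returns the mean score per indicator."""
    return {indicator: mean(scores) for indicator, scores in ratings.items()}

# Example: two hypothetical indicators, each rated by 5 experts.
ratings = {
    "accuracy": [5, 4, 4, 5, 4],
    "empathy": [3, 4, 3, 3, 4],
}
per_indicator = aggregate_blind_scores(ratings)
```

Averaging across raters per indicator, then rolling indicators up into their categories, is one straightforward way to obtain category-level scores from blinded ratings.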

Results:

Dr. PJ outperformed ChatGPT and ERNIE Bot in the multiple-turn medical dialogue and case report scenarios. Dr. PJ also outperformed ChatGPT on the semantic consistency rate and complete error rate, indicating better robustness. However, Dr. PJ had slightly lower scores in medical professional capabilities than ChatGPT in the multiple-turn dialogue scenario.
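The two robustness metrics named above can be sketched as simple rates. The abstract does not define them, so the readings below are assumptions for illustration only: semantic consistency rate is taken as the fraction of rephrased prompts yielding semantically consistent answers, and complete error rate as the fraction of responses judged entirely incorrect.

```python
# Illustrative-only definitions of the two robustness metrics; the
# paper's exact formulas are not given in this abstract, so both
# readings below are assumptions.

def semantic_consistency_rate(judgments):
    """judgments: booleans, True where a rephrased prompt produced a
    semantically consistent answer (as judged by experts)."""
    return sum(judgments) / len(judgments)

def complete_error_rate(labels):
    """labels: one expert label per response; 'complete_error' marks an
    answer judged entirely incorrect (label vocabulary assumed)."""
    return sum(1 for label in labels if label == "complete_error") / len(labels)
```

Under these assumed definitions, a higher consistency rate and a lower complete error rate together indicate the better robustness reported for Dr. PJ.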

Conclusions:

MedGPTEval provides comprehensive criteria for evaluating LLM-based chatbots in the medical domain, open-source datasets, and benchmarks assessing 3 LLMs. The experimental results demonstrate that Dr. PJ outperforms ChatGPT and ERNIE Bot in both social and professional contexts. This assessment system can therefore be readily adopted by researchers in the community to augment the open-source dataset.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.