Accepted for/Published in: JMIR AI

Date Submitted: Feb 4, 2025
Date Accepted: Nov 2, 2025

The final, peer-reviewed published version of this preprint can be found here:

Kocaman V, Kaya MA, Feier AM, Talby D

Clinical Large Language Model Evaluation by Expert Review (CLEVER): Framework Development and Validation

JMIR AI 2025;4:e72153

DOI: 10.2196/72153

PMID: 41343765

PMCID: 12677871

CLEVER: Clinical Large Language Model Evaluation by Expert Review

  • Veysel Kocaman
  • Mustafa Aytuğ Kaya
  • Andrei Marian Feier
  • David Talby

ABSTRACT

Background:

The proliferation of both general-purpose and healthcare-specific Large Language Models (LLMs) has intensified the challenge of evaluating and comparing them effectively. Data contamination undermines the validity of public benchmarks; self-preference bias distorts LLM-as-a-judge approaches; and there is a gap between the tasks used to test models and those performed in clinical practice.

Objective:

In response, we propose CLEVER: a methodology for blind, randomized, preference-based evaluation by practicing medical doctors on specific tasks.
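To make the blinding step concrete, below is a minimal Python sketch of how randomized, blinded pairwise presentation might be set up. The model identifiers, prompt, and record structure are illustrative assumptions, not the authors' implementation.

```python
import random

# Minimal sketch of the blinding step in a preference-based evaluation.
# Model identifiers, the prompt, and the outputs are hypothetical
# placeholders, not the authors' actual setup.

MODELS = ("general_llm", "medical_llm")  # hypothetical identifiers

def make_blinded_pair(prompt, outputs, rng):
    """Randomize which model's output appears as option A vs. option B,
    recording the hidden assignment so votes can be unblinded later."""
    order = list(MODELS)
    rng.shuffle(order)  # reviewers never learn which model wrote which option
    return {
        "prompt": prompt,
        "option_a": outputs[order[0]],
        "option_b": outputs[order[1]],
        "_assignment": {"A": order[0], "B": order[1]},  # kept hidden from reviewers
    }

rng = random.Random(42)  # fixed seed for a reproducible assignment
pair = make_blinded_pair(
    "Summarize this discharge note ...",
    {"general_llm": "Summary 1 ...", "medical_llm": "Summary 2 ..."},
    rng,
)
```

Because the hidden assignment is stored alongside each pair, reviewer votes can be aggregated per model after the fact without ever exposing model identity during review.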

Methods:

We demonstrate the methodology by comparing GPT-4o against two healthcare-specific LLMs (8B and 70B parameters) on three tasks: clinical text summarization, clinical information extraction, and question answering on biomedical research.

Results:

Medical doctors prefer the Small Medical LLM over GPT-4o 45% to 92% more often across the dimensions of factuality, clinical relevance, and conciseness.

Conclusions:

The models show comparable performance on open-ended medical question answering, while the preference results suggest that healthcare-specific LLMs can outperform much larger general-purpose LLMs on tasks that require understanding of clinical context. We test the validity of CLEVER evaluations through inter-annotator agreement, intraclass correlation, and washout period analyses.

Clinical Trial: n/a
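As a rough illustration of these validity checks, the sketch below computes Cohen's kappa for inter-annotator agreement and an intraclass correlation coefficient (ICC) using the scikit-learn and pingouin libraries. The data are fabricated for illustration, and the specific statistics and tooling are assumptions, not the paper's reported analysis.

```python
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Two reviewers' preference votes on the same six items (made-up data).
reviewer_1 = ["A", "B", "A", "A", "B", "A"]
reviewer_2 = ["A", "B", "A", "B", "B", "A"]
print(f"Cohen's kappa: {cohen_kappa_score(reviewer_1, reviewer_2):.2f}")

# ICC over numeric ratings (e.g., a 1-5 factuality score) in long format:
# each item is scored by each rater.
ratings = pd.DataFrame({
    "item":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater": ["r1", "r2"] * 5,
    "score": [4, 5, 3, 3, 5, 4, 4, 4, 2, 3],
})
icc = pg.intraclass_corr(data=ratings, targets="item", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```

Kappa corrects raw agreement for chance, while the ICC captures consistency of graded scores across raters; a washout period analysis would additionally repeat items after a delay to check intra-rater stability.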


Citation

Please cite as:

Kocaman V, Kaya MA, Feier AM, Talby D

Clinical Large Language Model Evaluation by Expert Review (CLEVER): Framework Development and Validation

JMIR AI 2025;4:e72153

DOI: 10.2196/72153

PMID: 41343765

PMCID: 12677871


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC-BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.