
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Sep 27, 2023
Date Accepted: Feb 21, 2024

The final, peer-reviewed published version of this preprint can be found here:

Chelli M, Descamps J, Lavoué V, Trojani C, Azar M, Deckert M, Raynier JL, Clowez G, Boileau P, Ruetsch-Chelli C

Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis

J Med Internet Res 2024;26:e53164

DOI: 10.2196/53164

PMID: 38776130

PMCID: 11153973

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

High Rates of Hallucinated References with ChatGPT

  • Mikaël Chelli; 
  • Jules Descamps; 
  • Vincent Lavoué; 
  • Christophe Trojani; 
  • Michel Azar; 
  • Marcel Deckert; 
  • Jean-Luc Raynier; 
  • Gilles Clowez; 
  • Pascal Boileau; 
  • Caroline Ruetsch-Chelli

ABSTRACT

Background:

Large language models (LLMs) have raised both interest and concern in the academic community. They offer the potential to automate literature search and synthesis for systematic reviews, but concerns about their reliability and their tendency to generate unsupported ('hallucinated') content persist.

Objective:

To assess the ability of LLMs such as ChatGPT and Bard to produce accurate references in the context of scientific writing.

Methods:

The performance of ChatGPT and Bard in replicating the results of human-conducted systematic reviews was assessed. Using systematic reviews pertaining to shoulder rotator cuff pathology, the LLMs were given the same inclusion criteria as the original reviews, and their output was compared with the original reviews' references. Three key performance metrics were used: recall, precision, and F1-score, alongside the hallucination rate. An article was considered “hallucinated” if at least two of the following items were incorrect: title, first author, or year of publication.
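For reference, the standard metric definitions are precision = TP / (TP + FP), recall = TP / (TP + FN), and F1-score = 2 × precision × recall / (precision + recall). The sketch below illustrates how the hallucination criterion and these metrics might be computed; the field names and exact-match logic are illustrative assumptions, not the authors' pipeline.

    # Minimal sketch (assumed field names and matching logic, not the
    # authors' code): flag a generated reference as hallucinated and score
    # a generated reference list against a review's included articles.

    def fields_wrong(gen: dict, real: dict) -> int:
        # Count mismatches among the three checked bibliographic fields.
        return sum(gen[k] != real[k] for k in ("title", "first_author", "year"))

    def is_hallucinated(gen: dict, real_articles: list) -> bool:
        # Per the abstract's criterion (as interpreted here): hallucinated
        # if no real article matches on at least two of the three fields.
        return all(fields_wrong(gen, real) >= 2 for real in real_articles)

    def score(generated: list, included: list) -> tuple:
        # Precision, recall, and F1-score of the generated list versus the
        # original review's included references.
        def key(a):
            return (a["title"].lower(), a["first_author"].lower(), a["year"])
        gen, ref = {key(a) for a in generated}, {key(a) for a in included}
        tp = len(gen & ref)  # correctly retrieved articles
        precision = tp / len(gen) if gen else 0.0
        recall = tp / len(ref) if ref else 0.0
        f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
        return precision, recall, f1

The hallucination rate is then the share of generated references for which is_hallucinated returns True.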

Results:

The LLMs generated some legitimate references but also produced hallucinated articles at rates ranging from 28% to 91%. Although ChatGPT 4 demonstrated the best performance among the models tested, all models failed to adhere to the established eligibility criteria. Precision ranged from 0% to 13.4% and recall from 0% to 14.7%, highlighting the limitations of these models in replicating human-conducted systematic reviews.

Conclusions:

Given their current performance, LLMs should not be deployed as the primary or sole tool for conducting systematic reviews, and any references they generate warrant thorough validation by researchers. The high occurrence of hallucinations highlights the necessity of refining their training and functionality before confidently employing them for rigorous academic purposes.


Citation

Please cite as:

Chelli M, Descamps J, Lavoué V, Trojani C, Azar M, Deckert M, Raynier JL, Clowez G, Boileau P, Ruetsch-Chelli C

Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis

J Med Internet Res 2024;26:e53164

DOI: 10.2196/53164

PMID: 38776130

PMCID: 11153973


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.