
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Aug 4, 2023
Date Accepted: Nov 20, 2023
Date Submitted to PubMed: Nov 27, 2023

The final, peer-reviewed published version of this preprint can be found here:

Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study

Giannakopoulos K, Kavadella A, Aaqel Salim A, Stamatopoulos V, Kaklamanos EG

J Med Internet Res 2023;25:e51580

DOI: 10.2196/51580

PMID: 38009003

PMCID: 10784979

Evaluation of Generative Artificial Intelligence Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-based Dentistry: A Comparative Mixed-Methods Study

  • Kostis Giannakopoulos; 
  • Argyro Kavadella; 
  • Anas Aaqel Salim; 
  • Vassilis Stamatopoulos; 
  • Eleftherios G Kaklamanos

ABSTRACT

Background:

The increasing application of generative artificial intelligence large language models (LLMs) in various fields, including dentistry, raises questions about their accuracy.

Objective:

This study aimed to comparatively evaluate the answers provided by four LLMs (Google Bard, OpenAI ChatGPT-3.5, OpenAI ChatGPT-4, and Microsoft Bing Chat) to clinically relevant questions from the field of dentistry.

Methods:

The LLMs were queried with 20 open-ended clinical questions spanning different dental disciplines, developed by the respective faculty of the School of Dentistry, European University Cyprus. Two experienced faculty members graded the LLMs' answers on a scale from 0 (minimum) to 10 (maximum) points against strong, traditionally collected scientific evidence, such as guidelines and consensus statements, using a rubric, as if they were exam questions posed to students. The scores were compared statistically using the Friedman and Wilcoxon tests to identify the best-performing model. The evaluators were also asked to provide a qualitative evaluation of comprehensiveness, scientific accuracy, clarity, and relevance.

Results:

Overall, no statistically significant difference was detected between the scores given by the two evaluators; thus, an average score was computed for each LLM. While ChatGPT-4 statistically outperformed ChatGPT-3.5 (P=.008), Microsoft Bing Chat (P=.049), and Google Bard (P=.045), all models exhibited occasional inaccuracies, generality, outdated content, and a lack of source references. The evaluators noted instances where the LLMs delivered irrelevant information, vague answers, or information that was not fully accurate.

Conclusions:

The study demonstrates that while LLMs hold promising potential as an aid in the implementation of evidence-based dentistry, their current limitations can lead to potentially harmful health care decisions if they are not used judiciously. Therefore, these tools should not replace the dentist's critical thinking and in-depth understanding of the subject matter. Further research, clinical validation, and model improvements are necessary before these tools can be fully integrated into dental practice. Dental practitioners must be aware of the LLMs' limitations, as imprudent use could negatively affect patient care. Regulatory measures should be established to oversee the use of these evolving technologies.


Citation

Please cite as:

Giannakopoulos K, Kavadella A, Aaqel Salim A, Stamatopoulos V, Kaklamanos EG

Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study

J Med Internet Res 2023;25:e51580

DOI: 10.2196/51580

PMID: 38009003

PMCID: 10784979


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.