
Accepted for/Published in: JMIR Medical Education

Date Submitted: Dec 18, 2023
Date Accepted: Mar 22, 2024

The final, peer-reviewed published version of this preprint can be found here:

Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care

Wang S, Mo C, Chen Y, Dai X, Wang H, Shen X


JMIR Med Educ 2024;10:e55595

DOI: 10.2196/55595

PMID: 38693697

PMCID: 11067446

Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Indicating the Potential of AI Chatbots in Hearing Care

  • Shangqiguo Wang; 
  • Changgeng Mo; 
  • Yuan Chen; 
  • Xiaolu Dai; 
  • Huiyi Wang; 
  • Xiaoli Shen

ABSTRACT

Background:

AI chatbots, such as ChatGPT-4, have shown considerable potential for application across many areas of medicine, including education, clinical practice, and research.

Objective:

This study aimed to evaluate the performance of ChatGPT-4 in the 2023 Taiwan hearing specialist qualification examination, thereby preliminarily exploring the potential utility of AI chatbots in the fields of audiology and hearing care services.

Methods:

ChatGPT-4 was tasked with providing answers and reasoning for the 2023 Taiwan hearing specialist qualification examination. The examination encompassed six subjects: 1) Basic Auditory Science, 2) Behavioral Audiology, 3) Electrophysiological Audiology, 4) Principles and Practice of Hearing Devices, 5) Health and Rehabilitation of the Auditory and Balance Systems, and 6) Auditory and Speech Communication Disorders (including Professional Ethics). Each subject included 50 multiple-choice questions, with the exception of Behavioral Audiology, which had 49 questions, for a total of 299 questions.

Results:

The accuracy rates across the six subjects were as follows: Basic Auditory Science (88%), Behavioral Audiology (63%), Electrophysiological Audiology (58%), Principles and Practice of Hearing Devices (72%), Health and Rehabilitation of the Auditory and Balance Systems (80%), and Auditory and Speech Communication Disorders, including Professional Ethics (86%). The overall accuracy rate across the 299 questions was 75%, which surpasses the examination's passing criterion of an average 60% accuracy rate across all subjects. A comprehensive review of ChatGPT-4's responses indicated that incorrect answers were predominantly due to information errors.
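The reported overall accuracy can be cross-checked against the per-subject figures. The sketch below back-calculates per-subject correct-answer counts from the rounded percentages given above (these counts are an assumption for illustration, not the paper's raw data) and pools them:

```python
# Sanity check of the reported accuracy figures. The correct-answer counts
# are back-calculated from the rounded per-subject percentages in the
# abstract, so they are approximations, not the study's raw item data.
subjects = {
    "Basic Auditory Science": (50, 0.88),
    "Behavioral Audiology": (49, 0.63),
    "Electrophysiological Audiology": (50, 0.58),
    "Principles and Practice of Hearing Devices": (50, 0.72),
    "Health and Rehabilitation of the Auditory and Balance Systems": (50, 0.80),
    "Auditory and Speech Communication Disorders": (50, 0.86),
}

total_questions = sum(n for n, _ in subjects.values())
total_correct = sum(round(n * rate) for n, rate in subjects.values())
overall = total_correct / total_questions

print(total_questions)       # 299
print(round(overall * 100))  # 75, matching the reported overall accuracy
```

Note that the 60% passing criterion is stated as an average across subjects, which can differ slightly from the pooled (question-weighted) accuracy computed here; with these figures both readings clear the threshold.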

Conclusions:

ChatGPT-4 demonstrated robust performance in the audiologist qualification examination, showcasing effective logical reasoning skills. The results suggest that with enhanced information accuracy, ChatGPT-4's performance could be further improved. This study indicates significant potential for AI chatbot applications in audiology and hearing care services.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.