Accepted for/Published in: JMIR Medical Education

Date Submitted: Jan 26, 2024
Date Accepted: Oct 7, 2024

The final, peer-reviewed published version of this preprint can be found here:

Evaluating AI Competence in Specialized Medicine: Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain

Ros-Arlanzón P, Perez-Sempere A

Evaluating AI Competence in Specialized Medicine: Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain

JMIR Med Educ 2024;10:e56762

DOI: 10.2196/56762

PMID: 39622707

PMCID: 11611784

Evaluating AI Competence in Specialized Medicine: Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain

  • Pablo Ros-Arlanzón
  • Angel Perez-Sempere

ABSTRACT

Background:

With the rapid advancement of artificial intelligence (AI) in various fields, evaluating its application in specialized medical contexts becomes crucial. ChatGPT, a large language model developed by OpenAI, has shown potential in diverse applications, including medicine.

Objective:

This study aims to compare the performance of ChatGPT with that of attending neurologists in a real neurology specialist examination conducted in the Valencian Community, Spain, to assess the AI's capabilities and limitations in medical knowledge.

Methods:

We conducted a comparative analysis of the 2022 neurology specialist examination results from 120 neurologists and the responses generated by ChatGPT versions 3.5 and 4. The examination consisted of 80 multiple-choice questions, with a focus on clinical neurology and health legislation. Questions were classified according to Bloom's Taxonomy. Statistical analysis of performance, including the kappa coefficient for response consistency, was performed.
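The abstract does not specify how the kappa coefficient was computed; as one illustrative sketch, Cohen's kappa between two answer sequences (e.g. two runs of the same model, or model versus answer key) can be calculated as follows. The function name and example answer labels are hypothetical, not taken from the study.

```python
from collections import Counter

def cohens_kappa(answers_a, answers_b):
    """Cohen's kappa for two equal-length sequences of categorical answers.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_observed is the raw agreement rate and p_expected is the
    agreement expected by chance from each sequence's label frequencies.
    """
    assert len(answers_a) == len(answers_b)
    n = len(answers_a)
    # Observed agreement: fraction of positions with identical answers.
    p_observed = sum(a == b for a, b in zip(answers_a, answers_b)) / n
    # Chance agreement from the marginal frequency of each answer option.
    counts_a, counts_b = Counter(answers_a), Counter(answers_b)
    p_expected = sum(
        counts_a[k] * counts_b[k] for k in set(counts_a) | set(counts_b)
    ) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two runs agreeing on 3 of 4 questions.
print(cohens_kappa(["A", "B", "A", "C"], ["A", "B", "B", "C"]))
```

For larger analyses, `sklearn.metrics.cohen_kappa_score` computes the same statistic.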

Results:

Human participants achieved a median score of 5.91, with 32 neurologists failing to pass. ChatGPT-3.5 ranked 116th out of 122, answering 54.5% of questions correctly (score 3.94). ChatGPT-4 showed marked improvement, ranking 17th with 81.8% of questions answered correctly (score 7.57), surpassing several human specialists. No significant differences were observed between performance on lower-order and higher-order questions. Additionally, ChatGPT-4 demonstrated higher response consistency, reflected in a kappa coefficient of 0.73 compared with 0.69 for ChatGPT-3.5.

Conclusions:

This study underscores the evolving capabilities of AI in medical knowledge assessment, particularly in specialized fields. ChatGPT-4's performance, surpassing the median human score in a rigorous neurology examination, marks a notable advancement and suggests its potential as an effective tool in specialized medical education and assessment.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.