
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Nov 9, 2024
Date Accepted: Mar 25, 2025

The final, peer-reviewed published version of this preprint can be found here:

The Advanced Reasoning Capabilities of Large Language Models for Detecting Contraindicated Options in Medical Exams

Yano Y, Ohashi M, Miyagami T, Mori H, Nishizaki Y, Daida H, Naito T


JMIR Med Inform 2025;13:e68527

DOI: 10.2196/68527

PMID: 40354629

PMCID: 12088613

Large Language Model's Advanced Reasoning Capabilities in Detecting Contraindications in Medical Exams

  • Yuichiro Yano; 
  • Mizuki Ohashi; 
  • Taiju Miyagami; 
  • Hirotake Mori; 
  • Yuji Nishizaki; 
  • Hiroyuki Daida; 
  • Toshio Naito

ABSTRACT

Background:

In medical practice, improving clinical reasoning and reducing diagnostic errors are essential. OpenAI introduced "OpenAI-o1" with enhanced capabilities for complex reasoning; however, it remains uncertain whether OpenAI-o1 can reduce diagnostic errors compared with the preceding model, GPT-4.

Objective:

We hypothesized that OpenAI-o1 would outperform GPT-4 at avoiding contraindicated options on the Japanese National Medical Licensing Examination (JNMLE), which candidates fail if they select any contraindicated option.

Methods:

This study used questions from the JNMLE from 2019 to 2024, specifically selecting those that included contraindications as potential answers. We administered 15 text-based questions to both GPT-4 and OpenAI-o1 as follows. Step 1: Each question was first submitted to either GPT-4 or OpenAI-o1 in Japanese, and the model was tasked with identifying the correct answer. Step 2: The same question was presented to the model in Japanese, and the model was instructed to select the contraindication instead of the correct answer. Step 3: Steps 1 and 2 were repeated with the questions translated into English.
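The three-step protocol above can be sketched as a small harness that expands each exam question into its four runs (two languages × two tasks). This is an illustrative reconstruction only: the prompt wording, helper name, and data layout are assumptions, not the authors' actual materials.

```python
# Minimal sketch of the evaluation protocol (Steps 1-3).
# Prompt text below is a hypothetical reconstruction, not the authors' exact prompts.

def build_prompts(question_ja: str, question_en: str) -> list[dict]:
    """Expand one exam question into the four runs implied by Steps 1-3."""
    conditions = []
    for lang, text in (("Japanese", question_ja), ("English", question_en)):
        # Step 1 (and its English repeat in Step 3): ask for the correct answer.
        conditions.append({
            "language": lang,
            "task": "correct_answer",
            "prompt": f"Select the single best answer.\n\n{text}",
        })
        # Step 2 (and its English repeat): ask for the contraindicated option.
        conditions.append({
            "language": lang,
            "task": "contraindication",
            "prompt": f"Select the contraindicated option, not the correct answer.\n\n{text}",
        })
    return conditions

runs = build_prompts("設問...", "Question ...")
```

Each of the two models (GPT-4 and OpenAI-o1) would then be queried once per run, for example via the OpenAI chat completions endpoint, and the responses scored against the exam's answer key.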

Results:

GPT-4 correctly answered 12 out of 15 questions (80%) and identified 11 contraindications (73%) in Japanese. In English, GPT-4 correctly answered 13 questions (87%) and identified 11 contraindications (73%). In contrast, OpenAI-o1 correctly answered all 15 questions (100%) and identified 13 contraindications (87%) in Japanese.

Conclusions:

Compared with GPT-4, OpenAI-o1 demonstrated superior accuracy both in answering questions and in identifying contraindicated options on the JNMLE, particularly when using English. Further research is needed to explore whether these models can contribute to reducing medical diagnostic errors.


Citation

Please cite as:

Yano Y, Ohashi M, Miyagami T, Mori H, Nishizaki Y, Daida H, Naito T

The Advanced Reasoning Capabilities of Large Language Models for Detecting Contraindicated Options in Medical Exams

JMIR Med Inform 2025;13:e68527

DOI: 10.2196/68527

PMID: 40354629

PMCID: 12088613

Per the author's request the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.