
Accepted for/Published in: JMIR Formative Research

Date Submitted: Feb 25, 2023
Open Peer Review Period: Feb 25, 2023 - Apr 22, 2023
Date Accepted: Jul 31, 2023

The final, peer-reviewed published version of this preprint can be found here:

Assessing ChatGPT’s Capability for Multiple Choice Questions Using RaschOnline: Observational Study

Chow JC, Chien TW, Chou W

Assessing ChatGPT’s Capability for Multiple Choice Questions Using RaschOnline: Observational Study

JMIR Form Res 2024;8:e46800

DOI: 10.2196/46800

PMID: 39115919

PMCID: 11346125

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Assessing ChatGPT's Capability for Multiple-Choice Questions using Rasch Analysis and Online Evaluation Tool: A Study on English Test Questions for Taiwan College Entrance Examinations

  • Julie Chi Chow; 
  • Tsair-wei Chien; 
  • Willy Chou

ABSTRACT

Background:

ChatGPT, a large language model (LLM) developed by OpenAI, has demonstrated impressive performance in several specialized applications. Despite the rising popularity and performance of artificial intelligence, few studies have evaluated ChatGPT's capability for multiple-choice questions (MCQs) using the KIDMAP of Rasch analysis (an online tool used to evaluate performance in answering MCQs).

Objective:

The objectives of this study were to (1) demonstrate the use of online Rasch analysis (RaschOnline) and (2) determine ChatGPT's grade relative to a normal sample.

Methods:

ChatGPT's capability was evaluated using ten items from the 2023 Taiwan college entrance English examination. Under a Rasch model, 300 virtual students with normally distributed abilities were simulated to compete with ChatGPT's responses. Five visual presentations were created with RaschOnline (item difficulties, differential item functioning [DIF], item characteristic curves, a Wright map, and a KIDMAP) to answer the research questions outlined in the objectives.
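The abstract does not give the exact simulation settings, but the generation of virtual students under a dichotomous Rasch model can be sketched as follows. This is a minimal illustration, assuming a standard-normal ability distribution and using the ten item difficulties reported in the Results; it is not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Item difficulties (logits) reported in the Results, ordered easiest to hardest.
difficulties = np.array([-2.43, -1.78, -1.48, -0.64, -0.1,
                         0.33, 0.59, 1.34, 1.7, 2.47])

# Simulate 300 virtual students; abilities assumed drawn from N(0, 1).
abilities = rng.normal(loc=0.0, scale=1.0, size=300)

# Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))
prob = 1.0 / (1.0 + np.exp(-(abilities[:, None] - difficulties[None, :])))

# Draw 0/1 responses by comparing each probability to a uniform random number.
responses = (rng.random(prob.shape) < prob).astype(int)

print(responses.shape)  # (300, 10): one row per virtual student, one column per item
```

A response matrix of this shape is the typical input to Rasch software such as RaschOnline, which then estimates item difficulties and person measures from the observed patterns.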

Results:

The results indicated that (1) the difficulty of the ten items increased monotonically from easiest to hardest (-2.43, -1.78, -1.48, -0.64, -0.1, 0.33, 0.59, 1.34, 1.7, and 2.47 logits); (2) there was evidence of DIF between gender groups for item 5 (p=0.042); (3) item 5 nonetheless fit the Rasch model rather well (p=0.61); (4) all items fit the Rasch model, with infit mean square errors below the threshold of 1.5; (5) there was no significant difference in measures between gender groups (p=0.832); (6) a significant difference was observed among ability grades (p<0.001); and (7) ChatGPT's capability was graded A, surpassing grades B to E.

Conclusions:

Using RaschOnline, we demonstrated that ChatGPT can score a grade of A compared with a normal sample, showing an excellent ability to answer MCQs from the 2023 English test of the Taiwan college entrance examinations.






© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.