Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Feb 9, 2025
Date Accepted: Aug 21, 2025
Using LLMs to assess the consistency of randomized controlled trials on AI interventions with CONSORT-AI: a cross-sectional survey
ABSTRACT
Background:
Chatbots based on large language models (LLMs) have shown promise in evaluating the consistency of research reporting. Previously, researchers used LLMs to assess whether randomized controlled trial (RCT) abstracts adhered to the CONSORT-Abstract guidelines. However, whether LLMs can reliably assess the consistency of RCTs on artificial intelligence (AI) interventions with the CONSORT-AI standards remains unclear.
Objective:
The aim of this study is to evaluate the consistency of randomized controlled trials on AI interventions with CONSORT-AI using chatbots based on LLMs.
Methods:
This cross-sectional study employed six LLMs to assess the consistency of RCTs on AI interventions with CONSORT-AI. The sample comprised 41 RCTs published in JAMA Network Open. All queries were submitted to the LLMs through an API with the temperature set to 0 to ensure deterministic responses. One researcher posed the questions to each model, while another independently verified the responses for validity before the results were recorded. The Overall Consistency Score (OCS), recall, inter-rater reliability, and consistency of contents were analyzed.
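The deterministic querying described above can be sketched as follows. This is a minimal illustration assuming an OpenAI-style chat-completions API; the prompt wording and the `build_request` helper are hypothetical and do not reproduce the study's actual prompts — only the temperature-0 setting mirrors the reported Methods.

```python
# Sketch: assembling one CONSORT-AI item check for an LLM API call.
# Prompt text is illustrative, not the study's actual prompt.

def build_request(model: str, item_text: str, manuscript_excerpt: str) -> dict:
    """Assemble chat-completion parameters for one CONSORT-AI item."""
    prompt = (
        "Does the following RCT report address this CONSORT-AI item? "
        f"Item: {item_text}\n\nReport excerpt:\n{manuscript_excerpt}\n"
        "Answer 'yes' or 'no' with a brief justification."
    )
    return {
        "model": model,
        "temperature": 0,  # deterministic responses, as reported in the Methods
        "messages": [{"role": "user", "content": prompt}],
    }

# The actual submission would then be something like
# client.chat.completions.create(**req) with the OpenAI Python SDK.
req = build_request(
    "gpt-4-0125-preview",
    "State the inclusion and exclusion criteria at the level of the input data",
    "...",
)
```

One request per model-item pair, all at temperature 0, would yield the repeatable responses that the two-researcher verification step then checks.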
Results:
We found that gpt-4-0125-preview had the best average OCS (86.5%, 95% CI: 82.5%-90.5% and 81.6%, 95% CI: 77.6%-85.6%), followed by gpt-4-1106-preview (80.3%, 95% CI: 76.3%-84.3% and 78.0%, 95% CI: 74.0%-82.0%). The model with the worst average OCS was gpt-3.5-turbo-0125 (61.9%, 95% CI: 57.9%-65.9% and 63.0%, 95% CI: 59.0%-67.0%). Among the 11 unique items of CONSORT-AI, Item 2 ("State the inclusion and exclusion criteria at the level of the input data") received the poorest overall evaluation across the six models, with an average OCS of 48.8%. Items with an average OCS greater than 80% across the six models were Items 1, 5, 8, and 9.
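The abstract does not state how the 95% confidence intervals were computed; as an assumption, a standard Wald interval for a proportion produces the symmetric "estimate ± margin" form seen in the reported ranges.

```python
import math

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple:
    """95% Wald confidence interval for a proportion p observed over n trials.
    Shown only as a plausible method; the study's actual CI method is not stated."""
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

lo, hi = wald_ci(0.5, 100)  # half-width 1.96 * sqrt(0.25/100) = 0.098
```

For small samples or proportions near 0 or 1, a Wilson or exact interval would be the more robust choice; the Wald form is used here only because it matches the symmetric ranges in the abstract.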
Conclusions:
GPT-4 variants demonstrate strong performance in assessing the consistency of RCTs with CONSORT-AI. Nonetheless, refining the prompts could enhance the precision and consistency of the outcomes. While AI tools like GPT-4 variants are valuable, they are not yet fully autonomous in addressing complex and nuanced tasks such as adherence to CONSORT-AI standards. Therefore, integrating AI with higher levels of human supervision and expertise will be crucial to ensuring more reliable and efficient evaluations, ultimately advancing the quality of medical research. Clinical Trial: None.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.