
Currently submitted to: JMIR AI

Date Submitted: Apr 16, 2026
Open Peer Review Period: Apr 28, 2026 - Jun 23, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

From Narrative to Numbers: Evaluating Large Language Models for Ordinal Qualitative Coding of Likert-Type Responses

  • Bryce Pierce; 
  • Ting Dong; 
  • Erin Barry; 
  • Adam Biggs

ABSTRACT

Background:

Qualitative analysis helps interpret complex human experiences by identifying patterns not easily captured through quantitative measures. However, when applied at scale in healthcare and health professions education, qualitative coding is a labor-intensive task. As a result, large volumes of narrative data are often underutilized or inefficiently analyzed. Recent advances in artificial intelligence, particularly large language models (LLMs), have introduced new opportunities for scaling qualitative analysis through automated processing of narrative text. Prior work has demonstrated that LLMs can approximate human judgments across qualitative and evaluative applications, including thematic analysis and rubric-based assessment. Still, methodological questions remain regarding how these models assign meaning to narrative data and translate it into structured outputs, particularly when mapping responses onto ordinal Likert-type scales. It also remains unclear how prompt design influences the validity, reliability, and stability of LLM-generated qualitative coding.

Objective:

This article examines an aspect of AI-assisted qualitative coding with applications in both research and assessment: specifically, how effectively a large language model can infer Likert-type responses from narrative text relative to a human coder.

Methods:

This study addresses these gaps by evaluating whether an LLM (Gemini) can infer ordinal Likert-type responses from narrative data. Using a dataset of paired Likert responses and narrative elaborations, we systematically varied multiple prompt conditions to assess their impact on distributional alignment and interrater reliability.
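
For illustration only, since the manuscript's exact prompts and model version are not reproduced here, a minimal sketch of this kind of ordinal coding call using Google's google-generativeai Python client might look like the following. The prompt wording, scale anchors, API key placeholder, and model name are all assumptions, not the authors' protocol:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # model version assumed

PROMPT_TEMPLATE = (
    "You are a qualitative coder. Read the narrative response below and "
    "infer the respondent's position on a 5-point Likert scale "
    "(1 = strongly disagree ... 5 = strongly agree). "
    "Reply with a single integer from 1 to 5 and nothing else.\n\n"
    "Narrative response:\n{narrative}"
)

def code_narrative(narrative: str) -> int:
    """Ask the model for an ordinal (1-5) code for one narrative response."""
    response = model.generate_content(PROMPT_TEMPLATE.format(narrative=narrative))
    # Assumes the model followed instructions and returned a bare integer;
    # a production pipeline would validate and retry on malformed output.
    return int(response.text.strip())

print(code_narrative("The training was mostly useful, though a few modules dragged."))
```

Varying a template such as this one (for example, adding or removing scale anchors, examples, or role instructions) is one straightforward way to operationalize the prompt conditions described above.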

Results:

Across all prompt conditions, Gemini-generated outputs closely approximated the overall distribution of the original self-reported Likert responses while outperforming a human qualitative analyst in distributional alignment. Agreement at the individual response level remained moderate (κw between 0.44 and 0.52), and variations in prompt structure did not produce substantial changes in performance (Krippendorff’s α ranged from 0.69 to 0.76).
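
For readers unfamiliar with these agreement statistics, the sketch below shows one way to compute weighted Cohen's κ and ordinal Krippendorff's α for paired ratings in Python. The toy ratings are invented for illustration and are not the study's data, and the quadratic weighting scheme is an assumption:

```python
import krippendorff  # pip install krippendorff
from sklearn.metrics import cohen_kappa_score  # pip install scikit-learn

# Toy example: self-reported Likert responses vs. model-inferred codes.
# Illustrative values only, not the study's data.
human = [5, 4, 4, 2, 3, 5, 1, 4, 3, 2]
model = [5, 4, 3, 2, 3, 4, 2, 4, 4, 2]

# Weighted kappa penalizes larger ordinal disagreements more heavily.
kw = cohen_kappa_score(human, model, weights="quadratic")

# Krippendorff's alpha with an ordinal difference function;
# reliability_data is shaped raters x items.
alpha = krippendorff.alpha(reliability_data=[human, model],
                           level_of_measurement="ordinal")

print(f"weighted kappa = {kw:.2f}, Krippendorff's alpha = {alpha:.2f}")
```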

Conclusions:

These findings suggest that LLMs may infer latent meaning structures from narrative data and reproduce aggregate response patterns from an underlying distribution of human responses. LLMs may therefore serve as scalable tools for supporting qualitative coding of ordinal data in healthcare settings when combined with clear processing instructions, structured outputs, and human verification.


Citation

Please cite as:

Pierce B, Dong T, Barry E, Biggs A

From Narrative to Numbers: Evaluating Large Language Models for Ordinal Qualitative Coding of Likert-Type Responses

JMIR Preprints. 16/04/2026:98538

DOI: 10.2196/preprints.98538

URL: https://preprints.jmir.org/preprint/98538


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.