
Accepted for/Published in: JMIR AI

Date Submitted: Jul 23, 2025
Date Accepted: Jan 26, 2026

The final, peer-reviewed published version of this preprint can be found here:

AI-Assisted Rapid Quality Analysis in Implementation Science: Methodological Study

Adegbemijo A, Maw A, Trinkley KE, Varghese AA, Tulk Jesso S

JMIR AI 2026;5:e81149

DOI: 10.2196/81149

PMID: 41941726

AI-assisted Rapid Qualitative Analysis for Implementation Science: Shifting from IF to HOW

  • Adeola Adegbemijo; 
  • Anna Maw; 
  • Katy E. Trinkley; 
  • Amoolya A. Varghese; 
  • Stephanie Tulk Jesso

ABSTRACT

Background:

Translating evidence-based therapies from "bench to bedside" remains challenging, and implementation science (IS) experts are crucial to this process. Qualitative analyses are essential but require extensive time and cost for manual coding. Many now turn to artificial intelligence (AI) to accelerate the pace of qualitative analysis, but significant questions remain about the quality, validity, and ethics of applying large language models (LLMs) such as ChatGPT to qualitative data. To address these concerns, we developed a method for AI-assisted rapid qualitative analysis.

Objective:

We developed AI-Assisted Rapid Qualitative Analysis for Implementation Science (AI-RQA) using open-source, encoder-based small language models (SLMs) to aid IS experts. SLMs hold advantages over LLMs in processing efficiency, customizable training, and local maintenance of data that never goes “to the cloud”. We focus on two efficient, high-performing SLMs: DistilBERT and ELECTRA. Our objectives were to assess these models’ accuracy in reproducing expert coding, their generalizability to new coding scenarios, and their accessibility to non-technical experts through user-friendly tools.

Methods:

Two previously coded IS datasets were used to train DistilBERT and ELECTRA models. These datasets were coded by IS experts using a mixed deductive and inductive approach, with initial categories derived from the domains of an IS framework, PRISM. We fine-tuned and evaluated DistilBERT and ELECTRA on these datasets, measuring performance with the area under the precision-recall curve (AUPR) and Cohen’s kappa. To facilitate use by non-programmers, we then developed an open-source Python package (pytranscripts) to streamline transcript processing, model classification, and evaluation. Additionally, a companion Streamlit web application allows users to upload interview transcripts and obtain automated coding and analytics without any coding expertise.
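The evaluation described above can be sketched in a few lines. This is a minimal illustration of the two reported metrics (Cohen’s kappa and AUPR) using scikit-learn on toy data; the label vectors below are hypothetical, not study results, and this is not the pytranscripts API.

```python
# Illustrative sketch: comparing model code assignments against expert
# coding for a single (hypothetical) PRISM code across 8 excerpts.
from sklearn.metrics import average_precision_score, cohen_kappa_score

expert = [1, 0, 1, 1, 0, 0, 1, 0]   # expert coder's binary assignments
model = [1, 0, 1, 0, 0, 0, 1, 0]    # model's hard predictions
scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.2]  # model probabilities

# Chance-corrected agreement between expert and model labels.
kappa = cohen_kappa_score(expert, model)
# Area under the precision-recall curve from the soft scores.
aupr = average_precision_score(expert, scores)

print(f"Cohen's kappa: {kappa:.2f}")  # 0.75 on this toy data
print(f"AUPR: {aupr:.2f}")            # 1.00 on this toy data
```

In a multi-label setting like the one described, these metrics would typically be computed per code and then averaged.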

Results:

Our findings demonstrate that SLMs can significantly accelerate qualitative analysis while maintaining high accuracy and agreement with human annotators, though results are not universal and depend on how researchers approach qualitative coding. On the original dataset, DistilBERT achieved near-perfect agreement with human coders (Cohen’s kappa = 0.95), while ELECTRA showed substantial agreement (Cohen’s kappa = 0.71). However, both models’ performance declined on the second, more ambiguous dataset, with DistilBERT’s Cohen’s kappa dropping to 0.48 and ELECTRA’s to 0.39. Two primary drivers of the performance drop appear to be the number of codes applied to the dataset and whether coders apply multiple codes to each piece of data or constrain themselves to one.

Conclusions:

This work demonstrates that SLMs can meaningfully assist qualitative researchers with coding tasks as long as attention is paid to how experts code data that will train the SLM. This can be especially valuable in settings where deploying LLMs is impractical or undesirable.




© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.