
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Sep 6, 2023
Date Accepted: Mar 28, 2024

The final, peer-reviewed published version of this preprint can be found here:

Using Large Language Models to Support Content Analysis: A Case Study of ChatGPT for Adverse Event Detection

Leas EC, Ayers JW, Desai N, Dredze M, Hogarth M, Smith DM

J Med Internet Res 2024;26:e52499

DOI: 10.2196/52499

PMID: 38696245

PMCID: 11099800

Using Large Language Models to Support Content Analysis: A Case Study of ChatGPT for Adverse Event Detection

  • Eric C. Leas
  • John W. Ayers
  • Nimit Desai
  • Mark Dredze
  • Michael Hogarth
  • Davey M. Smith

ABSTRACT

In a proof-of-concept study, we sought to assess whether ChatGPT could accurately perform a biomedical labeling task that had been previously performed by 5 human annotators. ChatGPT was given annotation instructions for finding adverse events (AEs) that were identical to those given to the human annotators and was asked to analyze the same dataset of 10,000 posts to a forum discussing the use of a cannabis product called delta-8-THC. ChatGPT replicated the human annotations with high accuracy for both general adverse event reports (97.7% agreement; Kappa = 0.95) and instances of serious adverse event reports (e.g., a life-threatening event or hospitalization; 98.0% agreement; Kappa = 0.95). Given this accuracy, we extended ChatGPT's capabilities to annotate the entire archive of relevant data (N=76,247 posts), resulting in the discovery of substantially more potential adverse events and providing insights into the trends and patterns of AE reports over an extended period that would have required substantial effort from human annotators. The results hold promise for the use of ChatGPT for biomedical text analysis in a manner that extends the capabilities of human annotators and offers advantages over traditional natural language text classification techniques.
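The agreement statistics reported above (percent agreement and Cohen's kappa) can be reproduced from two annotators' binary labels with the standard kappa formula. This is a generic sketch for illustration, not the authors' code; the function name and inputs are hypothetical.

```python
from collections import Counter

def agreement_and_kappa(labels_a, labels_b):
    """Return (percent agreement, Cohen's kappa) for two label sequences.

    labels_a, labels_b: equal-length sequences of labels (e.g., 0/1 for
    "no AE" / "AE") from two annotators, such as a human and an LLM.
    """
    assert len(labels_a) == len(labels_b) and len(labels_a) > 0
    n = len(labels_a)
    # Observed agreement: fraction of items where the two annotators agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap given each annotator's marginal rates.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / n**2
    # Kappa corrects observed agreement for agreement expected by chance.
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa
```

For example, labels `[1, 1, 0, 0]` versus `[1, 0, 0, 0]` give 75% agreement but a kappa of only 0.5, because much of that agreement is expected by chance when one label dominates.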



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.