
Accepted for/Published in: JMIR AI

Date Submitted: Aug 6, 2025
Open Peer Review Period: Aug 6, 2025 - Oct 1, 2025
Date Accepted: Jan 18, 2026

The final, peer-reviewed published version of this preprint can be found here:

AI-Generated Images of Substance Use and Recovery: Mixed Methods Case Study

Heley K, Hom JK, Laestadius L

JMIR AI 2026;5:e81977

DOI: 10.2196/81977

PMID: 41712852

PMCID: 12919905

Stigma by Default? A Mixed Methods Case Study of AI-Generated Images of Substance Use and Recovery

  • Kathryn Heley; 
  • Jeffrey K. Hom; 
  • Linnea Laestadius

ABSTRACT

Background:

Images created with generative artificial intelligence tools are increasingly used for health communication due to their ease of use, speed, accessibility, and low cost. However, AI-generated images may bring practical and ethical risks to health practitioners and the public, including through the perpetuation of stigma against vulnerable and historically marginalized groups.

Objective:

To understand the potential value of AI-generated images for health care and public health communication, we analyzed images of substance use disorder (SUD) and recovery generated with ChatGPT. Specifically, we investigated (1) the default visual outputs produced in response to a range of prompts about SUD and recovery, and (2) the extent to which prompt modification and guideline-informed prompting could mitigate potentially stigmatizing imagery.

Methods:

We performed a mixed-methods case study examining depictions of substance use and recovery in images generated by ChatGPT-4o. We generated images (n=84) using (1) prompts with colloquial and stigmatizing language, (2) prompts that follow best practices for person-first language, (3) image prompts written by ChatGPT, and (4) a custom GPT informed by guidelines for images of substance use disorder (SUD). We then used a mixed-methods approach to analyze the images for demographics and stigmatizing elements.

Results:

Images produced by the default ChatGPT model featured primarily White men (86%). Further, images tended to be stigmatizing, featuring injection drug use, dark colors, and symbolic elements such as chains. These trends persisted even when person-first language prompts were used. Images informed by guidelines were markedly less stigmatizing; however, they featured almost exclusively Black women (74%).

Conclusions:

Our findings confirm prior research on stigma and biases in AI-generated images and extend this literature to substance use. However, our findings also suggest that (1) images can be improved when clear guidelines are provided and (2) even with guidelines, iteration is needed to create an image that fully conforms to best practices.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.