Accepted for/Published in: JMIR Human Factors

Date Submitted: Dec 9, 2024
Date Accepted: May 12, 2025

The final, peer-reviewed published version of this preprint can be found here:

Exploring Mental Health Content Moderation and Well-Being Tools on Social Media Platforms: Walkthrough Analysis

Haime Z, Biddle L

Exploring Mental Health Content Moderation and Well-Being Tools on Social Media Platforms: Walkthrough Analysis

JMIR Hum Factors 2025;12:e69817

DOI: 10.2196/69817

PMID: 40440699

PMCID: 12163353

Warning: This is an author submission that is not peer-reviewed or edited. Preprints, unless they show as "accepted", should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Exploring Mental Health Content Moderation and Wellbeing Tools on Social Media Platforms: A Walkthrough Analysis.

  • Zoë Haime; 
  • Lucy Biddle

ABSTRACT

Background:

Social networking site (SNS) users may experience mental health difficulties themselves or engage with mental health-related content on these platforms. Whilst SNSs employ moderation systems and user tools to limit the availability of harmful content, concerns persist regarding the implementation and effectiveness of these systems.

Objective:

This study aimed to use an ethnographic walkthrough method to critically evaluate four SNSs (Instagram, TikTok, Tumblr, and Tellmi), focusing on their mental health content moderation and safety and wellbeing resources.

Methods:

Researchers completed walkthrough checklists for each of the SNS platforms, and then used thematic analysis to interpret the data.

Results:

Findings highlighted both successes and challenges in balancing user safety and content moderation across platforms. Whilst varied mental health resources were available on platforms, several issues emerged, including redundancy of information, broken links, and a lack of non-US-centric resources. Additionally, despite the presence of several self-moderation tool options, there was insufficient evidence of user education and testing around these features, potentially limiting their effectiveness. Platforms also faced difficulties addressing harmful mental health content due to unclear language around what was allowed or disallowed. This was especially evident in the management of mental health-related terminology, where the emergence of algospeak highlighted how easily users bypass platform censorship. Further, platforms did not detail support for reporters or reportees of mental health-related content, leaving users vulnerable.

Conclusions:

Our study resulted in the production of preliminary recommendations for platforms regarding potential mental health content moderation and wellbeing procedures and tools. We also emphasised the need for more inclusive user-centred design, feedback, and research to improve SNS safety and moderation features.



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.