Accepted for/Published in: JMIR Human Factors
Date Submitted: Dec 9, 2024
Date Accepted: May 12, 2025
Exploring Mental Health Content Moderation and Wellbeing Tools on Social Media Platforms: A Walkthrough Analysis
ABSTRACT
Background:
Social networking site (SNS) users may experience mental health difficulties themselves or engage with mental health-related content on these platforms. Whilst SNSs employ moderation systems and user tools to limit the availability of harmful content, concerns persist regarding the implementation and effectiveness of these systems.
Objective:
This study aimed to use an ethnographic walkthrough method to critically evaluate four SNSs (Instagram, TikTok, Tumblr, and Tellmi), focusing on their mental health content moderation and their safety and wellbeing resources.
Methods:
Researchers completed walkthrough checklists for each of the four SNS platforms and then used thematic analysis to interpret the data.
Results:
Findings highlighted both successes and challenges in balancing user safety and content moderation across platforms. Whilst varied mental health resources were available on the platforms, several issues emerged, including redundancy of information, broken links, and a lack of non-US-centric resources. Additionally, despite the presence of several self-moderation tool options, there was insufficient evidence of user education and testing around these features, potentially limiting their effectiveness. Platforms also faced difficulties addressing harmful mental health content due to unclear language around what was allowed or disallowed. This was especially evident in the management of mental health-related terminology, where the emergence of algospeak highlighted how easily users bypass platform censorship. Further, platforms did not detail support for reporters or reportees of mental health-related content, leaving users vulnerable.
Conclusions:
Our study produced preliminary recommendations for platforms regarding potential mental health content moderation and wellbeing procedures and tools. We also emphasised the need for more inclusive user-centred design, feedback, and research to improve SNS safety and moderation features.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.