
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Mar 23, 2025
Open Peer Review Period: Mar 24, 2025 - May 19, 2025
Date Accepted: Nov 24, 2025

The final, peer-reviewed published version of this preprint can be found here:

Predictors of Professional Responses in Nonprofit Mental Health Forums: Interpretable Machine Learning Analysis

Geng S, Li Y, Wang J, Chen P, Wu X, Zhang Z

J Med Internet Res 2026;28:e74359

DOI: 10.2196/74359

PMID: 41490001

PMCID: 12817036

Understanding Response Patterns in Nonprofit Mental Health Forums Through Interpretable Machine Learning

  • Shuang Geng; 
  • Yanghui Li; 
  • Jie Wang; 
  • Peixuan Chen; 
  • Xusheng Wu; 
  • Zhiqun Zhang

ABSTRACT

Background:

Online mental health communities can increase service accessibility and equity for patients seeking psychological assistance and therapeutic interventions. The increasing demands and contributions of users constitute pivotal elements underpinning the development of these communities. Although prior studies have examined the factors influencing physicians’ contribution behaviors in online consultation platforms, limited attention has been given to how various post characteristics affect the quantity and length of professional responses in nonprofit mental health communities.

Objective:

This study aims to examine how various textual (i.e., topic, sentiment, title length, and content length) and contextual (i.e., page views and posting time) characteristics of inquiries in nonprofit mental health forums influence the quantity and length of responses from mental health professionals, thereby providing insights for enhancing the effectiveness of community interactions.

Methods:

We collected 18,572 question-and-answer (Q&A) records from a Chinese online mental health platform, covering August 2024 to July 2025. Topic features were extracted using BERTopic, and sentiment features were obtained using a DistilBERT-based sentiment classification model. Additional features were derived from post metadata. We compared five machine learning models and identified LightGBM as the best performer, then applied Shapley Additive Explanations (SHAP) analysis to it to evaluate feature contributions to the prediction of response quantity and length.

Results:

In virtual mental health communities, user inquiries fall into seven topic categories: work, love, depression, boyfriends or girlfriends, school, marriage, and family. Depression-related topics negatively predict response quantity, whereas topics concerning relationships, school, marriage, and family are positively correlated with it. SHAP analysis revealed that page views (SHAP value=0.187) and title length (SHAP value=0.073) are the key predictors of response quantity, while content length (SHAP value=0.274), sentiment category (SHAP value=0.054), and title length (SHAP value=0.053) are the key predictors of response length. Posts expressing negative emotions are positively related to both the predicted quantity and length of responses, and this effect strengthens as emotional intensity increases. Titles of 15–20 characters and content longer than 60 characters are positively correlated with responses, whereas titles shorter than 7 characters have negative effects. Higher view counts and weekday posting also increase the likelihood of receiving responses.

Conclusions:

This study provides important insights into how the textual and contextual features of patient posts influence the quantity and length of professional responses, deepening the understanding of voluntary knowledge contribution behaviors in online mental health communities. The findings offer practical guidance for community administrators in optimizing platform design and for patients in writing posts that are more likely to receive responses. Because this study focuses solely on response quantity and length, future research should examine the content of professional responses in more detail, for example by developing a comprehensive measure of response quality.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC-BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.