
Accepted for/Published in: JMIR mHealth and uHealth

Date Submitted: Feb 6, 2021
Date Accepted: Sep 3, 2021

The final, peer-reviewed published version of this preprint can be found here:

Text Message Analysis Using Machine Learning to Assess Predictors of Engagement With Mobile Health Chronic Disease Prevention Programs: Content Analysis

Klimis H, Nothman J, Lu D, Sun C, Cheung NW, Redfern J, Thiagalingam A, Chow CK


JMIR Mhealth Uhealth 2021;9(11):e27779

DOI: 10.2196/27779

PMID: 34757324

PMCID: 8663456

Text Message Analysis Using Machine Learning to Assess Predictors of Engagement With Mobile Health Chronic Disease Prevention Programs: Content Analysis

  • Harry Klimis; 
  • Joel Nothman; 
  • Di Lu; 
  • Chao Sun; 
  • N Wah Cheung; 
  • Julie Redfern; 
  • Aravinda Thiagalingam; 
  • Clara K Chow

ABSTRACT

Background:

Text messages, as a form of mobile health (mHealth), are increasingly being used to support individuals with chronic disease in novel ways that leverage the mobility and capabilities of mobile phones. However, there are knowledge gaps in mHealth, including how to maximise engagement.

Objective:

The aims were (1) to develop machine learning (ML) models to categorise program text messages and participant replies, and (2) to examine whether message characteristics were associated with (a) premature program stopping and (b) engagement.

Methods:

We assessed communication logs from text message-based chronic disease prevention studies that encouraged unidirectional (SupportMe/ITM) and bidirectional (TEXTMEDS) communication. Outgoing messages were manually categorised into five message intents (informative, instructional, motivational, supportive, and notification) and replies into seven groups (stop, thanks, questions, reporting healthy, reporting struggle, general comment, and other). Grid search with 10-fold cross-validation was implemented to identify the best-performing ML models, which were then evaluated using nested cross-validation. Regression models with interaction terms were used to compare the association of message intent with (a) premature program stopping and (b) engagement (replied at least three times and did not prematurely stop) in SupportMe/ITM and TEXTMEDS.
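The model selection procedure described above (grid search inside nested cross-validation, scored on balanced accuracy) can be sketched with scikit-learn. This is an illustrative reconstruction, not the authors' code: the toy messages, the TF-IDF plus logistic regression pipeline, and the parameter grid are all assumptions, and the fold counts are shrunk so the example runs on a handful of samples (the study used 10-fold cross-validation on five intent classes).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# Toy stand-ins for outgoing program messages and their manually assigned intents
# (invented for illustration; two classes instead of the study's five).
texts = [
    "Walking 30 minutes a day lowers your blood pressure",
    "Fruit and vegetables reduce your risk of heart disease",
    "Regular exercise improves your cholesterol levels",
    "Quitting smoking cuts your heart attack risk in half",
    "A low-salt diet helps control blood pressure",
    "Good sleep supports a healthy heart",
    "Reminder: your clinic appointment is tomorrow at 9am",
    "Reply STOP at any time to opt out of messages",
    "Your weekly summary is ready to view",
    "This program ends next week",
    "Reminder: please reply to confirm your details",
    "Messages will pause over the holiday period",
]
labels = ["informative"] * 6 + ["notification"] * 6

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
param_grid = {"clf__C": [0.1, 1.0, 10.0]}

# Inner loop: grid search picks hyperparameters; outer loop: balanced-accuracy
# estimate on folds the grid search never saw (the "nested" evaluation).
inner_cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(pipe, param_grid, cv=inner_cv, scoring="balanced_accuracy")
outer_scores = cross_val_score(search, texts, labels, cv=outer_cv,
                               scoring="balanced_accuracy")
print(outer_scores)  # one balanced-accuracy estimate per outer fold
```

The nesting matters because scoring the grid search on the same folds used to tune it would overstate accuracy; the outer folds supply the unbiased estimate that the abstract's balanced-accuracy figures correspond to.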

Results:

We analysed a total of 1,550 messages and 4,071 participant replies. Of 2,642 participants, 145 (5.5%) responded ‘stop’ and 309 (11.7%) were engaged. Our optimal ML model correctly classified outgoing program message intent with 76.6% (95% CI 63.5–89.8) balanced accuracy and replies with 77.8% (95% CI 74.1–81.4) balanced accuracy. Overall, “supportive” messages (OR 0.53; 95% CI 0.35-0.81) were associated with a reduced chance of stopping, as were “informative” messages in SupportMe/ITM (OR 0.35; 95% CI 0.20-0.60) but not in TEXTMEDS (P for interaction <0.001). “Notification” messages were associated with a higher chance of stopping in SupportMe/ITM (OR 5.76; 95% CI 3.66-9.06) but not in TEXTMEDS (P for interaction=0.01). Overall, “informative” (OR 1.76; 95% CI 1.46-2.12) and “instructional” (OR 1.47; 95% CI 1.21-1.80) messages were associated with higher engagement, whereas “motivational” messages were not (P=0.37). For “supportive” messages, the association with engagement ran in opposite directions in SupportMe/ITM (OR 1.77; 95% CI 1.21-2.58) and TEXTMEDS (OR 0.77; 95% CI 0.60-0.98) (P for interaction <0.001). “Notification” messages were associated with reduced engagement in both SupportMe/ITM (OR 0.07; 95% CI 0.05-0.10) and TEXTMEDS (OR 0.28; 95% CI 0.20-0.39), but the association was stronger in SupportMe/ITM (P for interaction <0.001).
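The odds ratios and interaction P values above come from regression models with interaction terms, where the interaction coefficient tests whether a message intent's effect differs between the two programs. The following sketch shows the general technique on simulated data; statsmodels, the variable names, and the coefficients are all assumptions for illustration, not the study's actual data or software.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cohort standing in for the study data: 'program' distinguishes
# TEXTMEDS (1) from SupportMe/ITM (0), 'notification' flags exposure to
# notification-intent messages, 'stopped' is premature program stopping.
# Coefficients below are invented for illustration only.
rng = np.random.default_rng(0)
n = 2000
program = rng.integers(0, 2, n)
notification = rng.integers(0, 2, n)
logit_p = -2.5 + 1.6 * notification - 1.4 * notification * program
p = 1.0 / (1.0 + np.exp(-logit_p))
stopped = rng.binomial(1, p)
df = pd.DataFrame({"program": program,
                   "notification": notification,
                   "stopped": stopped})

# Logistic regression with an interaction term: 'notification * program'
# expands to main effects plus the notification:program interaction, whose
# coefficient tests whether the notification effect differs by program.
fit = smf.logit("stopped ~ notification * program", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)  # exponentiated coefficients are ORs
print(odds_ratios)
```

Exponentiating the fitted coefficients yields odds ratios like those quoted in the abstract; a significant `notification:program` term corresponds to a small "P for interaction", indicating the effect depends on program type.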

Conclusions:

The ML models enable monitoring and detailed characterisation of program messages and participant replies. Outgoing message intent may influence premature program stopping and engagement, although the strength and direction of association appears to vary by program type. Future studies will need to examine whether modifying message characteristics can optimise engagement, and whether this leads to behaviour change.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.