
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Feb 26, 2025
Open Peer Review Period: Feb 27, 2025 - Apr 24, 2025
Date Accepted: Aug 6, 2025

The final, peer-reviewed published version of this preprint can be found here:

Quality of Cancer-Related Information on New Media (2014-2023): Systematic Review and Meta-Analysis

Liu XJ, Valdez D, Parker MA, Mai A, Walsh-Buhi E

J Med Internet Res 2025;27:e73185

DOI: 10.2196/73185

PMID: 41061257

PMCID: 12547337

Quality of Cancer-Related Information on New Media: Systematic Review and Meta-Analysis (2014–2023)

  • Xue-Jing Liu; 
  • Danny Valdez; 
  • Maria A Parker; 
  • Andi Mai; 
  • Eric Walsh-Buhi

ABSTRACT

Background:

New media have become vital sources of cancer-related health information, offering patients, caregivers, and the public a platform to share knowledge and experiences. However, concerns about the quality of that information persist.

Objective:

This study aims to identify the characteristics of studies evaluating cancer-related information on new media (including social media and AI chatbots), analyze patterns in information quality across platforms, cancer types, and evaluation tools, and synthesize the overall quality levels of that information.

Methods:

We systematically searched the PubMed, Web of Science, Scopus, and MEDLINE databases for peer-reviewed studies published in English between 2014 and 2023 that evaluated the quality of cancer-related information on social media and AI chatbots. The validity of the included studies was assessed for risk of bias, reporting quality, and ethical approval using the JBI Critical Appraisal checklist and the STROBE checklist. Features of platforms, cancer types, evaluation tools, and trends were summarized. Ordinal logistic regression was used to estimate associations between the conclusions of quality assessments and media type, cancer type, and factors related to the search, rating, and reporting processes. A random-effects meta-analysis of proportions was conducted to synthesize the overall level of information quality and the corresponding 95% confidence interval for each assessment indicator.
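
A random-effects meta-analysis of proportions of the kind described above can be sketched as follows. This is a minimal DerSimonian-Laird illustration in Python, not the authors' analysis code; the `pooled_proportion` helper and the use of raw, untransformed proportions are assumptions for illustration (published meta-analyses often apply a logit or Freeman-Tukey transformation first).

```python
import numpy as np

def pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooled proportion with a 95% CI.

    events, totals: per-study counts. Study proportions must be strictly
    between 0 and 1, since the normal-approximation variance is used.
    Returns (pooled, lower, upper) on the 0-1 scale.
    """
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    v = p * (1 - p) / totals            # within-study variance (normal approx.)
    w = 1.0 / v                         # fixed-effect (inverse-variance) weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)  # Cochran's Q heterogeneity statistic
    k = len(p)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # DL estimate of between-study variance
    w_re = 1.0 / (v + tau2)             # random-effects weights
    pooled = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    z = 1.96                            # standard normal quantile for a 95% CI
    return pooled, pooled - z * se, pooled + z * se
```

The between-study variance tau² widens the interval when studies disagree more than sampling error alone would predict, which is why pooled estimates such as those reported below can carry wide confidence intervals.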

Results:

A total of 75 studies were included, encompassing 297,519 posts related to 17 cancer types across 15 media platforms. Of these, 20 studies focused on text-based social media (e.g., Twitter), 51 on video-based social media (e.g., YouTube, TikTok), and 4 on AI chatbot platforms (e.g., ChatGPT). Studies focusing on video-based media (OR = 0.02, 95% CI: 0.01-0.12), rare cancers (OR = 0.32, 95% CI: 0.16-0.65), and combined cancer types (OR = 0.04, 95% CI: 0.01-0.14) were significantly less likely to yield higher-quality conclusions than those on text-based media and common cancers. The pooled estimates indicated moderate overall quality (DISCERN=43.58, 95% CI: 37.80-49.35; Global Quality Score, GQS=49.91, 95% CI: 43.31-56.50), moderate technical quality (Journal of the American Medical Association Benchmark Criteria, JAMA-BC=46.13, 95% CI: 38.87-53.39; Health on the Net Foundation Code of Conduct, HONcode=49.68, 95% CI: 19.68-79.68), moderate-high understandability (Patient Education Material Assessment Tool for Understandability, PEMAT-U=66.92, 95% CI: 59.86-73.99), moderate-low actionability (PEMAT-A=37.24, 95% CI: 18.08-58.68; usefulness=48.86, 95% CI: 26.24-71.48), and moderate-low completeness (34.22, 95% CI: 27.96-40.48). Furthermore, 27.15% (95% CI: 21.36-33.35) of posts contained misinformation, 21.15% (95% CI: 8.96-36.50) contained harmful information, and 12.46% (95% CI: 7.52-17.39) contained commercial bias.

Conclusions:

Meta-analysis results revealed substantial statistical heterogeneity and evidence of publication bias in studies assessing misinformation. The overall quality of cancer-related information on social media and AI chatbots was moderate, with relatively higher scores for understandability but lower scores for actionability and completeness. A notable proportion of content contained misleading, harmful, or commercially biased information, posing potential risks to users. To support informed decision-making in cancer care, it is essential to improve the quality of information delivered through these media platforms.

Trial Registration: PROSPERO CRD420251058032; https://www.crd.york.ac.uk/PROSPERO/view/CRD420251058032




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.