Accepted for/Published in: JMIR Formative Research
Date Submitted: Dec 22, 2023
Date Accepted: Mar 14, 2024
Date Submitted to PubMed: Mar 19, 2024
Assessing the Accuracy of Generative Conversational Artificial Intelligence in Debunking Sleep Health Myths: Mixed-Methods Comparative Study with Expert Analysis
ABSTRACT
Background:
Adequate sleep is essential for maintaining both individual and public health, positively affecting cognition and well-being, and reducing chronic disease risks. It also plays a significant role in the economy, public safety, and the management of healthcare costs. Digital tools, including websites, sleep trackers, and apps, are key in promoting sleep health education. Conversational Artificial Intelligence (AI) tools such as ChatGPT offer accessible, personalized advice on sleep health but raise concerns about potential misinformation. Given the significant impact of sleep on individual and public health, and the widespread circulation of sleep-related myths, ensuring the accuracy of AI-driven sleep health information is essential.
Objective:
The study aims to examine ChatGPT’s capability to debunk sleep-related false beliefs.
Methods:
ChatGPT was asked to categorize 20 sleep-related myths identified by 10 sleep experts and to rate their falseness and public health significance on a 5-point Likert scale. Sensitivity, positive predictive value, and inter-rater agreement were also calculated.
Results:
ChatGPT labeled a significant portion (85%, n=17) of the statements as "false" (45%, n=9) or "generally false" (40%, n=8), with varying accuracy across different domains. For instance, it correctly identified most myths about "sleep timing", "sleep duration", and "behaviors during sleep", while it had varying degrees of success with other categories such as "pre-sleep behaviors" and "brain function and sleep". ChatGPT's assessments of the degree of falseness and of public health significance, on the 5-point Likert scale, showed average scores of 3.45 (SD=0.85) and 3.15 (SD=0.96), respectively, indicating a good level of accuracy in identifying the falseness of statements and a good understanding of their impact on public health. The AI-based tool showed a sensitivity of 85% and a perfect positive predictive value of 100%. Overall, this indicates that when ChatGPT labels a statement as false, it is highly reliable, but it may miss some false statements. When compared with expert ratings, ChatGPT’s appraisals showed high intra-class correlation coefficients (ICCs) with expert opinions, suggesting that the AI’s ratings were generally aligned with expert views on the falseness (ICC=.83, P<.0001) and public health significance (ICC=.79, P=.001) of sleep-related myths.
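As an illustrative aside, the reported sensitivity and positive predictive value follow directly from the counts in the abstract: all 20 statements were expert-identified false myths, ChatGPT labeled 17 of them as "false" or "generally false", and none of those labels was incorrect. The sketch below is a minimal recomputation under those assumed counts, not the study's actual analysis code.

```python
# Illustrative recomputation of the reported metrics, assuming the counts
# stated in the abstract (20 false myths, 17 labeled as false by ChatGPT,
# every "false" label correct).
true_positives = 17   # myths ChatGPT labeled "false" or "generally false"
false_negatives = 3   # myths ChatGPT failed to label as false
false_positives = 0   # no true statement was mislabeled as false

# Sensitivity: share of actual false myths that were flagged as false
sensitivity = true_positives / (true_positives + false_negatives)

# Positive predictive value: share of "false" labels that were correct
ppv = true_positives / (true_positives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 85%
print(f"PPV: {ppv:.0%}")                  # 100%
```

This matches the interpretation given above: a "false" label from the model is highly reliable (PPV 100%), but a minority of false statements (3/20) go undetected.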
Conclusions:
ChatGPT-4 can accurately address sleep-related queries and debunk sleep-related myths, with a performance comparable to that of sleep experts. Given its limitations, the AI cannot fully replace expert opinion, especially in nuanced and complex fields such as sleep health, but it can be a valuable complement in the dissemination of updated information and the promotion of healthy behaviors.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.