JMIR Preprints
A preprint server for pre-publication/pre-peer-review preprints intended for community review as well as ahead-of-print (accepted) manuscripts
Journal Description
JMIR Preprints contains pre-publication/pre-peer-review preprints intended for community review (FAQ: What are Preprints?). For a list of all preprints under public review, click here. The NIH and other organizations and societies encourage investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work. JMIR Publications facilitates this by allowing its authors to expose submitted manuscripts on its preprint server with a simple checkbox when submitting an article; the preprint server is also open to non-JMIR authors.
Except for selected submissions to the JMIR family of journals (where the submitting author opted in to open peer review, and which are also displayed here for open peer review), there is no editor assigning peer reviewers.
Submissions are open for anybody to peer review. Once two peer-review reports of reasonable quality have been received, we will send these reports to the author and may offer transfer to a partner journal, which has its own editor or editorial board.
The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
If authors want the paper to be considered by or forwarded to specific journals only (e.g., JMIR, PLOS, PEERJ, BMJ Open, Nature Communications) after peer review, please specify this in the cover letter. Simply rank the journals, and we will offer the peer-reviewed manuscript to these editors in the order of your ranking.
If authors do NOT wish to have the preprint considered in a partner journal (or a specific journal), this should be noted in the cover letter.
JMIR Preprints accepts manuscripts at no cost and without any formatting requirements (although, if you intend the submission to be published eventually by a specific journal, it is advantageous to follow its instructions for authors). Authors may even take a WebCite snapshot of a blog post or "grey" online report. However, if the manuscript has already been peer-reviewed and formally published elsewhere, please do NOT submit it here (this is a preprint server, not a postprint server!).
JMIR Preprints is a preprint server and "manuscript marketplace" with manuscripts that are intended for community review. Great manuscripts may be snatched up by participating journals, which will make offers for publication. There are two pathways for manuscripts to appear here: 1) a submission to a JMIR or partner journal, where the author has checked the "open peer-review" checkbox, or 2) a direct submission to the preprint server.
For the latter, there is no editor assigning peer-reviewers, so authors are encouraged to nominate as many reviewers as possible and to set the setting to "open peer-review". Nominated peer-reviewers should be at arm's length. It will also help to tweet about your submission or to post it on your homepage.
For pathway 2, once a sufficient number of reviews has been received (and they are reasonably positive), the manuscript and peer-review reports may be transferred to a partner journal (e.g. JMIR, i-JMR, JMIR Res Protoc, or other journals from participating publishers), whose editor may offer formal publication if the peer-review reports are addressed. The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
For pathway 2, if authors do not wish to have the preprint considered by a partner journal (or a specific journal), this should be noted in the cover letter. Likewise, if you want the paper to be considered by or forwarded to specific journals only (e.g., JMIR, PLOS, PEERJ, BMJ Open, Nature Communications), please specify this in the cover letter.
Manuscripts can be in any format; however, an abstract is required in all cases. We highly recommend formatting the references in JMIR format (including a PMID), as our system will then automatically assign reviewers based on the references.
Background: Over the past decade, the proportion of the world's population aged 65 and above has grown rapidly, raising significant challenges such as social isolation and loneliness among this population. Assistive technologies have shown potential to enhance the quality of life of older adults by supporting their physical, cognitive, and communication abilities, among others. Research has shown that smart televisions (TVs) are user-friendly and commonly used among older adults. However, smart TVs have been underutilized as assistive technologies. Objective: The aim of the study is to explore the state of the art of utilizing smart TVs as assistive technologies for older adults to improve their communication and social lives. Methods: The search was conducted following the guidelines for performing a systematic literature review and covered six databases: IEEE, ACM, Google Scholar, ScienceDirect, Engineering Village, and Springer. A range of keywords was used in different combinations, including 'smart TV', 'older adults', 'elderly', 'communication', 'messaging', 'video call', and 'application'. A set of inclusion and exclusion criteria was defined prior to the search, and screening was performed by three researchers. We analyzed the included papers based on the aim of the review and the inclusion and exclusion criteria. None of the papers was subjected to quantitative synthesis due to the significant variations in the data measured. Results: After screening 2671 records from the abstract level to full text, 30 papers were identified as relevant studies demonstrating both direct and indirect impacts on the social lives of older adults through the use of smart TVs as assistive technology. Some papers were part of the same or a larger study, which makes the number of actual projects even smaller. This indicates that smart TVs have been little utilized as assistive technologies for enhancing older adults' communication and social lives. Most papers proposed their own prototype, and these prototypes were mostly targeted for use at home, while some targeted geriatric care units or nursing homes. User involvement of older adults was high in the included papers, and some studies also included other users such as healthcare personnel, administrative staff, and engineers. The included studies were mostly from Europe. Conclusions: This review highlights the potential of smart TVs as assistive technologies to enhance social connectivity among older adults but identifies several research gaps. Most studies focus on short-term usability and are geographically limited to Europe. Future research should include longitudinal studies, explore diverse cultural attitudes, and focus on adaptive solutions for various health conditions. We hope this review can inspire research on smart TVs as assistive technologies, enhancing social interactions and quality of life for older adults.
Background: Chemical ocular injuries are a major public health issue, causing eye damage from harmful chemicals and potentially leading to severe vision loss or blindness if not treated promptly and effectively. Although medical knowledge has advanced, accessing reliable and understandable information on these injuries remains challenging due to unverified online content and complex terminology. Artificial Intelligence (AI) tools like ChatGPT provide a promising solution by simplifying medical information and making it more accessible to the general public. Objective: This study aims to assess the use of ChatGPT in providing reliable, accurate, and accessible medical information on chemical ocular injuries. It evaluates the correctness, thematic accuracy, and coherence of ChatGPT’s responses compared to established medical guidelines and explores its potential
for patient education. Methods: Nine questions were entered into ChatGPT regarding various aspects of chemical ocular injuries, including definition, prevalence, etiology, prevention, symptoms, diagnosis, treatment, follow-up, and complications. The responses provided by ChatGPT were compared to the ICD-9 and ICD-10 guidelines for chemical (alkali and acid) injuries of the conjunctiva and cornea. The evaluation focused on correctness, thematic accuracy, and coherence to assess the accuracy of ChatGPT's answers. The inputs were categorized into three distinct groups, and statistical analyses, including Flesch-Kincaid readability tests, ANOVA, and trend analysis, were conducted to assess their readability, complexity, and trends. Results: ChatGPT provided accurate and coherent responses for most questions about chemical ocular injuries, demonstrating thematic relevance. However, the responses sometimes overlooked critical clinical details or guideline-specific elements, such as emphasizing the urgency of care, using precise classification systems, and addressing detailed diagnostic or management protocols. While the answers were generally valid, they occasionally included less relevant or overly generalized information, reducing their consistency with established medical guidelines. The average Flesch Reading Ease Score (FRES) was 33.84 ± 0.28, indicating a fairly challenging reading level, while the Flesch-Kincaid Grade Level (FKGL) averaged 14.21 ± 0.22, suitable for readers with college-level proficiency. Passive voice was used in 7.22% ± 0.66% of sentences, indicating moderate reliance. Statistical analysis showed no significant differences in FRES (p=.385), FKGL (p=.555), or passive sentence usage (p=.601) across categories, as determined by one-way ANOVA. Readability remained relatively constant across the three categories, as determined by trend analysis. Conclusions: ChatGPT shows strong potential in providing accurate and relevant information about chemical ocular injuries. However, its language complexity may limit accessibility for individuals with lower health literacy, and its responses sometimes miss critical aspects. Future improvements should focus on enhancing readability, increasing context-specific accuracy, and tailoring responses to individual needs and literacy levels. While ChatGPT can be a helpful tool for patients and healthcare professionals, it should not replace professional medical advice, as some responses might not match clinical practice or address the needs of patients with different levels of education.
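For readers who want to see how this kind of readability comparison can be run, here is a minimal sketch, assuming the textstat and scipy packages; the response texts are placeholders, not the study's ChatGPT outputs.

```python
# Sketch: Flesch Reading Ease / Flesch-Kincaid Grade Level per response,
# then a one-way ANOVA across three question categories.
# All texts below are illustrative placeholders, not study data.
import textstat
from scipy.stats import f_oneway

responses_by_category = {
    "definition_prevalence_etiology": [
        "Chemical ocular injuries occur when the eye is exposed to acids or alkalis.",
        "Alkali burns tend to penetrate deeper than acid burns and are often more severe.",
    ],
    "prevention_symptoms_diagnosis": [
        "Protective eyewear greatly reduces the risk of occupational chemical splashes.",
        "Symptoms include pain, redness, tearing, blurred vision, and photophobia.",
    ],
    "treatment_followup_complications": [
        "Immediate, copious irrigation with saline or water is the critical first step.",
        "Complications may include corneal scarring, limbal stem cell deficiency, and glaucoma.",
    ],
}

fres = {c: [textstat.flesch_reading_ease(t) for t in texts]
        for c, texts in responses_by_category.items()}
fkgl = {c: [textstat.flesch_kincaid_grade(t) for t in texts]
        for c, texts in responses_by_category.items()}

# One-way ANOVA: does mean readability differ across the three categories?
f_stat, p_value = f_oneway(*fres.values())
print(f"FRES by category: {fres}")
print(f"ANOVA on FRES: F = {f_stat:.2f}, p = {p_value:.3f}")
```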
Background: SPAN@DEM emerged from the recognition that existing cluster-level advocacy groups are inadequate to address the specific needs of the emergency department (ED). Moreover, the fast-paced, high-pressure nature of emergency medicine presents distinct challenges for patient advocacy. As the first ED-specific advocacy group in Singapore, SPAN@DEM represents a significant step forward in local patient advocacy efforts because it uses a shared collaborative model to address patient needs and concerns within the unique context of the ED environment. Objective: In this article, we aim to share our journey in setting up our patient advocacy group, discuss the challenges and considerations, and reflect on the lessons learnt throughout this process. Methods: A start-up committee comprising emergency physicians and patient advocates was formed to explore the processes required to create such an organisation. Important features of SPAN@DEM include co-leadership by an emergency physician and a patient advocate, and a diverse composition with equal representation from healthcare workers and advocates. SPAN@DEM conducts quarterly meetings with informal luncheons to foster open communication between advocates and healthcare staff. Membership is voluntary and based purely on altruism, and members must participate in mandatory advocacy training to empower them to provide more actionable insights. Results: SPAN@DEM has initiated several projects thus far, such as PIKACHU (a quality improvement project that led to improved patient and next-of-kin satisfaction rates and decreased formal communication-related complaints) and a Digital FAQ (a patient-friendly resource explaining ED processes), in addition to communication workshops for junior doctors and wayfinding projects. SPAN@DEM advocates have also actively contributed to the planning, design, and transition to the new Emergency Medicine Building. More importantly, SPAN@DEM has fostered a cultural shift towards patient-centric care within the department, which now works closely with patient advocates on day-to-day decisions concerning patient and next-of-kin experience. Conclusions: SPAN@DEM demonstrates the value of specialised department-specific advocacy groups in shaping the future of patient-centred emergency care. This model may serve as an exemplar for other healthcare institutions seeking to promote patient advocacy efforts. Clinical Trial: N/A
Background: The high and increasing rates of poor mental health in young people are a global concern. Experiencing poor mental health during this formative stage of life can adversely impact interpersonal relationships, academic and professional performance, and future health and well-being if not addressed early. Yet only a minority of those in need seek help. Research indicates that young people perceive digital mental health support as having many benefits compared to traditional face-to-face services. However, the effectiveness of self-guided digital mental health services is not well documented, and research on their cost-effectiveness is missing.
Mindhelper is Denmark's largest open-access, digital, self-guided mental health service for young people. While it does not provide direct psychological or therapeutic care, it offers practical strategies and tools to promote well-being and address a broad spectrum of mental health challenges, from everyday stress to more complex issues. Despite its widespread use, the effectiveness of Mindhelper has not been evaluated. Objective: This trial aims to evaluate the effectiveness of Mindhelper, building on the results of our feasibility study. We will assess Mindhelper's impact on mental health and well-being, psychological functioning, intentions of help-seeking, and body appreciation among 15-25-year-olds, and provide insights into the service's cost-effectiveness. Methods: A total of 4,910 15-25-year-olds will be recruited via social media and randomly allocated to an intervention group (receiving information about Mindhelper.dk) or a control group (no information about Mindhelper.dk). Outcomes are self-assessed, collected at baseline and 2, 6, and 12 weeks post-randomization through online surveys, and analyzed using the intention-to-treat approach. Qualitative interviews with intervention group participants will provide complementary insights, and a cost-effectiveness analysis will also be conducted. Results: Data have not yet been collected. Conclusions: This study will deliver crucial evidence on the effectiveness of self-guided digital mental health promotion targeting young people. If effective, this highly scalable service may contribute to combating the trend of rising mental health issues among young people and address key challenges in primary care by delivering timely, coordinated, and effective services to young individuals, potentially at a low cost. Clinical Trial: ClinicalTrials.gov NCT06385457; https://clinicaltrials.gov/ct2/show/NCT06385457
Background: Parkinson's Disease (PD) is the fastest-growing neurodegenerative disorder in the world, with prevalence expected to exceed 12 million by 2040, posing significant healthcare and societal challenges. Artificial intelligence (AI) systems and wearable sensors hold potential for PD diagnosis, personalized symptom monitoring, and progression prediction. Nonetheless, ethical AI adoption requires several core principles, including user trust, transparency, fairness, and human oversight. Objective: This study aimed to gather and analyze the perspectives of key stakeholders, including individuals with PD, healthcare professionals, AI technical experts, and bioethics experts, to inform the design of trustworthy AI-based digital solutions for PD diagnosis and management within the AI-PROGNOSIS European project. Methods: An exploratory qualitative approach, based on two datasets constructed from co-creation workshops, engaged key stakeholders with diverse expertise, ensuring a broad range of perspectives and enriching the thematic analysis. A total of 23 individuals participated in the co-creation workshops, including 11 people with PD, six healthcare professionals, three AI technical experts, one bioethics expert, and three facilitators. Using a semi-structured guide, the discussion centered on trust, fairness, explainability, autonomy, and the psychological impact of AI in PD care. Results: Thematic analysis of the co-creation workshop transcripts identified five main themes, each explored through corresponding subthemes. AI Trust and Security (Theme 1) focused on data safety and the accuracy and reliability of AI systems. AI Transparency and Education (Theme 2) emphasized the need for educational initiatives and the importance of transparency and explainability of AI technologies. AI Bias (Theme 3) addressed issues of bias and fairness and ensuring equitable access to AI-driven healthcare solutions. Human Oversight (Theme 4) stressed the significance of AI-human collaboration and the essential role of human review in AI processes. Lastly, AI Psychological Impact (Theme 5) examined the emotional impact of AI on patients and how AI is perceived in the context of PD care. Conclusions: Our findings underline the importance of implementing robust security measures, developing transparent and explainable AI models, reinforcing bias mitigation strategies and equitable access to treatment, integrating human oversight, and considering the psychological impact of AI-assisted healthcare. These insights provide actionable guidance for developing trustworthy and effective AI-driven digital solutions for PD diagnosis and management. Clinical Trial: N/A
Background: In recent years, digital technologies have shown promise for improving cognitive function after stroke, but their effectiveness and treatment options vary, the optimal treatment remains unclear, and the current evidence is somewhat contradictory. Objective: To evaluate the efficacy of various digital interventions in improving post-stroke cognitive function and provide evidence-based support for clinical decision-making. Methods: This study adhered to the PRISMA guidelines for systematic reviews. We searched the PubMed, Web of Science, Cochrane Library, Scopus, EMBASE, and CNKI databases from inception to January 2025. Outcomes included the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). Randomized controlled trials (RCTs) were included. Results: A total of 2,128 articles were retrieved, with 27 meeting the inclusion criteria. Compared to conventional rehabilitation or care, computer-assisted cognitive therapy (CACT) was significantly superior on MoCA scores (MD=2.36, 95% CI 1.45 to 3.27; SUCRA=83.2%), while cognitive training (CCT) showed no statistically significant difference (MD=0.28, 95% CI −0.81 to 1.37). For MMSE scores, robot-assisted therapy (RAT) ranked highest in efficacy (MD=5.77, 95% CI 3.47 to 8.08; SUCRA=99%), whereas both virtual reality (VR) (MD=0.69, 95% CI −0.93 to 2.31) and CCT (MD=0.78, 95% CI −1.19 to 2.75) showed no significant improvement. Conclusions: Digital therapies effectively improve cognitive function in post-stroke patients. CACT exhibited superior efficacy on the MoCA (which emphasizes executive function), while RAT ranked highest on the MMSE (which focuses on basic cognition), suggesting distinct domain-specific effects. However, caution is warranted due to heterogeneity, risk of bias, and limited sample sizes in the included studies. Future research should focus on optimizing intervention protocols, integrating neuroregulatory or traditional Chinese rehabilitation techniques, and exploring cost-effective strategies for clinical implementation. Clinical Trial: CRD420251006601
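SUCRA values like those reported above are derived from treatment rank probabilities estimated in the network meta-analysis. Here is a minimal sketch of that calculation, using an invented rank-probability matrix rather than the study's estimates.

```python
# Sketch: computing SUCRA values from a matrix of rank probabilities.
# The probabilities below are illustrative placeholders only.
import numpy as np

treatments = ["CACT", "RAT", "VR", "CCT", "usual care"]
# rank_probs[i, j] = probability that treatment i occupies rank j+1 (rank 1 = best)
rank_probs = np.array([
    [0.55, 0.25, 0.10, 0.07, 0.03],
    [0.30, 0.40, 0.15, 0.10, 0.05],
    [0.08, 0.17, 0.40, 0.20, 0.15],
    [0.05, 0.12, 0.20, 0.38, 0.25],
    [0.02, 0.06, 0.15, 0.25, 0.52],
])

a = rank_probs.shape[1]                      # number of treatments
cumulative = np.cumsum(rank_probs, axis=1)   # P(treatment is among the j best)
sucra = cumulative[:, :-1].sum(axis=1) / (a - 1)

for name, s in zip(treatments, sucra):
    print(f"{name}: SUCRA = {s:.1%}")
```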
Background: Suicide-related internet use encompasses various online behaviours, including searching for suicide methods, sharing suicidal thoughts, and seeking help. Research suggests that suicide-related internet use is prevalent among people experiencing suicidality, but its characteristics among mental health patients remain underexplored. Objective: This study examines the sociodemographic, clinical, and suicidality-related characteristics of suicidal mental health patients who engage in suicide-related internet use compared to those who do not. Methods: A cross-sectional survey was conducted from June to December 2023, recruiting participants aged 18 and older with recent contact with secondary mental health services in the UK. The survey assessed sociodemographic characteristics, psychiatric diagnoses, suicidal thoughts and behaviours, and engagement in suicide-related internet use. Statistical analyses included chi-square tests, Wilcoxon tests, and multivariable logistic regression to identify predictors of engaging in suicide-related internet use. Results: Of 696 participants, 75% had engaged in suicide-related internet use in the past 12 months. Those who engaged in suicide-related internet use were almost three times as likely to have attempted suicide in the past year (32.5% vs. 9.2%, p < .001). They were more likely to have a diagnosis of personality disorder (34.4% vs. 18.5%, p < .001) and to have disclosed suicidal thoughts to someone (87.8% vs. 72.8%, p < .001). They also reported higher levels of suicidal ideation intensity (median VAS score 6.6 vs. 5.1, p < .001). There were no significant sociodemographic differences between groups, including age. Conclusions: The findings suggest that suicide-related internet use is a common behaviour among suicidal mental health patients across various age groups, challenging the notion that it is primarily a concern for younger populations. The association between suicide-related internet use and increased suicidality highlights the need for clinicians to incorporate discussions about online behaviours into suicide risk assessments. Given the high rate of disclosure of suicidal thoughts among those who engage in suicide-related internet use, clinicians may have an opportunity to hold open, non-judgmental discussions about their patients' internet use.
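A minimal sketch of the kind of multivariable logistic regression described in the Methods, using statsmodels on simulated data; the variable names and coefficients are illustrative, not the study's survey items or results.

```python
# Sketch: multivariable logistic regression for predictors of
# suicide-related internet use (SRIU). Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
past_year_attempt = rng.binomial(1, 0.25, n)
personality_dx = rng.binomial(1, 0.30, n)
ideation_vas = rng.uniform(0, 10, n)
age = rng.integers(18, 80, n)

# True model used only to simulate an outcome for the sketch
logit = -2.0 + 1.2 * past_year_attempt + 0.8 * personality_dx + 0.25 * ideation_vas
p = 1 / (1 + np.exp(-logit))
sriu = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({
    "past_year_attempt": past_year_attempt,
    "personality_dx": personality_dx,
    "ideation_vas": ideation_vas,
    "age": age,
}))
fit = sm.Logit(sriu, X).fit(disp=0)
print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% CIs for the odds ratios
```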
Recent advancements in cognitive neuroscience and digital technology have significantly accelerated the adoption of digital therapeutics for cognitive impairment. This review provides a comprehensive overview of innovative applications of digital therapeutics in the assessment, intervention, management, and monitoring of cognitive disorders, while highlighting key challenges that impede their widespread integration into clinical practice. Drawing on the definition of cognitive digital therapeutics and the multi-stakeholder collaboration required for their development and implementation, this study examines the role of digital technologies in cognitive health and explores challenges from multiple perspectives, including clinical practice, policy frameworks, user adoption, ethics and privacy, and data interoperability and security. Additionally, the study offers strategic recommendations to promote the sustainable development and scalable implementation of cognitive digital therapeutics, particularly as artificial intelligence (AI) technologies continue to advance, addressing both current applications and emerging challenges.
Background: Open-door policies are recommended to reduce coercion in psychiatric wards, but evidence integrating patient perspectives on psychiatric facility openness and safety measures is limited. Traditional qualitative frameworks often lack the scope necessary to fully capture these views. We hypothesized that patients would express a preference for open-door policies and demonstrate an apprehensive stance toward closed-door wards, with an emphasis on autonomy and dignity in their care. Objective: To examine psychiatric patients’ perspectives on open-door policies and closed-ward treatment. Methods: This study utilized a hybrid questionnaire survey conducted at the University Psychiatric Clinics (UPK) Basel in September 2023, which examined psychiatric service utilization. Key factors from a meta-review, including ward relationships, environment, autonomy, legal status, coercion, perceived care entitlement, and expectations at admission and discharge, were included. A text mining approach using Latent Dirichlet Allocation (LDA), a Bayesian probability-based algorithm, was applied to identify latent topics in the textual data. Results: The final sample included 604 individuals, with a response rate of 19.1%. A significant majority (63.8%) rated open-door treatment as "very important" (10 out of 10 on a Likert scale). In contrast, only 21.0% of participants expressed willingness to accept treatment in locked wards, with 70.4% explicitly rejecting it. Topic modeling revealed key themes of institutional critiques, restriction, confinement, social disconnection, and autonomy as a central demand in patient sentiments regarding closed-ward treatment. Conclusions: Our study underscores the importance of open-door policies in psychiatric care from the patient perspective, highlighting autonomy, trust, and engagement in treatment. Aligning institutional practices with patient priorities may enhance satisfaction and treatment outcomes.
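A minimal sketch of LDA topic modeling on free-text survey responses with scikit-learn; the example answers are placeholders, not the UPK Basel data.

```python
# Sketch: Latent Dirichlet Allocation over free-text survey responses.
# The answers below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

answers = [
    "The locked ward felt like confinement and I lost contact with friends",
    "Open doors gave me autonomy and the staff trusted me",
    "I felt cut off from the outside world behind a closed door",
    "Being able to leave the ward made it easier to accept treatment",
]

vectorizer = CountVectorizer(stop_words="english", min_df=1)
dtm = vectorizer.fit_transform(answers)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words per latent topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```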
Background: In the context of a sharp rise in help-seeking in youth mental health, digital mental health interventions offer enormous potential to improve outcomes, facilitate access, and meet the increasing demand on mental health services. For young adults attending third-level education, for example, digital mental health interventions may support help-seeking students while waiting to attend student counselling, or help sustain gains once a brief course of face-to-face counselling sessions has been completed. Objective: This trial investigated the feasibility of using a Moderated Online Social Therapy (MOST) intervention, comprising tailored mental health content and therapist, peer, and online community support, in third-level students who recently attended a student counselling service. Methods: We conducted a pilot randomised controlled study of third-level students who had recently completed ~4 sessions of counselling. Students were randomly assigned to the intervention or control arm at a ratio of 2:1. In the intervention arm, students had access to MOST for 26 weeks; both groups were assessed at baseline, 12 weeks, and 26 weeks, with effect sizes calculated between groups. Results: A total of 74 participants were recruited, meeting the recruitment target of ~3.1 participants per semester month. Retention in the trial was 70.3% at 12 weeks, reducing to 66.2% at 26 weeks. When engagement was measured as participation in at least one component of the intervention, 80.9% of the intervention group engaged for 5 or more weeks of the trial (~20% of the maximum 26 weeks). Based on the effect sizes observed, the intervention arm was associated with modest gains in social function and cognitive function, and reduced clinical symptom severity at 12 weeks. Conclusions: Based on the recruitment, retention, and engagement rates observed, a full randomised controlled trial of MOST with young adults is feasible. Moreover, the effect sizes favouring the intervention arm are consistent with previous studies and support a full trial of MOST as a potentially beneficial support for youth mental health in further education settings.
Background: Hydronephrosis is a condition characterized by the swelling of one or both kidneys due to urine buildup, often resulting from an obstruction in the urinary tract. In pediatric patients, grading and assessing the severity of hydronephrosis are crucial for determining appropriate treatment and predicting outcomes. Objective: The objective of this study is to systematically review and analyze the use of AI-based models for grading and assessing hydronephrosis severity in pediatric patients, evaluating their methodologies, diagnostic performance, and potential for clinical integration. Methods: This systematic review and meta-analysis will systematically search for studies published up to 1 March 2025 in databases including MEDLINE, Cochrane, IEEE Xplore, Scopus, Google Scholar, and Taylor & Francis, as well as grey literature sources like ProQuest, OpenGrey, and conference proceedings. Eligible studies must involve AI-based models for segmentation, classification, or prediction of hydronephrosis severity in patients aged 0–18 years, utilizing imaging modalities such as ultrasound, CT, or MRI. Studies will be assessed for risk of bias using a modified version of QUADAS-2, and a narrative synthesis will be conducted. If sufficient data homogeneity exists, a meta-analysis will be performed using random-effects models. Results: The study began in March 2025 with completion expected by July 2025. Conclusions: This systematic review and meta-analysis will be the first review to provide insights into the potential of AI in pediatric hydronephrosis assessment, supporting its integration into clinical practice or identifying limitations in its application.
Background: Groundwater contamination poses a significant public health risk, particularly in urban areas with inadequate waste management. Dumpsites serve as major sources of pollutants, including heavy metals, which infiltrate aquifers through leachate migration. Port Harcourt, Nigeria, faces increasing groundwater quality concerns due to the proliferation of uncontrolled waste disposal sites. Objective: This study aims to evaluate the spatial and seasonal variations in groundwater quality around dumpsites in Port Harcourt and determine the suitability of groundwater for drinking based on water quality index (WQI) values. It also seeks to identify contamination patterns and assess the influence of rainfall on pollutant dispersion. Furthermore, the study compares findings with global research to establish broader implications for waste management and public health. By doing so, it provides a scientific basis for policy recommendations aimed at mitigating groundwater pollution. Methods: Groundwater samples were collected from various locations around major dumpsites in Port Harcourt during the dry and rainy seasons. Physicochemical parameters, including heavy metal concentrations, were analyzed to compute WQI values. Comparative analysis with previous studies was conducted to validate observed contamination trends. The impact of leachate migration on water quality was assessed using seasonal variations in WQI values. Results: Findings reveal significant spatial and seasonal fluctuations in groundwater quality. While Choba exhibited excellent water quality, Sasun, Olumeni, and Epirikom recorded dangerously high WQI values, indicating unsuitability for drinking. Seasonal variations showed that rainfall exacerbated contamination levels, as seen in Eleme, where the WQI increased from 56.362 in the dry season to 140.928 in the rainy season. The study aligns with previous research from India, China, and Ghana, demonstrating that landfill leachates and surface runoff are key contributors to groundwater degradation. Conclusions: The study confirms that dumpsite leachates significantly impact groundwater quality, posing a major risk to public health. The high WQI values in several locations highlight the need for urgent interventions. Findings align with global research on groundwater contamination, emphasizing the critical role of effective waste management in reducing environmental pollution. To mitigate groundwater pollution from dumpsite leachates, it is essential to implement stringent waste management policies that regulate landfill operations and prevent leachate infiltration into aquifers. Establishing continuous groundwater monitoring programs can help detect contamination trends early and guide timely intervention measures. Additionally, promoting alternative potable water sources in highly contaminated areas is crucial to reducing health risks for affected communities. The adoption of modern landfill technologies, such as leachate treatment and containment systems, should be prioritized to minimize pollution and safeguard water resources for future generations. This study contributes to the growing body of research on groundwater contamination by providing empirical evidence of the impact of dumpsites in an urban African setting. The findings underscore the urgent need for improved waste management policies and public health interventions.
By aligning with global research, this study reinforces the importance of sustainable environmental practices to safeguard water resources and protect communities from the adverse effects of pollution.
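The abstract does not spell out which WQI formulation was used; the weighted arithmetic index below is one common variant, shown with placeholder concentrations and limits purely for illustration.

```python
# Sketch: weighted arithmetic Water Quality Index (WQI), a common formulation.
# Concentrations and permissible limits below are placeholders (mg/L except pH).

standards = {"pH": 8.5, "TDS": 500.0, "Pb": 0.01, "Cd": 0.003, "Fe": 0.3}  # S_i
measured  = {"pH": 6.8, "TDS": 650.0, "Pb": 0.02, "Cd": 0.002, "Fe": 0.5}  # C_i

# Unit weight w_i inversely proportional to the permissible limit
k = 1.0
weights = {p: k / s for p, s in standards.items()}

# Quality rating q_i = 100 * C_i / S_i (ideal value taken as 0 for simplicity)
ratings = {p: 100.0 * measured[p] / standards[p] for p in standards}

wqi = sum(ratings[p] * weights[p] for p in standards) / sum(weights.values())
# Typical bands: <50 excellent, 50-100 good, 100-200 poor,
# 200-300 very poor, >300 unfit for drinking
print(f"WQI = {wqi:.1f}")
```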
Background: Though contingency management has shown efficacy in substance use disorder treatment, more evidence is needed on digital contingency management for treating substance use. Objective: This study aims to evaluate the effectiveness of digital contingency management in treating substance use disorder by examining two key outcome variables: abstinence and appointment attendance. Methods: A 12-month controlled trial was conducted by enrolling patients into two groups based on the time sequence of program enrollment: one group consented to participate in digital contingency management, and the other received no contingency management (treatment as usual, TAU). Propensity score matching was conducted to match groups on covariates. After matching, t-tests were conducted to examine differences between groups in urine abstinence and appointment attendance rates. Results: Two cohorts of propensity-matched patients (66 intervention and 59 control) were analyzed. Abstinence was significantly higher in the digital contingency management group (mean = 0.92, 95% CI: 0.88–0.96) than in the TAU group (mean = 0.85, 95% CI: 0.79–0.90; p = 0.01). Appointment attendance also differed significantly between the groups, with the digital contingency management group achieving a mean rate of 0.69 (95% CI: 0.65–0.74) compared to 0.50 (95% CI: 0.45–0.55) in the TAU group (p < 0.001). This notable increase highlights the role of digital contingency management in fostering engagement with care, an essential factor for successful treatment outcomes. Conclusions: The results suggest that digital contingency management can be an effective treatment modality for substance use disorder. Clinical Trial: N/A
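A minimal sketch of propensity-score matching followed by a t-test on the matched sample, run on simulated data; the covariates and 1:1 nearest-neighbour matching are assumptions, not the study's exact specification.

```python
# Sketch: propensity-score matching, then a t-test on abstinence rates.
# All data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n = 300
treated = rng.binomial(1, 0.4, n)                       # 1 = digital CM, 0 = TAU
covariates = rng.normal(size=(n, 3))                    # e.g. age, severity, prior episodes
abstinence = np.clip(0.80 + 0.07 * treated + rng.normal(0, 0.1, n), 0, 1)

# 1. Estimate propensity scores with logistic regression
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# 2. Match each treated patient to the nearest control on the propensity score
treated_idx, control_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

# 3. Compare outcomes in the matched sample
t, p = ttest_ind(abstinence[treated_idx], abstinence[matched_controls])
print(f"Matched-sample t-test on abstinence: t = {t:.2f}, p = {p:.3f}")
```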
Background: Radiotherapy is a crucial modality in cancer treatment. In recent years, the rise of short-form video platforms has transformed how the public accesses medical information. TikTok and Bilibili, as leading short-video platforms, have emerged as significant channels for disseminating health information. However, there is an urgent need to evaluate the quality and reliability of the radiotherapy-related information available on these platforms. Objective: This study aims to systematically assess the information quality and reliability of radiotherapy-related short-form videos on TikTok and Bilibili using the Global Quality Score (GQS) and a modified DISCERN evaluation tool, thereby elucidating the current landscape and challenges of digital health communication. Methods: This study systematically retrieved the top 100 radiotherapy-related videos on TikTok and Bilibili as of February 25, 2025. The quality of the videos was assessed using the GQS (1-5 points) and a modified DISCERN scoring system (1-5 points). Statistical analyses were conducted using the Mann-Whitney U test, as well as Spearman and Pearson correlation analyses, to ensure the reliability and validity of the results. Results: A total of 200 short-form videos related to radiotherapy were analyzed, revealing that the overall quality of videos on TikTok and Bilibili is unsatisfactory. Specifically, the median GQS for TikTok was 4 (interquartile range [IQR] 3-4), while for Bilibili it was 3 (IQR 3-4). The median modified DISCERN scores for both platforms were 3 (IQR 2-4 and IQR 3-4, respectively). On TikTok, 53% (53/100) of the videos were rated as "good" or higher, whereas 45% (45/100) of the videos on Bilibili were considered "relatively reliable." Videos produced by professionals, institutions, and non-professional organizations had significantly higher DISCERN scores than those produced by non-professional individuals (P<.0001, P<.0001, and P<.01, respectively). Furthermore, the correlations of the number of bookmarks and video duration with DISCERN scores were 0.172 (p = 0.015) and 0.192 (p = 0.007), respectively. However, no video variables were found to effectively predict the overall quality and reliability of the videos. Conclusions: This study revealed that the overall quality of radiotherapy-related videos on TikTok and Bilibili is generally low. However, videos uploaded by professionals demonstrate higher information quality and reliability, providing valuable support for patients seeking guidance on healthcare management and treatment options for tumors. Therefore, improving the quality and reliability of video content, particularly content produced by non-professionals, is crucial for ensuring the public has access to accurate medical information.
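A minimal sketch of the statistical comparisons described above (Mann-Whitney U across platforms, Spearman correlation with engagement metrics), using invented scores rather than the study data.

```python
# Sketch: comparing quality scores across platforms and correlating scores
# with engagement metrics. All values are simulated placeholders.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(2)
gqs_tiktok = rng.integers(2, 6, 100)      # GQS on a 1-5 scale
gqs_bilibili = rng.integers(1, 5, 100)

u, p = mannwhitneyu(gqs_tiktok, gqs_bilibili, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.4f}")

discern = rng.integers(1, 6, 200)
duration_seconds = rng.integers(15, 600, 200)
rho, p_rho = spearmanr(discern, duration_seconds)
print(f"Spearman rho (DISCERN vs. duration) = {rho:.3f}, p = {p_rho:.4f}")
```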
Background: Large language models (LLMs) can help students master a new topic quickly, but for the educational institutions responsible for assessing and grading the academic level of students, it can be difficult to discern whether a text originates from a student's own cognition or was synthesized by an LLM. Universities have traditionally relied on a submitted written thesis as proof of higher-level learning, on which to grant grades and diplomas. The ubiquitous availability of LLMs challenges this practice. Objective: In this study we assumed the role of hypothetical health science master's students looking to leverage the full power of an LLM to complete scientific research paper manuscripts that could be submitted for master's thesis graduation. Methods: In an exploratory case study we used ChatGPT to generate two research papers as conceivable student submissions for master's thesis graduation from a health science master's program. One paper simulated a qualitative research project and the other a quantitative one. Results: Using a stepwise approach, we prompted ChatGPT to 1) synthesize two credible datasets and 2) generate two manuscripts, in less than a day, that, in our judgment, would have been able to pass as credible graduation research papers at the health science master's program with which the authors are currently affiliated. Conclusions: Our demonstration highlights the ease with which an LLM can synthesize research data, conduct scientific analyses, and produce credible research papers for a master's graduation. To uphold the integrity of academic standards, we recommend that master's programs prioritize oral examinations and school exams. This shift is crucial to ensure a fair and rigorous assessment of higher-order learning and abilities at the master's level.
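The study prompted ChatGPT interactively; purely as an illustration, the same stepwise workflow could be scripted with the OpenAI Python client. The model name and prompts below are assumptions, not the study's prompts.

```python
# Sketch: scripting a stepwise "synthesize data, then write the paper" workflow.
# Hypothetical prompts and model name; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

steps = [
    "Synthesize a plausible quantitative dataset (CSV) for a health science "
    "master's project on exercise adherence, including variable descriptions.",
    "Using that dataset, write the Methods and Results sections of a research "
    "paper, reporting descriptive statistics and a regression analysis.",
]

history = []
for prompt in steps:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply[:200], "...")
```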
Background: The Kuamsha app was originally developed for a two-arm randomised controlled pilot trial (the DoBAt study), which assessed the feasibility, acceptability, and initial efficacy of a digital choose-your-own-adventure style serious game that delivered behavioural activation (BA) therapy to adolescents with depression in rural South Africa. Objective: This qualitative study explored the role of generative artificial intelligence (AI) as a novel method of developing engaging, relatable, and relevant digital mental health content for adolescents in rural South Africa. Methods: Through interactive, exploratory workshops and focus group discussions, adolescents compared stories, images, and songs created by generative AI to those in the Kuamsha app, a digital mental health intervention developed without AI. Results: Inductive thematic analysis revealed three themes: 'Use of generative AI tools in a workshop', 'Reflections on the creations and comparison', and 'Thinking towards the future'. Adolescents generally preferred the AI-generated media to the Kuamsha app media, and the creative process itself was an important aspect of the experience for adolescents. Conclusions: This study highlighted current biases of existing generative AI tools while also demonstrating the significant potential of generative AI to enhance 'real-time' co-design of digital mental health interventions by incorporating more culturally relevant and personalised content.
Background: Generative Conversational AI (GCAI) holds significant potential within the mental health domain, particularly for adolescents in impoverished regions, as these areas often lack adequate mental health resources and infrastructure. Objective: To evaluate the prevalence of childhood trauma, mental health issues, and the need for GCAI among adolescents from impoverished regions, and to explore the potential of GCAI in addressing these needs. Methods: An online survey was conducted among 18,093 adolescents in impoverished regions. Machine learning ensembles, network analysis, and Bayesian networks were used to identify key predictors and underlying relationships between childhood trauma, mental health, and GCAI needs. Results: Mental health issues were prevalent in 15.65% (95% CI: 15.12%-16.18%) of adolescents, with 12.94% (95% CI: 12.46%-13.43%) exhibiting mild and 2.71% exhibiting moderate to severe issues. Childhood trauma was reported by 34.46% (95% CI: 33.76%-35.15%) of participants. GCAI needs were expressed by 38.12% (95% CI: 37.41%-38.83%) of adolescents, including 75.18% (95% CI: 73.58%-76.77%) of those with mental health issues and 43.70% (95% CI: 42.46%-44.93%) of those with childhood trauma. Machine learning models identified 14 key predictors of GCAI needs, with an ensemble integrating the glmBoost and RF algorithms showing the highest predictive accuracy in the training set (AUC = .982) and the testing set (AUC = .759). Network analysis revealed depression (EI = 1.15) and anxiety (EI = 1.06) as core nodes, with emotional abuse (BEI = 1.69) and GCAI needs (BEI = 1.61) as bridge nodes. Bayesian network analysis identified emotional instability (strength = -38.74, direction = 0.88), interpersonal sensitivity (strength = -25.41, direction = 0.91), and depression (strength = -18.21, direction = 0.92) as direct causal predictors of GCAI needs. Conclusions: The study highlights the potential of GCAI to address mental health needs in impoverished regions by targeting depression, anxiety, and interpersonal sensitivity to mitigate the effects of childhood trauma and other mental health issues. Future work should focus on developing culturally appropriate GCAI interventions to provide accessible psychological support to adolescents in these areas.
Background: Syndromic surveillance now forms an integral part of the surveillance of a wide range of hazards in many countries. Establishing syndromic surveillance systems can be difficult due to the many different sources of data that can be used, cost pressures, the importance of data security, and differing technologies. Objective: We describe major points in the development of the UK Health Security Agency (UKHSA) English real-time syndromic surveillance service over its first two decades (1998 to 2018), along with key wider themes we believe are important in ensuring a sustainable and useful syndromic surveillance service. Methods: We conducted semi-structured interviews with current members of the UKHSA syndromic surveillance team who were involved from the earliest stages and with previous senior colleagues who supported the syndromic surveillance work during its early phases. We asked their views about the development of syndromic surveillance, the key drivers, and the challenges. Results: Using the results of these discussions and our personal experience of running the syndromic surveillance service from its inception and over decades, we summarise our recommendations for establishing and running sustainable syndromic surveillance systems. Conclusions: In this age of increased automation, with the ability to transfer data in real time and to utilise machine learning and artificial intelligence, we are approaching a 'new age of syndromic surveillance'. We consider that a focus on the public health questions, relationships and collaboration, leadership, and true teamwork should not be underestimated in the success and usefulness of real-time syndromic surveillance systems.
Background: Artificial intelligence (AI) and machine learning (ML) models are frequently developed in medical research to optimize patient care, yet they remain rarely utilized in clinical practice. Objective: The present study aims to understand the disconnect between model development and implementation by surveying physicians of all specialties across the United States. Methods: A HIPAA-compliant survey was emailed to residency coordinators at ACGME-accredited residency programs to distribute among attending physicians and resident physicians affiliated with their institution. Respondents were asked to identify and quantify the extent of their training, specialization, and the type and location of their practice. Physicians were then asked follow-up questions regarding AI in their practice: whether its use is permitted, whether they would use it if made available, primary reasons for using or not using AI, elements that would encourage its use, and ethical concerns. Results: Of the 941 physicians who responded to the survey, 384 (40.8%) were attending physicians, and 557 (59.2%) were resident physicians. The majority (81.9%) of physicians indicated they would adopt AI in clinical practice if given the opportunity. The most cited intended uses for AI were risk stratification, image segmentation/image analysis, and disease prognosis. The most common reservation, cited by the 18.1% of physicians who indicated that they would not use AI even if it were clinically accessible, was the potential to replicate human bias. Conclusions: The present study emphasizes that most academic physicians within the United States are open to adopting AI in their clinical practice. For AI to become clinically relevant, however, developers and physicians must work synergistically to design models that are accurate, accessible, and intuitive while thoroughly addressing ethical concerns associated with the implementation of AI into medicine.
Background: Strokes present a unique challenge to the healthcare system. The utilization of digital technologies offers the potential for increased efficiency and effectiveness. However, patient acceptance plays a crucial role in the successful adoption of these innovations. Objective: This study aimed to systematically analyze patient preferences and acceptance of digital technologies in rehabilitation, particularly neurorehabilitation. Methods: A discrete choice experiment (DCE) embedded in an online survey was conducted, involving a total of 1259 respondents. The DCE encompassed five technical attributes: (1) explanation and presentation of therapy exercises, (2) information in therapy, (3) contact with healthcare professionals, (4) patients' choice in the therapy process, and (5) data processing. Furthermore, (6) therapy success within 6 months and a cost attribute, (7) copayment per month, were added. A fractional factorial design was used. The collected data were analyzed using a mixed logit model, and willingness to pay and uptake probabilities were calculated. Results: The analysis revealed that therapy success within 6 months was the most influential criterion, followed by the monthly copayment. However, the results indicated that technical attributes, in addition to clinical and cost factors, significantly influenced the probability of uptake. Three alternatives were modelled to calculate predicted uptake probabilities: a prototype of a robot, a further-developed robot, and a digital application. Differences in predicted uptake probabilities were observed (44% vs. 65% vs. 84%) due to variations in the technical attributes. Conclusions: This study demonstrates a systematic approach to understanding the acceptance of healthcare interventions, particularly in the context of digital technologies for neurorehabilitation. These findings can aid in predicting the acceptance of interventions, allowing for targeted measures to positively influence acceptance. Clinical Trial: The preference survey instruments, the informed consent form, and the study design were reviewed and approved by the ethics committee of Hochschule Neubrandenburg (HSNB/177/21).
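Once a (mixed) logit model has been estimated, willingness to pay and predicted uptake follow directly from the coefficients. A minimal sketch with invented coefficients and attribute levels, not the study's estimates:

```python
# Sketch: willingness-to-pay (WTP) ratios and logit choice shares from
# estimated coefficients. All numbers below are illustrative placeholders.
import numpy as np

beta = {
    "therapy_success_high": 1.40,
    "video_explanation":    0.35,
    "daily_information":    0.20,
    "direct_contact":       0.30,
    "copayment_per_euro":  -0.02,
}

# WTP for an attribute = -(attribute coefficient) / (cost coefficient)
for attr in ["video_explanation", "daily_information", "direct_contact"]:
    wtp = -beta[attr] / beta["copayment_per_euro"]
    print(f"WTP for {attr}: {wtp:.0f} EUR/month")

def utility(success, video, info, contact, copay):
    return (beta["therapy_success_high"] * success + beta["video_explanation"] * video
            + beta["daily_information"] * info + beta["direct_contact"] * contact
            + beta["copayment_per_euro"] * copay)

# Predicted choice shares among three hypothetical alternative profiles
profiles = {
    "robot prototype": utility(0, 0, 0, 0, 100),
    "improved robot":  utility(1, 0, 1, 0, 100),
    "digital app":     utility(1, 1, 1, 1, 50),
}
expu = {k: np.exp(v) for k, v in profiles.items()}
total = sum(expu.values())
for k, v in expu.items():
    print(f"Predicted uptake share, {k}: {v / total:.0%}")
```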
Background: Compared to conventional outpatient or telephone follow-up, the introduction and use of eHealth technologies provide novel opportunities for enhancing medication adherence in transplant recipients. Nonetheless, the efficacy of eHealth interventions for medication adherence in kidney transplant recipients remains ambiguous. Objective: To assess the impact of eHealth interventions on medication adherence in kidney transplant recipients and to understand the underlying factors. Methods: Seven databases (PubMed, Web of Science, Cochrane, Embase, CINAHL, Scopus, and Ovid) were systematically searched from inception to November 2024. Each study was assessed for bias using the Cochrane Risk of Bias tool (RoB 2), and the certainty of the evidence for each outcome of interest was rated using the GRADE criteria. The study outcomes were evaluated using a narrative synthesis and a meta-analysis. Results: A total of 12 studies involving 897 kidney transplant recipients were included. The eHealth interventions improved medication adherence monitored by electronic devices compared with the control group [RR=1.46, 95% CI (1.11, 1.90)]. However, the differences in adherence to medication as assessed by the Basel Immunosuppressive Medication Adherence Scale [RR=0.98, 95% CI (0.85, 1.13)], tacrolimus blood concentration [MD=0.15, 95% CI (-0.21, 0.51)], and intra-patient variability of tacrolimus [MD=-0.02, 95% CI (-0.07, 0.03)] were not statistically significant. The overall risk of bias was rated as high or as raising some concerns, and the evidence for all outcomes was of low quality. Conclusions: The true effectiveness of eHealth interventions is affected by a variety of confounding factors, and more high-quality studies are needed to optimise eHealth intervention strategies and clarify their effectiveness in improving medication adherence. Clinical Trial: PROSPERO CRD42025640638; https://www.crd.york.ac.uk/PROSPERO/myprospero
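Editorial note: the pooled risk ratios quoted above come from inverse-variance meta-analysis. The sketch below shows the usual DerSimonian-Laird random-effects pooling on the log scale; the per-study risk ratios and standard errors are invented for demonstration and are not the trials analysed in this review.

import math

# (log RR, standard error of log RR) for hypothetical studies
studies = [(math.log(1.6), 0.25), (math.log(1.2), 0.20), (math.log(1.5), 0.30)]

w = [1 / se**2 for _, se in studies]                                  # fixed-effect weights
pooled_fixed = sum(wi * yi for wi, (yi, _) in zip(w, studies)) / sum(w)

# Between-study heterogeneity (tau^2) via DerSimonian-Laird
q = sum(wi * (yi - pooled_fixed) ** 2 for wi, (yi, _) in zip(w, studies))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

w_re = [1 / (se**2 + tau2) for _, se in studies]                      # random-effects weights
pooled = sum(wi * yi for wi, (yi, _) in zip(w_re, studies)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))

rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
print(f"Pooled RR = {rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")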
Choosing and matching into one's preferred specialty is one of the most important and potentially stressful concerns medical students face. Many factors play a role in specialty choice, such as prior life experiences, lifestyle, and how well a medical student feels they fit in with the personalities of that specialty. This study examines the relationship between neuroticism and specialty choice among medical students at a single U.S. allopathic medical school, using the Big Five Inventory and proprietary questions to quantify neurotic personality traits and specialty perceptions in current US medical students. Respondents interested in surgical specialties believed that surgical specialties were more competitive than medical specialties but were not found to be more neurotic than their medicine counterparts. Additionally, those whose interests included both surgical and medical specialties were found to be less neurotic than their peers. The results of this study indicate that medical students are already cognizant of, and concerned about, matching into their desired specialties in the preclinical portion of their medical education.
Background: Chemistry education relies heavily on experimentation to bridge theoretical concepts with practical applications. However, universities often face challenges in providing real laboratory experiences due to resource limitations, equipment shortages, and logistical constraints. Virtual laboratories have emerged as a promising alternative, offering interactive, computer-based simulations that replicate real lab experiments and enhance learning. Objective: This study investigates the perceived benefits and challenges of implementing virtual laboratories in chemistry education at selected universities in Southern Ethiopia, assessing their effectiveness as a teaching and learning tool. Methods: An explanatory sequential mixed-methods design was employed to provide a comprehensive analysis. Quantitative data were collected from 63 chemistry instructors and 143 undergraduate students using structured questionnaires, while qualitative insights were obtained through interviews. Descriptive statistics were used to analyze numerical data, and thematic coding was applied to categorize qualitative responses. Results: The findings indicate that virtual laboratories significantly enhance chemistry education by improving academic achievement and conceptual understanding, particularly in grasping key concepts and complex topics (mean score: 3.9). They also contribute to the development of essential scientific skills, such as hypothesis formulation, problem-solving abilities, and effective lab report writing (mean score: 3.8). Additionally, virtual labs offer flexibility in learning by supporting self-paced education and serving as viable alternatives when access to real laboratories is limited (mean score: 3.8). However, despite these advantages, several challenges were identified. Limited technical expertise (kappa = 0.63), high software costs (kappa = 0.61), difficulties in understanding specific concepts required for virtual experiments (kappa = 0.61), and the absence of engaging virtual lab software (kappa = 0.51) were among the primary obstacles. Furthermore, a lack of preparedness to address real laboratory challenges (kappa = 0.23) and infrastructural limitations, such as insufficient computer facilities (kappa = 0.25), further hinder the effective implementation of virtual laboratories. Conclusions: The study underscores the transformative potential of virtual laboratories in chemistry education, serving as viable alternatives to traditional lab instruction. However, their successful implementation requires addressing existing challenges, such as improving digital infrastructure, providing instructor training, and enhancing accessibility. Universities should consider integrating virtual laboratories alongside real labs to optimize learning outcomes and foster technologically advanced educational environments.
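Editorial note: for readers unfamiliar with the agreement statistics quoted above, the following minimal sketch shows how a Cohen's kappa value is computed from a 2x2 agreement table. The counts are invented for demonstration; the study reports only the resulting kappa values.

def cohens_kappa(table):
    # table[i][j]: count where rater A gave category i and rater B gave category j
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n                 # observed agreement
    pe = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)         # chance agreement
        for i in range(len(table))
    )
    return (po - pe) / (1 - pe)

# Rows: rater A ("challenge", "not a challenge"); columns: rater B, same order. Hypothetical counts.
agreement = [[40, 8],
             [7, 45]]
print(f"kappa = {cohens_kappa(agreement):.2f}")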
Digital-based health interventions (DHIs), defined as health services delivered electronically, have proven to be effective tools for health promotion. However, user retention remains low, an outcome linked to insufficient integration of socio-cultural determinants and limited user engagement. This study explores participatory animation (PA), a methodology involving community partnerships in creating animated content, as a strategy to improve retention. PA is a multi-step production process capable of producing engaging, efficacious, and deployable multimedia stimuli while leveraging a co-creation process through participants' oral and visual assessment of the design. However, this method has been historically underutilized in health scholarship. The urgent need to develop effective DHIs underscores the promise of PA as a methodological frontier. Drawing on the existing literature, this paper offers a perspective on PA's practical and theoretical potential to improve digital intervention design and function.
Background: Up to 92% of individuals with eating disorders (EDs) report engaging in body checking behaviors (e.g., repeated self-weighing and pinching of various body parts) to assess their weight and shape. These behaviors contribute to increased body dissatisfaction, negative affect, and dietary restriction, thereby maintaining ED symptomatology. Objective: The present study aims first to characterize the types and frequency of body checking behaviors (e.g., self-weighing, pinching) reported among adolescent girls with binge-spectrum EDs during a 21-day ecological momentary assessment (EMA) protocol. The second aim is to explore the prospective associations between body checking and cognitive and behavioral ED symptoms, namely body dissatisfaction, fear of weight gain, dietary restraint, dietary restriction, compensatory behaviors, and binge eating. The third aim is to assess whether body checking behaviors show reactive effects to EMA (i.e., whether monitoring itself produces change in the monitored behavior), such that they decrease over time. Methods: The study will recruit 70 adolescent girls aged 14-19 years with clinically significant binge eating. Participants will complete a semi-structured interview and a series of self-report measures at baseline to assess ED pathology. Then, participants will complete five daily EMA surveys to track body checking behaviors and related ED symptoms over 21 days. Results: Recruitment is set to begin in January 2025, with data collection expected to conclude in March 2026. Conclusions: This study will provide insights into the patterns and impacts of body checking behaviors among adolescent girls with binge-spectrum EDs. If body checking behaviors reduce in response to EMA, digital self-monitoring could be a scalable and cost-effective strategy for ED treatment and prevention. The findings may also inform the development of momentary interventions targeting body checking behaviors to mitigate ED symptoms. Future research should extend these observations over longer periods and include male participants to generalize findings across genders. Clinical Trial: Not applicable.
Background: Metabolic disease is increasingly impacting women of reproductive age. In pregnancy, uncontrolled metabolic disease can result in offspring with major congenital anomalies, preterm birth, and abnormal fetal growth. Pregnancy also accelerates the complications of metabolic diseases in mothers, resulting in an increased risk of premature cardiovascular events. Despite the convincing evidence that pre-conception care can largely mitigate the risks of metabolic disease in pregnancy, there are few data about how to identify the highest-risk women to enable them to be connected with appropriate pre-conception care services. Objective: The aim of the study is to determine the maternal phenotype that represents the highest risk of adverse neonatal and maternal pregnancy outcomes. Methods: This will be a prospective cohort study of 500 women recruited in early pregnancy. The primary outcome is a composite of offspring born small or large for gestational age (customized birthweight ≤10th and ≥90th centile for gestational age). Secondary outcomes are (1) a composite of adverse neonatal birth outcomes (SGA, LGA, major congenital abnormalities, preterm birth (<37 weeks' gestation)) and (2) a composite of new maternal metabolic outcomes (gestational diabetes, diabetes in pregnancy, type 2 diabetes or prediabetes; gestational hypertension, preeclampsia, eclampsia, or new essential hypertension after pregnancy; and gestational weight gain ≥20 kg or new overweight/obesity at the 12-18-month postpartum visit). Results: A multivariable logistic regression analysis will be conducted to identify candidate predictors of poor pregnancy outcomes due to metabolic disease. From this model, model coefficients and the associated 95% confidence intervals (CI) will be extracted to derive a risk score for predicting the delivery of LGA/SGA offspring (primary outcome) and the composites of adverse neonatal and maternal outcomes (secondary outcomes). Conclusions: The study has been approved by the institutional Human Research Ethics Committee (HREC/90080/MH-2022). Findings will be disseminated through peer-reviewed publications and conference presentations, and through national and international networks involved in maternity care for high-risk populations. Clinical Trial: This study has been prospectively registered with the Australian New Zealand Clinical Trials Registry (ACTRN12623000037606).
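Editorial note: to make the planned risk-score derivation concrete, here is a minimal sketch of how fitted logistic regression coefficients translate into an individual predicted risk. The predictors, coefficient values, and helper function are hypothetical placeholders, not study results.

import math

# Hypothetical coefficients on the log-odds scale; not the study's fitted model.
coef = {"intercept": -2.0, "bmi_per_unit": 0.05, "prior_gdm": 0.8, "age_per_year": 0.03}

def predicted_risk(bmi, prior_gdm, age):
    # Linear predictor (log-odds), then inverse-logit to obtain a probability.
    lp = (coef["intercept"]
          + coef["bmi_per_unit"] * bmi
          + coef["prior_gdm"] * int(prior_gdm)
          + coef["age_per_year"] * age)
    return 1.0 / (1.0 + math.exp(-lp))

print(f"Predicted risk of the composite outcome: {predicted_risk(bmi=32, prior_gdm=True, age=36):.2f}")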
Background: Tocotrienol, a naturally occurring form of vitamin E, has been extensively studied for its potent antioxidant, anti-inflammatory, and immune-stimulating properties. However, the clinical impact of tocotrienol supplementation on older adults' overall health and well-being remains relatively unexplored. Objective: This research aims to investigate the efficacy of a tocotrienol-rich fraction (TRF) on various health parameters associated with general well-being in individuals aged 50-75 years. Methods: The present study is a randomised, double-blind, placebo-controlled trial designed to investigate the effectiveness of TRF supplementation on overall health in healthy elderly individuals. The study assesses the impact of a daily dosage of 200 mg of TRF over a period of 6 months. A total of 220 participants are enrolled in the study, with half receiving the placebo and the other half receiving TRF supplementation. The study comprises three time points: baseline, 3 months, and 6 months. At each time point, various measurements are taken to evaluate different aspects of health. These measurements include blood biochemistry assessments such as liver function tests, renal profile, lipid profile, and full blood count. Oxidative stress markers, including malondialdehyde, advanced glycation end-products, protein carbonyl, and isoprostane, are also evaluated. Immune response markers such as interleukin-6 and tumour necrosis factor-alpha are assessed. Satiety regulation is examined through measurements of leptin and ghrelin. Body composition and skin health parameters, including wrinkling, pigmentation, elasticity, hydration, and sebum secretion, are evaluated. Additionally, arterial stiffness is assessed by arteriography, bone mineral density is measured using dual X-ray absorptiometry, and cognitive function is assessed through the Montreal Cognitive Assessment, the Rey Auditory Verbal Learning Test, and a digit span task at baseline and at the 6-month time point. Results: NIL. Conclusions: By comprehensively evaluating these health aspects, this study seeks to provide valuable insights into the potential benefits of tocotrienol supplementation for promoting the overall health and well-being of the ageing population. Clinical Trial: National Medical Research Register (NMRR), no. NMRR19-2972-51179
Background: Acute decompensated heart failure (ADHF) is closely associated with pulmonary congestion (PC), which triggers abnormal breathing patterns. Early detection of PC-driven respiratory changes via wearable devices could enable timely intervention and reduce hospitalizations. However, the specific respiratory features linked to PC remain unclear. Objective: This study employed a wearable device to analyze nocturnal respiratory signals in hospitalized ADHF patients, comparing those with and without PC. Methods: This prospective trial investigated breathing pattern characteristics in hospitalized ADHF patients. PC was assessed via lung ultrasound (LUS) in 28 standardized zones at admission, with patients stratified by LUS-defined severity using >5 B-lines as the threshold for significant PC. Concurrently, wearable devices continuously captured chest and abdominal movement signals for respiratory waveform analysis. Breathing patterns were quantitatively characterized along three dimensions: respiratory cycle, respiratory amplitude, and multiscale entropy (MSE). Logistic regression analysis and receiver operating characteristic (ROC) curves were used to identify risk factors associated with PC and to evaluate the ability of respiratory pattern parameters to identify HF complicated by PC. Results: A total of 62 patients with ADHF were included in the study, 44 of whom had more than 5 B-lines. PC patients exhibited a longer mean expiratory time (TE_mean), a smaller mean ratio of expiratory time (TE_ratio_mean), and greater MSE values in respiratory amplitude (RA). RA_area_1_5 and RA_area_6_20 were identified as risk factors for PC after adjusting for clinical variables. The established logistic regression model accurately distinguished HF patients with PC from those without; the AUC of the multivariable model constructed using respiratory complexity parameters was 0.910 (95% CI 0.837-0.984, P<.001). Conclusions: The study highlights the potential of wearable devices combined with MSE algorithms for monitoring respiratory complexity in ADHF patients. The identified respiratory complexity parameters, particularly RA_area_1_5 and RA_area_6_20, could serve as an early warning tool for PC exacerbation.
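Editorial note: because the findings above hinge on multiscale entropy of the respiratory amplitude series, here is a minimal sketch of the MSE procedure (coarse-graining plus sample entropy). The parameter defaults (m=2, r=0.15*SD), the simplified template-matching scheme, the simulated signal, and the use of a plain sum as an "area" over scales 1-5 are editorial assumptions, not the authors' implementation.

import numpy as np

def sample_entropy(x, m=2, r=None):
    # Simplified sample entropy: -ln(A/B), where B and A count template matches
    # of length m and m+1 within tolerance r (Chebyshev distance), excluding self-matches.
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1          # exclude the self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    x = np.asarray(x, dtype=float)
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = x[:n * scale].reshape(n, scale).mean(axis=1)   # coarse-graining
        mse.append(sample_entropy(coarse))
    return mse

# Simulated respiratory-amplitude series; summing scales 1-5 stands in for a
# parameter like RA_area_1_5 (hypothetical stand-in only).
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 60, 1500)) + 0.3 * rng.standard_normal(1500)
curve = multiscale_entropy(signal, max_scale=10)
print("Area over scales 1-5:", float(np.sum(curve[:5])))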
Background: Mindfulness-based apps can be an effective and accessible resource for mental health support. However, little is known about their use outside of research settings and about which user characteristics relate to app use. Objective: This study aimed to examine the characteristics of people who decided to use, not use, or stop using Headspace within the context of a large-scale public deployment, which offered the mindfulness meditation app Headspace as a free mental health resource to community members. Methods: Nearly 100,000 community members received Headspace. All members (N = 92,311) received an email inviting them to complete a voluntary and unpaid survey. Participants (n = 2,725) completed the survey. The 20-minute survey asked about use of Headspace, user experience, mental health problems, mental health stigma, and mental health service utilization. Logistic regression models were used to examine relationships between predictors and non-use, past use, or current use of Headspace. Results: Participants who were still using Headspace at the time of completing the survey (76%) were more likely to experience mental health challenges and distress, and made more use of other digital mental health resources (i.e., online tools and connecting with people online), than people who were not using Headspace. Additionally, current Headspace users rated the app higher on user experience compared to past users. The most common reasons for abandoning Headspace were that people were already using other strategies to support their mental health (35%), no longer needed Headspace (13%), and/or did not think Headspace was useful (8%). Conclusions: Results indicate that a person's mental health challenges, a perceived need for support, and familiarity with digital resources were associated with continued Headspace use. While the most common reason for not using Headspace was that people were already using other resources, it is important to consider the continuity of mental health support beyond these free programs for those who may not have easy access to other resources. We discuss potential implications of our findings for offering and using apps such as Headspace as a mental health resource, along with factors that influence engagement with this app.
Background: OpenNotes allows patients to access their electronic health record (EHR) notes through online patient portals. However, EHR notes contain abundant medical jargon, which can be difficult for patients to comprehend. One way to improve comprehension is by reducing information overload and helping patients focus on the medical terms that matter most to them. Objective: In this study, we evaluated both closed-source and open-source Large Language Models (LLMs) for extracting and prioritizing medical jargon from EHR notes relevant to individual patients, leveraging prompting techniques, fine-tuning, and data augmentation. Methods: We evaluated the performance of closed-source and open-source LLMs on a dataset of 106 expert-annotated EHR notes. We tested various combinations of settings, including: i) general and structured prompts, ii) zero-shot and few-shot prompting, iii) fine-tuning, and iv) data augmentation. To enhance the extraction and prioritization capabilities of open-source models in low-resource settings, we applied data augmentation using ChatGPT and integrated a ranking technique to refine the training process. Additionally, to measure the impact of dataset size, we fine-tuned the models by incrementally increasing the size of the augmented dataset from 10 to 10,000 and tested their performance. The effectiveness of the models was assessed using 5-fold cross-validation, providing a comprehensive evaluation across various settings. We report the F1 score and Mean Reciprocal Rank (MRR) for performance evaluation. Results: Among the compared strategies, fine-tuning and data augmentation generally demonstrated higher performance than other approaches. Although the highest F1 score of 0.433 was achieved by GPT-4 Turbo, the highest MRR score of 0.746 was observed with Mistral7B when data augmentation was applied. Notably, by using fine-tuning or data augmentation, open-source models were able to outperform closed-source models. Additionally, achieving the highest F1 score did not always correspond to the highest MRR score. We analyzed our experiment from several perspectives. First, few-shot prompting showed an advantage over zero-shot prompting in vanilla models. Second, when comparing general and structured prompts, each model exhibited different preferences. Third, fine-tuning improved zero-shot performance but sometimes degraded few-shot performance. Lastly, data augmentation yielded performance comparable to or even surpassing that of other strategies. Conclusions: The evaluation of both closed-source and open-source LLMs highlighted the effectiveness of prompting strategies, fine-tuning, and data augmentation in enhancing model performance in low-resource scenarios.
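Editorial note: the Mean Reciprocal Rank metric reported above scores how highly a model ranks the expert-prioritized jargon terms. The minimal sketch below shows the standard computation; the example term lists are hypothetical, not from the study's dataset.

def mean_reciprocal_rank(ranked_predictions, gold_terms):
    # ranked_predictions: one ranked list of terms per note.
    # gold_terms: one set of expert-prioritized terms per note.
    total = 0.0
    for ranked, gold in zip(ranked_predictions, gold_terms):
        rr = 0.0
        for rank, term in enumerate(ranked, start=1):
            if term in gold:
                rr = 1.0 / rank          # reciprocal rank of the first relevant hit
                break
        total += rr
    return total / len(ranked_predictions)

preds = [["troponin", "stat", "echocardiogram"], ["lisinopril", "edema"]]   # hypothetical model output
gold = [{"echocardiogram", "troponin"}, {"edema"}]                          # hypothetical annotations
print(f"MRR = {mean_reciprocal_rank(preds, gold):.2f}")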
Background: College students commonly struggle with procrastination, which is linked to mental health complaints and poor academic performance. Guided e-health interventions can be effective in reducing procrastination. Objective: This study aims to examine the feasibility and acceptability of a new e-health intervention targeting procrastination for college students ('GetStarted'), with guidance provided by student e-coaches. Methods: We conducted a single-arm study. Primary outcomes are satisfaction (CSQ-8), usability (SUS-10), and adherence (completion rate). Secondary outcomes are changes in procrastination (IPS), depression (PHQ-9), stress (PSS-10), quality of life (MHQoL), and e-coaching satisfaction (WAI-I). Results: Of the 734 participants who started the intervention, 335 (45.6%) completed the post-test. Students reported being satisfied with the intervention (CSQ-8 M = 23.48; SD = 3.23) and found it very usable (SUS-10 M = 34.39; SD = 4.52). Regarding adherence, 36.65% of participants completed the intervention; on average, participants completed 68.95% of the intervention. Participants showed a significant decrease in procrastination, depression, and stress, as well as an increase in quality of life, from baseline to post-test to follow-up. Conclusions: The internet-based, student-guided intervention 'GetStarted' targeting procrastination appears to be acceptable and feasible for college students in the Netherlands.
Background: The growing global cancer burden impacts young individuals as early-onset rates rise. Advances in treatment improve survival but often harm reproductive function. Fertility preservation (FP) techniques are crucial, yet research mainly emphasizes patients’ views. Objective: This study explores the roles of healthcare professionals (HCPs), barriers, and facilitators in FP decision-making. Methods: PubMed, Cochrane, Embase, APA PsycINFO, CNKI, and WANFANG were searched for eligible qualitative and mixed methods studies up to September 2024. The included studies were evaluated using the qualitative research quality assessment criteria from the Joanna Briggs Institute (JBI) in Australia. A framework synthesis of barriers and facilitators to implementation was conducted using the Consolidated Framework for Implementation Research (CFIR). The synthesized results were assessed using the Confidence in the Qualitative Evidence (ConQual) approach. Results: Overall, 19 studies were included in the qualitative synthesis. Barriers and facilitators to implementation were identified in all 5 CFIR domains: fertility preservation decisions domain, outer setting domain, inner setting domain, individuals domain, and implementation process domain. Conclusions: To address barriers and better meet patient needs, it is essential to develop practice guidelines, provide policy and legal support, ensure adequate human and financial resources, offer targeted education and training for HCPs, and establish standardized follow-up procedures for cancer patients to facilitate access to FP. Clinical Trial: The protocol was registered with PROSPERO (CRD42024600847).
Background: Exercise is considered an important component of lifestyle management for type 2 diabetes mellitus (T2DM). Outdoor exercise has been shown to enhance individuals' perception of their overall health status. Objective: This study aimed to explore the role of outdoor exercise in blood glucose management and sleep quality in patients with T2DM. Methods: The study was a randomized controlled trial in which participants were randomized (1:1) to an indoor or outdoor training group for 12 weeks. The outdoor Tai Chi classes were conducted in an open, flat park surrounded mostly by trees; the indoor setting was a dance studio kept at a constant temperature of 22.2°C, with visual contact with the outdoors through windows facing the road. Participants performed aerobic 24-form Tai Chi three times a week in sessions lasting 60-90 minutes, including 15 minutes of warm-up and cool-down. During the 12-week program, all participants had sensors from the Guardian Sensor 3 continuous glucose monitoring (CGM) system implanted subcutaneously in their upper arms and paired with the Medtronic Guardian Connect CGM device. Sleep quality and quantity were assessed before and after the training program using the Pittsburgh Sleep Quality Index (PSQI) and a sleep monitoring bracelet. Each patient also wore a sunshine duration monitoring bracelet, through which daily sunshine duration was recorded. The primary outcome was the absolute change in HbA1c levels within and between the two groups at 12 weeks. The secondary outcomes were changes in BMI, waist circumference, the time in range (TIR) measured by CGM, plasma glucose levels, daily sunshine duration, total sleep time (minutes slept between bedtime and wake time), sleep efficiency (percentage of time asleep while in bed), and wake after sleep onset (minutes awake between sleep onset and wake time). Results: When comparing the primary outcomes at 12 weeks, we found a significant difference in HbA1c between the outdoor and indoor Tai Chi groups, with a significant interaction effect (p = 0.012). The outdoor Tai Chi group showed significant improvements in PSQI, total sleep time, and sleep efficiency, whereas changes in the indoor Tai Chi group were not significant; the time and group effects were nonetheless significant. Conclusions: In summary, this study demonstrated that outdoor Tai Chi significantly improves waist circumference, BMI, blood glucose levels, sleep quality, and sun exposure. The effects of indoor Tai Chi were comparatively weak, but it also provided some health benefits.
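Editorial note: time in range, one of the CGM-derived secondary outcomes above, is simply the percentage of glucose readings within a target band. The sketch below uses the commonly cited 3.9-10.0 mmol/L (70-180 mg/dL) band and invented readings; the study's exact thresholds are not stated in the abstract.

def time_in_range(glucose_mmol, low=3.9, high=10.0):
    # Percentage of CGM readings falling inside the target band.
    in_range = [low <= g <= high for g in glucose_mmol]
    return 100.0 * sum(in_range) / len(in_range)

readings = [5.8, 7.2, 11.4, 9.6, 6.1, 12.3, 8.0, 4.5]   # hypothetical CGM samples (mmol/L)
print(f"TIR = {time_in_range(readings):.1f}%")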
The development of confident and assertive physicians through the process of medical education is essential for effective patient care. Through medical training, future physicians obtain the knowledge and skillset necessary to accomplish this, but they may face stressors that negatively impact their mental health. This study aims to provide insights into the relationship between US medical student assertiveness and anxiety in the current pass/fail USMLE Step 1 medical education landscape. This was achieved by surveying 30 US MD students at a single Californian institution using the Simple Rathus Assertiveness Scale-Short Form and the Generalized Anxiety Disorder-7 (GAD-7) assessment. It was found that M1 participants were more likely to feel uncomfortable returning purchases than their M2 counterparts, and that female participants were more likely than males to ask a loud theater couple to be quiet. These differences in responses by academic year and gender indicate areas of future study, particularly regarding the personality characteristics of current medical students and whether there are changing trends in medical student assertiveness and its association with medical student well-being.
Background: Integrating evidence-based approaches in healthcare and artificial intelligence (AI) is crucial for enhancing clinical decision-making and patient safety. Yesil o1 Pro is a specialized large language model (LLM) designed to transform medical knowledge synthesis by leveraging a comprehensive, curated database of scientific literature, clinical guidelines, and medical textbooks. Objective: The system's innovative "AI Hospital" framework employs domain-specific expert agents coordinated by a central Master Agent, enabling tailored and precise medical responses across multiple disciplines. Methods: The model's advanced methodology incorporates sophisticated techniques including GraphRAG-based retrieval, extensive fine-tuning with 1.5 million question-answer pairs, and Chain of Thought (CoT) reasoning. Its robust training dataset comprises 100.5M words from high-impact journals, 96.8M words from core medical texts, and 74.4M words from international standards, ensuring a comprehensive and authoritative knowledge base. Results: Benchmark evaluations demonstrate Yesil o1 Pro's exceptional performance, achieving an overall accuracy of 89.1% and surpassing leading models like GPT-4o (83.9%) and Claude 3.5 Sonnet (83.0%). Domain-specific accuracies are particularly impressive, with 96.1% in Mental Health, 94.6% in Epidemiology, and 94.6% in Dentistry, highlighting the model's proficiency in handling complex, reasoning-intensive medical queries. Conclusions: The model shows promising applications in clinical decision support, interdisciplinary collaboration, professional education, and medical research. While challenges remain in real-world integration and maintaining alignment with evolving medical knowledge, Yesil o1 Pro represents a significant advancement in AI-driven healthcare support.
Future research will focus on validating the model's utility in clinical environments and developing strategies for seamless healthcare system integration.
Background: Integration of electronic health records (EHRs) into clinical research offers numerous opportunities for advancing healthcare delivery and patient outcomes, particularly in the era of machine learning (ML). However, EHR data need to be coded accurately to ensure that models learn correct representations of diseases. Objective: This study examines the accuracy of gestational diabetes mellitus (GDM) diagnoses in EHRs compared with a clinical team database (CTD) and their impact on ML models. Methods: EHRs from 2018-2022 were validated against CTD data to identify true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Logistic regression (LR) models were trained and tested using both EHR and validated labels, after which simulated label noise was introduced to increase FP and FN rates. Model performance was assessed using the area under the receiver operating characteristic curve (ROC-AUC) and average precision (AP). Results: Among 3,952 patients, 3,388 (85.7%) were correctly identified with GDM in both databases, while 564 cases lacked a GDM label in EHRs and 771 were missing a corresponding CTD label. Overall, 87.5% of cases were TN, 9.0% TP, 2.0% FP, and 1.5% FN. The model trained and tested with validated labels achieved a ROC-AUC of 0.817 and an AP of 0.450, whereas the same model tested using EHR labels achieved 0.814 and 0.395, respectively. Increased label noise during training led to gradual declines in ROC-AUC and AP, while noise in the test set, especially elevated FP rates, resulted in marked performance drops. Conclusions: Discrepancies between EHR and CTD diagnoses had limited impact on model training but significantly affected performance evaluation when present in the test set, emphasising the importance of accurate data validation.
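Editorial note: the sketch below illustrates the general shape of the label-noise experiment described above: train a logistic regression on clean labels, flip a fraction of test labels to simulate FP/FN coding errors, and re-evaluate ROC-AUC and average precision (the same function could equally be applied to training labels). The data are synthetic and the noise rates are illustrative, not the study's.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for the GDM cohort (not real patient data).
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

def add_label_noise(labels, fp_rate, fn_rate, rng):
    noisy = labels.copy()
    neg, pos = np.where(labels == 0)[0], np.where(labels == 1)[0]
    noisy[rng.choice(neg, int(fp_rate * len(neg)), replace=False)] = 1   # 0 -> 1 (simulated FP)
    noisy[rng.choice(pos, int(fn_rate * len(pos)), replace=False)] = 0   # 1 -> 0 (simulated FN)
    return noisy

rng = np.random.default_rng(0)
for fp, fn in [(0.0, 0.0), (0.02, 0.015), (0.10, 0.05)]:
    y_eval = add_label_noise(y_te, fp, fn, rng)
    print(fp, fn,
          round(roc_auc_score(y_eval, scores), 3),
          round(average_precision_score(y_eval, scores), 3))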
Background: Although training in electrocardiogram (ECG) interpretation begins early in medical school, accurate interpretation of the 12-lead ECG remains a challenge. We conducted a pilot teaching program to compare the effectiveness of conventional didactic lectures with self-drawing after a flipped classroom (SDFC). Objective: The purpose of this study is to evaluate the effectiveness of self-drawing electrocardiograms and the impact of incorporating a flipped classroom approach in optimizing ECG teaching. Methods: This study was conducted among postgraduate-year (PGY)-I residents at MacKay Memorial Hospital over three years. The study enrolled 76 PGY-I residents, who were randomized into three groups: a conventional control group (group 1), a self-drawing group (group 2), and an SDFC group (group 3). All participants were provided with the same learning material and didactic lectures.
Knowledge evaluation was performed using pre-tests and post-tests, and self-evaluation of competence and behavior change, as well as program evaluations, were conducted using questionnaires. Results: The feedback was positive, emphasizing the benefits of SDFC in combining theory and practical steps to approach ECG reading. Results on the written summative examination items were better in the SDFC group, reaching an excellent level. Conclusions: Our study demonstrated the promising effects of SDFC on the recognition of ECG presentations, which could make up for the inadequacies of traditional classroom teaching. It could be incorporated into routine teaching if proven successful in a larger cohort. Clinical Trial: N/A
The increasing application of Health Technology (HealthTech) in educational settings, particularly for monitoring students' mental health, has garnered significant attention. These technologies, which range from wearable devices to digital mental health screenings, offer new opportunities for enhancing student well-being and strengthening support systems. Numerous studies have explored the ethical, legal, and social implications (ELSI) of HealthTech in the field of psychiatry, highlighting its potential benefits while also acknowledging the inherent complexities and risks that demand careful consideration. However, the ELSI related to the use of HealthTech in educational settings remain largely overlooked and insufficiently addressed. This study provides an overview of items that should be considered by researchers, teachers, and education boards or committees to promote HealthTech in the educational context. By adapting existing ELSI frameworks from educational technology and digital health, this study systematically reviews ethical concerns surrounding HealthTech in schools. Expert consultations were conducted through a project team comprising members with expertise related to HealthTech, including developers, a teacher, a school counselor, and university researchers, leading to the identification of 52 ELSI concerns categorized into eight domains: consent, rights and privacy, algorithms, information management, evaluation, utilization, role of public institutions, and relationships with private companies. Using Japan as a case study, we examine regulatory and cultural factors affecting HealthTech adoption in schools. The findings reveal critical challenges such as ensuring informed consent for minors, protecting student privacy, preventing biased algorithmic decision-making, and maintaining transparency in data management. Additionally, institutional factors, including the role of public education policies and private sector involvement, shape the ethical landscape of HealthTech implementation. The study underscores the need for a multi-faceted approach to mitigate risks such as data misuse, inequitable access, and algorithmic bias, ensuring the ethical and effective use of HealthTech in education. The fundamental ELSI framework for HealthTech, including privacy, consent, and algorithmic concerns, can be applied to educational systems worldwide, while aspects related to public education policies should be considered in accordance with the specific context of each country and culture. Incorporating HealthTech into educational systems helps address the barriers associated with traditional approaches, including limited resources, cost constraints, and logistical challenges. University and HealthTech company researchers, educators, and stakeholders should ensure that HealthTech projects consider diverse ELSI concerns at every stage before and during implementation.
Background: Photoplethysmography (PPG) is an optical technique for monitoring cardiovascular activity by measuring blood volume variations in tissue beds. PPG signals contain valuable information about heart rate variability, respiratory rate, and other vital signs. Many of these metrics are derived from the interbeat interval (IBI), the time difference between consecutive systolic peaks. Hence, distinguishing whether a detected PPG peak is a valid systolic peak is an essential task. Kolmogorov-Arnold Networks (KANs) are a recently proposed class of deep neural networks capable of extracting complex patterns and features from time-series data, and they have not yet been evaluated for this kind of problem. Objective: This paper aims to apply KANs to the classification of systolic peaks in PPG signals. The goal is to find the architecture that minimizes the cross-entropy in this application and to compare it to benchmarks such as multilayer perceptrons (MLPs). The scenarios focus on model restrictions common in wearable technologies. Methods: Five datasets, consisting of four pulse-PPG datasets and one finger-PPG dataset, were used for model training and testing. Random hyperparameter search was applied to the models, restricted to 1-3 hidden layers and 2-50 neurons per layer to reflect wearable limitations. A comparison with classical classifiers such as linear discriminant analysis, gradient boosting, and k-nearest neighbors was performed. Finally, the KAN method was compared to state-of-the-art rule-based PPG peak detectors: Elgendi, HeartPy, Li, and Seongsil. Results: Across all datasets, KANs generally outperformed MLPs with a single hidden layer (n=1), exhibiting higher mean accuracies (range: 0.9185-0.9806 vs. 0.9177-0.9799), F1 scores (0.9451-0.9899 vs. 0.9447-0.9896), and AUC (0.8013-0.9635 vs. 0.7984-0.9621). At n=2, only minor discrepancies appeared, mainly at the fourth decimal place. MLPs regained the advantage at n=3, with mean accuracies surpassing KANs in four out of five datasets (accuracy range: 0.9188-0.9808 vs. 0.9090-0.9803). KANs outperformed the classical classifiers on three datasets, while MLPs did so on four. Evaluation against rule-based methods confirmed improvements when adopting KANs, which showed the best performance on three datasets, while HeartPy showed top-tier results on the remaining two. Conclusions: The results indicate the effectiveness of KANs in accurately identifying and classifying systolic peaks, with performance superior to MLPs in scenarios with few hidden layers, a restriction often imposed on wearable solutions. KANs also showed the best performance compared with traditional classifiers and rule-based peak detectors for most tested devices. Clinical Trial: CAAE 07327119.8.0000.5599
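Editorial note: KAN implementations vary between libraries, so the sketch below only illustrates the MLP benchmark side of the comparison under the wearable-style size constraint described above (a single small hidden layer), evaluated with the same metrics the paper reports. The features and labels are synthetic stand-ins for windows around candidate PPG peaks, not the study's datasets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic candidate-peak feature windows (hypothetical stand-in data).
X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# One hidden layer of 32 neurons, within the 2-50 neuron range the paper explores.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X_tr, y_tr)
proba = mlp.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)

print("accuracy:", round(accuracy_score(y_te, pred), 4))
print("F1:", round(f1_score(y_te, pred), 4))
print("AUC:", round(roc_auc_score(y_te, proba), 4))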
Background: Social isolation and loneliness have considerable implications for health. In particular, gender is the most important factor contributing to social isolation and loneliness, with different genders adopting different strategies for coping with stress and participating in social interactions. However, because researchers tend to adopt different approaches when examining gender in the field of social isolation, mixed findings have been reported. Objective: This study conducts a review of intervention programs for social isolation and loneliness, focusing on their consideration of gender. Methods: A scoping review was conducted as per the JBI Manual for the Synthesis of Evidence. A comprehensive literature search, including hand searching, was conducted across six English-language databases for articles and reports published in 2013-2023, with the papers retrieved by three co-authors. The study's search strategy was developed in consultation with the librarian at X University. Results: The comprehensive search identified 1,282 relevant articles and reports. Among these, 11 articles were selected for analysis. Women made up the majority of participants in 10 of these studies. In particular, exercise workshops proved to be effective in alleviating social isolation and loneliness, and meditation and laughter therapy programs effectively mitigated loneliness. However, none of the studies considered gender-specific issues when devising their research objectives and outcomes. Conclusions: The study's findings indicate that, in the future, gender should be considered in the planning and execution of intervention programs for individuals experiencing social isolation and loneliness. Crucially, interventions that seek to encourage social interactions or promote social participation without considering gender-specific issues are unlikely to be effective.
Background: Opportunities for neonatal intensive care unit (NICU) training are limited for medical and nursing students owing to patient safety concerns and the complexities of neonatal care. Objective: To enhance understanding of neonatal intensive care, we developed a serious game that simulates a comprehensive NICU experience. Methods: The game was developed over 14 months by six members (a neonatologist, four medical students, and an art student) at a total cost of 10,000 USD. After the storyline was finalized by the neonatologist, the team used TyranoBuilder, a user-friendly visual novel tool, to create the game. The game is divided into six chapters, offers a comprehensive NICU experience, and is available as a free iOS/Android/Steam application. After completing the game, players were invited to participate in an optional survey to gather demographic data and user feedback. Results: The game had been downloaded 2,090 times on iOS and 506 times on Android as of November 2024. Survey responses were obtained from 160 participants, with healthcare professionals and students comprising 46.3% of respondents. The highest proportion of respondents (36.9%) were in the 20-29 age range. The mean scores for game length, difficulty, and gameplay were 3.05, 2.49, and 3.65, respectively, indicating a balanced design. Regarding educational value, the mean scores for empathy with the story, usefulness for knowledge acquisition, and effectiveness of serious games as learning tools were all above 4, suggesting appropriate educational content. Conclusions: We developed a serious game to enhance neonatal care education at a low cost through collaboration between a neonatologist and students.
Background: Despite the growing potential of large language models (LLMs) in mental health services, their application in diagnostic processes remains limited. Objective: This study describes the development and evaluation of CapyEngine, an LLM-powered diagnostic tool designed to assist in the diagnosis of mental disorders. Methods: We developed and evaluated CapyEngine in three phases. In Phase 1, we created a symptom database using the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR). We then developed CapyEngine's architecture using LLMs, embedding models, and vector search. In Phase 2, we conducted interviews and usability tests with mental health professionals (n = 7) to identify challenges in traditional diagnostic practices and potential areas for CapyEngine's application. In Phase 3, we compared CapyEngine's diagnostic accuracy against ChatGPT-4 and clinicians using 35 standardized case-scenario test questions from psychiatry and clinical psychology board exams. Questions were input into CapyEngine, and the top 10 recommended diagnoses were obtained. ChatGPT-4 was prompted to provide the top ten potential diagnoses for each question. Clinicians (n = 3) received similar instructions to generate at least 10 potential diagnoses per question. Responses were then analyzed to determine accuracy within the top 10, top 5, and top 1 diagnoses. Results: CapyEngine achieved 62.86% accuracy in identifying the correct diagnosis within the top 10 options and 48.57% accuracy for the top diagnosis. ChatGPT-4 showed 100% accuracy within the top 10 and top 5 options, but only 31.43% for the top diagnosis. Clinicians outperformed both AI models for the top diagnosis (57.14%) and achieved 82.86% accuracy within the top 10. Conclusions: CapyEngine shows promise in augmenting the mental health diagnostic process. Future enhancements will focus on incorporating non-symptom-based diagnostic factors, developing specialized embedding models, and addressing cultural sensitivity. Further research is needed to assess the risks and benefits of integrating AI tools like CapyEngine into clinical workflows and to address barriers to adoption.
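Editorial note: the abstract does not detail CapyEngine's internals, so the sketch below is only a generic illustration of the retrieval step an embedding-plus-vector-search tool of this kind might use: embed a case description, compare it against embeddings of DSM-style symptom profiles, and return the top-k candidate diagnoses. The diagnosis list and embeddings are random placeholders, not a real model's output.

import numpy as np

rng = np.random.default_rng(42)
diagnoses = ["Major depressive disorder", "Generalized anxiety disorder",
             "Bipolar I disorder", "Panic disorder", "Posttraumatic stress disorder"]
diagnosis_vectors = rng.standard_normal((len(diagnoses), 384))   # placeholder symptom-profile embeddings
query_vector = rng.standard_normal(384)                          # placeholder embedded case description

def top_k(query, matrix, labels, k=3):
    # Cosine similarity between the query and each stored profile, highest first.
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)[:k]
    return [(labels[i], float(sims[i])) for i in order]

for name, score in top_k(query_vector, diagnosis_vectors, diagnoses):
    print(f"{name}: {score:.3f}")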
Background: In response to the increasing incidence and prevalence of hypertension, Ethiopia has been piloting hypertension control at the primary healthcare level in selected sentinel sites. However, no evaluation has been conducted, and the program's successes and failures have not been ascertained. Objective: This study aimed to evaluate whether the sentinel hypertension surveillance system in Mojo City was operating efficiently and effectively. Methods: A concurrent embedded mixed-methods (quantitative/qualitative) study was conducted in two sentinel health centers in Mojo City, Oromia region of Ethiopia. The usefulness of the system and nine system attributes were assessed via key informant interviews, observations, and record reviews. Qualitative data were analyzed manually via thematic analysis, whereas quantitative data were analyzed using SPSS software, version 25.0. Results: The study invited 14 key informants, all of whom were willing to participate in the interviews. The completeness and timeliness of reports were 98% and 100%, respectively. The sensitivity, positive predictive value, and representativeness were 45.3%, 92.6%, and 22%, respectively. Nearly three-fourths (71.4%) of key informants perceived the system as flexible, while half considered it unstable owing to factors such as inadequate training and the lack of supportive supervision and a feedback system. Health facilities did not conduct routine data analysis and interpretation, nor did they use the data for action. Conclusions: The surveillance system in Mojo City was simple, flexible, acceptable, and had a high positive predictive value, but it was less sensitive, unrepresentative, and unstable. Routine data analysis and use of data for action, adequate training, and a feedback system are needed to optimize the system's performance and ensure its sustainability.
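Editorial note: for context, the sensitivity and positive predictive value quoted above are computed from surveillance case counts as in the minimal sketch below. The counts shown are invented purely for illustration and are not the evaluation's data.

# Hypothetical counts: cases detected correctly, false alarms, and missed cases.
tp, fp, fn = 50, 10, 50
sensitivity = tp / (tp + fn)    # proportion of true cases the system detected
ppv = tp / (tp + fp)            # proportion of detected cases that were truly cases
print(f"Sensitivity = {sensitivity:.1%}, PPV = {ppv:.1%}")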
Background: Social media has profoundly transformed consumer behavior and marketing practices within the hospitality industry. Understanding how these changes influence hotel selection and booking decisions, the effectiveness of social media strategies, and shifts in reputation management practices is crucial for hotels aiming to enhance their digital presence and customer engagement. Objective: The study aims to analyze the influence of social media on consumer behavior, audience engagement, and reputation management in hotel selection and booking decisions, as well as to compare pre- and post-social media reputation management practices. Methods: Data were collected through surveys and interviews with hotel guests and marketing professionals. The analysis included descriptive statistics and comparative assessments of pre- and post-social media reputation management practices. The effectiveness of various social media strategies was evaluated based on respondent feedback. Results: The findings indicate that promotional offers, user reviews, and visual content significantly influence consumer behavior in hotel selection and booking decisions. Collaboration with influencers, user-generated content, live video content, and social media advertising are the most effective strategies for audience engagement and brand building, each with a 100% effectiveness rate. There is a notable shift in reputation management practices, with a decrease in promptly addressing issues and providing compensation, and an increase in seeking private resolutions through direct messages in the post-social media era. Conclusions: Social media plays a critical role in shaping consumer behavior and brand perception in the hotel industry. Effective social media strategies, particularly those involving influencers and user-generated content, are essential for engaging audiences and building brand identity. The transition to social media has also led to changes in reputation management, emphasizing the importance of balancing transparency with discreet conflict resolution. Hotels should prioritize comprehensive social media strategies that include collaboration with influencers, regular updates, and engaging content. Encouraging positive user-generated content and implementing robust monitoring and response systems are essential. Training staff on social media engagement and conflict resolution can further improve reputation management. Ongoing adaptation to emerging social media trends is crucial for maintaining effectiveness. This study provides valuable insights into the impact of social media on consumer behavior and marketing in the hospitality industry. By identifying effective social media strategies and examining changes in reputation management, it offers practical guidance for hotels seeking to enhance their digital presence and customer engagement. The findings underscore the importance of leveraging social media to achieve greater business success and maintain a positive brand reputation.
Background: Acquired Brain Injury (ABI) is a leading global cause of morbidity, affecting millions who often suffer from a diverse range of complications and limited access to appropriate care. Advances in digital technology offer promising opportunities for more effective and accessible assessments; however, there is limited comprehensive research on the scope and utilization of these innovations. Objective: This scoping review aimed to identify and synthesize contemporary research on digital technologies that aid screening or assessment of ABI complications, in order to uncover trends, themes, and priorities for future research. Methods: Using the Arksey and O'Malley framework, a systematic search was conducted across Embase, MEDLINE, and Scopus, with additional searches in four trial registries to capture grey literature. A search string incorporating terms related to "ABI," "clinical assessment," and "digital tools" was developed a priori. Studies from 2013 to 2024 leveraging digital technologies for the assessment of ABI complications were included. Exclusion criteria comprised studies involving bespoke hardware, non-human subjects, or review articles. Data synthesis and domain mapping were performed. Results: Of 5,293 studies retrieved, 88 met the inclusion criteria: 2 retrospective studies, 4 qualitative studies, 35 cohort studies, 42 cross-sectional studies, and 5 randomized controlled trials. The median sample size was 26 participants with ABI; 51 studies also involved non-ABI participants (median of 10 participants). Most studies (n=70) focused solely on traumatic brain injury (TBI), with 36 exclusively on mild TBI or concussion; 16 included mixed ABI etiologies. Digital platforms varied, with 45 studies using smartphone or tablet technologies, 23 using PC or web-based platforms, 11 using telemedicine solutions, and 9 using virtual reality (VR) platforms.
The predominant research themes included the use of digital technology to aid in screening for TBI and identifying symptoms or functional outcomes; the assessment of cognition and communication; and comprehensive consultation. Most tools were well tolerated, with accuracy often described as comparable to standard assessments. However, the majority of studies had small sample sizes, lacked long-term outcomes, and were limited in the diversity of patients included, and few studies assessed digital tools for comprehensive evaluation. Conclusions: This investigation provides clinicians and researchers with an extensive overview of current research trends and highlights the need for larger, more rigorous studies to optimize the use of digital technologies in ABI assessment.
Current studies are often small-scale, designed as pilot or feasibility trials, and show variability in their focus, leaving gaps in the assessment of common complications such as pain, seizures, or participation restrictions. Expanding research into underexplored ABI complications, broadening the scope of assessments and including diverse populations will be critical for advancing the field and improving outcomes for individuals with ABI. Clinical Trial: NA
Background: Noncommunicable diseases (NCDs) pose a significant burden in the Philippines, with cardiovascular and cerebrovascular diseases among the leading causes of mortality. The Department of Health implemented the Philippine Package of Essential Non-Communicable Disease Interventions (PhilPEN) to address this issue. However, healthcare professionals faced challenges in implementing the program due to the cumbersome nature of the multiple forms required for patient risk assessment. To address this, a mobile medical app, the PhilPEN Risk Stratification app, was developed for community health workers (CHWs) using the extreme prototyping framework. Objective: This study aimed to assess the usability of the PhilPEN Risk Stratification app using the user version of the Mobile App Rating Scale (uMARS) and to determine the utility of uMARS in app development. The secondary objective was to achieve an acceptable (>3) uMARS rating for the app, highlighting the significance of quality monitoring through validated metrics in improving the adoption and continuous iterative development of medical mobile apps. Methods: The study employed a qualitative research methodology, including key informant interviews, linguistic validation, and cognitive debriefing. The extreme prototyping framework was used for app development, involving iterative refinement through progressively functional prototypes. CHWs from a designated health center participated in the app development and evaluation process by providing feedback, using the app to collect data from patients, and rating it through uMARS. Results: The uMARS scores for the PhilPEN Risk Stratification app were above average, with an Objective Quality rating of 4.05 and a Personal Opinion/Subjective Quality rating of 3.25. The mobile app also garnered a 3.88-star rating. Under Objective Quality, the app scored well in Functionality (4.19), Aesthetics (4.08), and Information (4.41), indicating its accuracy, ease of use, and provision of high-quality information. The Engagement score (3.53) was lower due to the app's primary focus on healthcare rather than entertainment. Conclusions: The study demonstrated the effectiveness of the extreme prototyping framework in developing a medical mobile app and the utility of uMARS not only as a metric but also as a guide for authoring high-quality mobile health apps. The uMARS metrics were beneficial in setting developer expectations, identifying strengths and weaknesses, and guiding the iterative improvement of the app. Further assessment with more CHWs and patients is recommended. Clinical Trial: N/A
Background: Current information and communication technologies, digital literacy, and nurses' ready access to communication and information devices provide new ways, and greater intention, to access information for technical-scientific updating, helping to ensure the quality and safety of health care. M-learning offers a flexible and accessible alternative for continuing professional education, overcoming barriers such as time constraints and financial burden. Objective: To evaluate the effectiveness of m-learning on nurses’ knowledge retention of chronic obstructive pulmonary disease self-management, using a Massive Open Online Course with integrated virtual clinical simulation. Methods: A quasi-experimental pre- and post-test study, with no control group, was conducted with 168 nurses from a Portuguese hospital. The intervention included an asynchronous online course with 13 modules. Knowledge retention was assessed by comparing the mean scores before and after the course. Results: The results indicated a significant increase in knowledge retention. The participants' average score increased from 59.97% in the initial assessment to 84.05% in the final assessment (p<.001). Nurses with a master's degree exhibited a higher level of basic knowledge than those with a bachelor's degree. The course completion rate was 93.45%, reflecting strong engagement facilitated by gamification elements and the content’s clinical relevance. Conclusions: M-learning is useful in nurses' lifelong learning, offering flexibility and more effective support for clinical practice. Integrating virtual simulation and gamification boosted motivation and reduced drop-out rates, highlighting the potential of m-learning for lifelong learning in healthcare. This study confirms the effectiveness of m-learning in improving knowledge retention in nursing and shows that it is a valuable approach to lifelong learning, promoting quality and safety in the delivery of healthcare.
Background: Surgical consent forms must convey critical information, yet their complex language can limit patient comprehension. Large language models (LLMs) may improve readability, but evidence of their impact on content preservation is lacking in non-English contexts. Objective: This study evaluates the impact of LLM-assisted editing on the readability and content quality of surgical consent forms in Korean, focusing on standardized liver resection consent documents across multiple institutions. Methods: Standardized liver resection consent forms were collected from seven South Korean medical institutions and simplified using ChatGPT-4o. Readability was assessed using KReaD and Natmal indices, while text structure was evaluated based on character count, word count, sentence count, words per sentence, and difficult word ratio. Content quality was analyzed across four domains—Risk, Benefit, Alternative, and Overall Impression—using evaluations from seven liver resection specialists. Statistical comparisons were conducted using paired t-tests, and a linear mixed-effects model (LME) was applied to account for institutional and evaluator variability. Results: AI-assisted editing significantly improved readability, reducing the KReaD score from 1777 ± 28.47 to 1335.6 ± 59.95 (p<0.001) and the Natmal score from 1452.3 ± 88.67 to 1245.3 ± 96.96 (p=0.007). Sentence length and difficult word ratio decreased significantly, contributing to increased accessibility. However, content quality analysis showed a decline in risk description scores (2.29 ± 0.47 before vs. 1.92 ± 0.32 after, p=0.0549) and overall impression scores (2.21 ± 0.49 before vs. 1.71 ± 0.64 after, p=0.134). The LME confirmed significant reductions in risk descriptions (β₁ = -0.371, p=0.012) and overall impression (β₁ = -0.500, p=0.025), suggesting potential omissions in critical safety information. Despite this, qualitative analysis indicated that evaluators did not find explicit omissions but perceived the text as overly simplified and less professional. Conclusions: While LLM-assisted surgical consent forms significantly enhance readability, they may compromise certain aspects of content completeness, particularly in risk disclosure. These findings highlight the need for a balanced approach that maintains accessibility while ensuring medical and legal accuracy. Future research should include patient-centered evaluations to assess comprehension and informed decision-making, as well as broader multilingual validation to determine LLM applicability across diverse healthcare settings. Clinical Trial: N/A
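For readers who want to see what the comparisons named in this abstract look like in practice, the following is a minimal Python sketch of a paired t-test on readability plus a linear mixed-effects model of content quality. All data, column names, and effect sizes are invented placeholders, not the study's Korean consent-form scores or its actual analysis code.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
institutions = [f"hosp_{i}" for i in range(1, 8)]   # hypothetical 7 institutions
evaluators = [f"rater_{j}" for j in range(1, 8)]    # hypothetical 7 specialist raters

# Paired t-test on readability: one KReaD-style score per institution and version.
kread = pd.DataFrame({
    "original": rng.normal(1777, 30, len(institutions)),
    "simplified": rng.normal(1336, 60, len(institutions)),
}, index=institutions)
t_stat, p_val = stats.ttest_rel(kread["original"], kread["simplified"])
print(f"Readability paired t-test: t={t_stat:.2f}, p={p_val:.4f}")

# Mixed-effects model on risk-description quality: fixed effect of version,
# random intercept per institution to absorb institutional variability.
rows = [{"institution": inst, "evaluator": ev, "version": v,
         "risk_score": rng.normal(2.3 if v == "original" else 1.9, 0.3)}
        for inst in institutions for ev in evaluators for v in ("original", "simplified")]
quality = pd.DataFrame(rows)
result = smf.mixedlm("risk_score ~ version", data=quality, groups=quality["institution"]).fit()
print(result.summary())
```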
Background: In online health communities (OHCs), signaling theory has been widely applied to address information asymmetry and reduce uncertainty. Specifically, various signals are evaluated to convey the quality of healthcare services and influence patients' decision-making. However, the literature on signals in online health communities faces challenges, including arbitrary and fragmented classifications of signals and the lack of a common framework. Objective: To establish a common foundation for understanding the role of signals in online health communities, this study aims to provide a comprehensive framework for the signals conveyed in these communities and their influence on managing information asymmetry between physicians and patients. Methods: A systematic literature review using narrative analysis was conducted, summarizing 80 articles on signals in online health communities. The review aimed to classify, clarify, and explore the nature of these signals, their relationships, and the underlying mechanisms in the context of OHCs. Results: Among the 80 studies analyzed, 96.3% focused on the effects of one or more signals. However, only 2.5% examined the characteristics of signalers or their moderating effects, such as age, gender, and competence. Additionally, 31.3% explored signal interactions, including comparisons between online and offline signals and bundled services, while 30% investigated how environmental factors, such as uncertainty and consistency, affect signal transmission. Most studies (75%) concentrated on informative signals, with a notable increase in research on affective signals. Lastly, research on the interaction between affective signals and the environment remains limited. Conclusions: This framework provides a more comprehensive understanding of how signals in online health communities manage information asymmetry. It clarifies the construct of signals, explores their relationships, and outlines their mechanisms. Additionally, the study identifies gaps in the existing literature and offers recommendations for future research directions to enhance the role of online health communities in addressing information asymmetry in medical care.
Background: Lupus erythematosus (LE) is a chronic autoimmune disease that significantly impacts patients' quality of life. Photosensitivity is a key impairment that severely limits quality of life, especially in cutaneous lupus erythematosus (CLE), where exposure to sunlight can lead to rashes, exacerbations, and pain. In systemic lupus erythematosus (SLE), other manifestations such as joint pain, fatigue, and organ damage may contribute to decreased physical function and emotional distress. Mobile health applications (MHA) offer potential support for comprehensive management of the symptoms mentioned above. However, there is a lack of systematic analysis of available lupus management apps. Objective: This study aims to systematically identify publicly available German- or English-language MHA for lupus management and to assess their quality by surveying both patients and physicians. Methods: A systematic search and assessment of German or English mobile apps for patients with lupus available in the Google Play Store and Apple App Store was conducted independently by two reviewers. The two apps that met all relevant criteria were then reviewed independently by seven physicians using the German Mobile Application Rating Scale (MARS) and the System Usability Scale (SUS). Subsequently, they were reviewed by five patients (three with SLE and two with CLE) using the user version of MARS (uMARS) and the SUS. Additionally, the affinity for technology interaction (ATI) scale was administered to both patients and physicians to evaluate technical affinity in both groups. Results: In total, 29 apps were available in the Apple App Store and 26 in the Google Play Store, with 18 apps present and downloadable on both platforms. Of these 18 apps, 16 were excluded because they did not meet the inclusion and exclusion criteria. Only two apps, Lupus Log and Lupus Minder, met all the required criteria and were included in the study. Mean MARS scores varied from 2.61/5 to 4.17/5 and mean SUS scores from 17.5/100 to 100/100 across physicians. The app with the highest mean overall MARS score was Lupus Log, which was rated 3.91/5 on average by the physicians. Patients evaluated the app with a comparable mean uMARS score (3.95/5). Technical affinity, as measured by the ATI, was higher in patients than in physicians (3.9 vs 3.68). Conclusions: Systematic identification and evaluation showed high-quality apps for patient-centered lupus MHA, as indicated by MARS and uMARS scores greater than 3 for both Lupus Log and Lupus Minder.
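Because both physician and patient evaluations above report System Usability Scale (SUS) scores on a 0-100 scale, a short sketch of the standard SUS scoring rule may help readers interpret those numbers. The ten item responses below are hypothetical, not taken from this study.

```python
# Standard SUS scoring: odd (positively worded) items contribute (response - 1),
# even (negatively worded) items contribute (5 - response); the sum is scaled by 2.5.
def sus_score(responses):
    """Convert ten 1-5 SUS item responses into a 0-100 usability score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # hypothetical respondent: 82.5
```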
Background: Hidradenitis suppurativa (HS) can take seven to ten years to diagnose and has a long and complicated disease course. Many individuals may turn to online resources to gather information about their condition. While online resources can promote greater shared decision-making and improve communication between patients and physicians, poor quality and low readability of websites can mislead patients with incorrect information. Objective: This study’s aim was to evaluate the quality and readability of HS websites found on Google and Bing in order to identify reliable, well-written resources that could help patients better understand their condition. Methods: The DISCERN instrument and Flesch-Kincaid readability metrics were used to evaluate the quality and readability of HS websites. Results: Google and Bing had average DISCERN scores of 54.05 and 59.83, respectively. Ten of the websites were either written or reviewed by a physician. Websites written or reviewed by physicians had statistically significantly higher DISCERN scores (p = .018). The average reading grade level for Google was 10.78 ± 2.40, while the average for Bing was 10.48 ± 1.87. The NIH recommends that medical information be written at a 6th to 7th grade reading level; of the ten articles written or reviewed by physicians, half met this criterion. Conclusions: This study highlights the variable quality and readability of HS websites available on Google and Bing. Additionally, it identifies websites that meet both high-quality standards and the NIH-recommended reading grade level.
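As an illustration of the readability side of this kind of evaluation, the sketch below computes Flesch-Kincaid metrics with the third-party textstat package on a made-up sample passage. This is not the study's pipeline, and DISCERN scoring remains a manual, rater-based instrument that cannot be automated this way.

```python
import textstat  # third-party readability package

# Placeholder patient-facing text, not taken from any of the reviewed websites.
sample = (
    "Hidradenitis suppurativa is a long-term skin condition that causes painful "
    "lumps under the skin, most often in areas where skin rubs together."
)

grade = textstat.flesch_kincaid_grade(sample)
ease = textstat.flesch_reading_ease(sample)
print(f"Flesch-Kincaid grade level: {grade:.1f}")
print(f"Flesch reading ease: {ease:.1f}")
# The NIH-recommended target discussed above corresponds to roughly a 6th-7th grade level.
```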
Background: The pervasive use of smartphones has led to increased connectivity but also heightened levels of distraction and stress. While the Do Not Disturb (DND) mode was introduced to mitigate these issues, its current implementations often lack the sophistication needed to effectively balance connectivity and focused work time. This study explores the potential of an enhanced DND feature, termed "Exceptional Apps Notification," which allows users to receive notifications from selected apps while silencing others, aiming to reduce stress and improve focus. Objective: The primary objective of this study was to evaluate the impact of a modified DND mode on users' stress levels, productivity, and overall satisfaction. Specifically, we aimed to assess whether a personalized DND mode could alleviate concerns about missing important notifications while promoting periods of focused work. Methods: A mixed-methods approach was employed, involving user testing with a working prototype, structured questionnaires, physiological feedback, and semi-structured interviews. The study included 130 participants: 50 students, 60 professionals, and 20 retirees. Participants interacted with the prototype, which allowed them to customize notification settings and schedule DND activation. Data were collected through the I-PANAS-SF questionnaire, user experience surveys, and smartphone tracking apps to measure stress levels, productivity, and app usage patterns. Quantitative data were analyzed using SPSS, while qualitative data were coded and analyzed using ATLAS.ti 9. Results: The results indicated significant improvements in stress reduction and productivity. Specifically, 82% of participants reported a decrease in negative emotions such as nervousness and anxiety when using the modified DND mode. Additionally, 75% of participants reported higher levels of positive affect, including increased focus and determination. Physiologically, participants exhibited reduced stress indicators, such as lower heart rate variability, during DND activation. Productivity metrics showed a 28% decrease in phone unlocks during focused work periods and a 35% increase in time spent on productivity apps. Social media usage during work hours dropped by 42%. User experience surveys revealed that 88% of participants found the modified DND mode intuitive and user-friendly, with 82% expressing increased confidence in managing notifications. Furthermore, 91% of participants appreciated the ability to manage notifications effectively without feeling overwhelmed. Conclusions: The findings suggest that a personalized DND mode, such as the "Exceptional Apps Notification" feature, can significantly reduce stress, enhance focus, and improve user satisfaction. By allowing users to selectively receive notifications from important apps, the modified DND mode addresses the psychological burden of constant connectivity and mitigates the fear of missing out (FOMO). These results have important implications for smartphone interface design, highlighting the need for more customizable and context-aware notification systems. Future research should explore long-term effects and further customization options to optimize user experience and productivity.
Background: There is growing awareness of the broader health-related harms of social media, yet research on social media-related injury mortality and morbidity remains limited. Emerging evidence suggests links between excessive social media use and increased risks of self-harm, cyberbullying-related distress, and dangerous viral challenges, but there has been limited research on the link between time spent on social media and environmental risk-taking, such as risky selfies. Comprehensive epidemiological studies and policy-driven interventions remain scarce, highlighting the need for further investigation into the public health implications of digital engagement. Objective: This research aimed to examine the relationship between self-reported time spent on social media, influencer status, and risk-taking behaviours among Australians, considering implications for injury prevention. Methods: In a national cross-sectional survey of Australian social media users (n = 509), participants reported their average daily time spent on social media, whether they identified as a social media influencer, and whether they had ever engaged in risk-taking behaviour to create social media content. Chi-square and t-tests were conducted to explore associations. Results: Among participants, 9.4% self-reported engaging in risk-taking behaviour in the outdoors. Influencers were significantly more likely to report risk-taking (48.3%) than non-influencers (4.4%) (χ² = 110.57, p < 0.001). Risk-takers also spent significantly more time on social media (M = 2.05, SD = 1.04) compared to non-risk-takers (M = 1.37, SD = 1.04), t(57.22) = 4.31, p < 0.001. Conclusions: Interventions such as real-time alerts, pop-up warnings, and geolocated safety information may help curb risky behaviours among social media users, particularly influencers and heavy users. Social media platforms and policymakers should collaborate to promote safer behaviours and raise awareness about the risks associated with creating content. Targeted interventions for heavy social media users, and for those who consider themselves social media influencers, are required to reduce risky behaviours that may lead to injury in outdoor settings.
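A minimal sketch of the chi-square and unequal-variance (Welch) t-test comparisons described in the Methods is shown below. The contingency counts and group values are illustrative placeholders, not the survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = influencer / non-influencer,
# columns = reported risk-taking yes / no.
table = np.array([[14, 15],
                  [21, 459]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p:.4f}")

# Welch's t-test on daily social media time (hours) for risk-takers vs. others;
# Welch's correction gives fractional degrees of freedom, as reported above.
risk_takers = np.random.default_rng(0).normal(2.05, 1.04, 48)
others = np.random.default_rng(1).normal(1.37, 1.04, 461)
t, p = stats.ttest_ind(risk_takers, others, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```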
Background: Social media has become a vital source of cancer-related health information, offering patients, caregivers, and the public a platform for sharing knowledge and experiences. However, concerns regarding the quality, accuracy, and potential misinformation of cancer information on social media persist. Objective: This study systematically reviewed literature published between 2014 and 2023 evaluating the quality of cancer-related information on social media. It aimed to identify common characteristics of these studies, assess patterns in information quality across platforms and cancer types, and explore factors associated with study outcomes. Methods: This systematic review searched PubMed, Web of Science, Scopus, and Medline. Studies were included if they analyzed cancer-related social media content and assessed information quality using standardized tools (e.g., the DISCERN tool). Extracted data included study characteristics, social media platform, cancer types, and quality assessment methods. Meta-analysis and ordinal logistic regression analysis were performed to pool findings from multiple studies. Results: A total of 75 studies were included, covering a range of social media platforms, such as YouTube, TikTok, Facebook, Twitter, and Reddit. Findings indicated that video-based platforms, particularly YouTube and TikTok, were the most studied but also contained misinformation. Overall, 27% of social media cancer-related content included misinformation, with common false claims regarding alternative treatments and unproven therapies. Studies assessing rare cancers reported lower information quality compared to those focusing on common cancers. Additionally, content from medical professionals was of higher quality but less engaging than user-generated content. Conclusions: While social media serves as an essential platform for cancer-related health information, concerns remain about misinformation, completeness, and actionability. Future research should prioritize improving information accuracy, leveraging AI for content verification, and promoting authoritative sources to enhance public health outcomes.
Background: Diabetic foot ulcers (DFUs) are a life-changing complication of diabetes. There is increasing evidence that plantar temperature monitoring can reduce the incidence and recurrence of DFUs. Once-daily foot temperature monitoring is the current guideline recommendation for identifying early signs of foot inflammation and DFUs. However, single readings of physiological signals are known to increase the risk of misdiagnosis when the signal fluctuates throughout the day. Objective: The purpose of this study was to evaluate whether intra-day temperature asymmetry signals were stable or varied as a function of time in individuals at risk of DFU. Methods: Sixty-four participants with diabetes (mean age = 68 ± 13.8 years) were provided with multi-modal sensory insoles (Orpyx® Sensory Insoles) to monitor continuous temperature data at five plantar locations, sampled once per minute, during a 90-day study window. A total of 1,080 data days (5,400 contralateral temperature asymmetry signals) were included. The Augmented Dickey-Fuller test was used to determine whether the temperature asymmetry signals were stationary (stable) or non-stationary (time-varying). Results: The majority (82%) of temperature asymmetry signals were time-varying, with intra-day fluctuations potentially driven by physiological and environmental factors. Nearly half (44%) of time-varying signals included a mix of concerning (>2.2°C) and non-concerning (≤2.2°C) measurements, indicating the limitations of single measurements in reliably identifying DFU risk. Statistical analysis revealed significant variability in stable versus time-varying patterns both within and across participants. Notably, days with time-varying signals and concerning asymmetry measurements showed dispersion of concerning periods across participants, time windows, and days, rather than consistent daily patterns. Conclusions: Continuous monitoring could provide deeper insights into plantar temperature dynamics, uncovering associations with individual-specific factors such as vascular status, historical ulcer locations, activity, gait, and foot anatomy. These findings support the need for personalized monitoring protocols and leveraging continuous data to better inform clinical decision-making.
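For orientation, here is a minimal sketch of the stationarity test named above (the Augmented Dickey-Fuller test, via statsmodels) applied to a synthetic once-per-minute asymmetry signal. The signal and decision rule are placeholders, not the Orpyx insole data or the study's classification procedure.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
# Placeholder signal: slow drift plus noise over roughly 18 waking hours at 1 sample/minute.
minutes = np.arange(18 * 60)
asymmetry = 0.5 + 0.002 * minutes + rng.normal(0, 0.3, minutes.size)

adf_stat, p_value, *_ = adfuller(asymmetry)
if p_value < 0.05:
    print(f"Stationary (stable) signal: ADF={adf_stat:.2f}, p={p_value:.3f}")
else:
    print(f"Non-stationary (time-varying) signal: ADF={adf_stat:.2f}, p={p_value:.3f}")
```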
Background: Adolescence is a critical period for stress vulnerability, with high levels of stress linked to anxiety, depression, ADHD, and sleep problems. While digital mental health interventions (DMHIs) are increasingly used to support adolescent mental health, little is known about their effectiveness in managing stress. Measurement-based collaborative care models (CoCM) in DMHIs may provide a structured approach to addressing adolescent stress, but research on their impact remains limited. Objective: The purpose of this study is to explore the effectiveness of a CoCM DMHI in managing stress among adolescents. We aimed to (1) quantify self-reported stress levels and identify factors associated with elevated stress, (2) assess changes in stress during care, and (3) explore key factors influencing stress reduction. Methods: Adolescents (ages 13-17 years) who received coaching and therapy through a CoCM DMHI (Bend Health Inc.) completed mental health assessments at enrollment and monthly throughout care. Associations between stress levels and demographic factors, co-occurring mental health symptoms, and caregiver well-being were used to identify predictors of stress, and mixed-effects models were used to assess changes in stress during care. Results: At enrollment, 91.5% of adolescents reported elevated stress. Higher stress levels were associated with co-occurring mental health and sleep problems, as well as female sex (all P<.05). Caregiver stress (t(2152)=3.90, P<.001) and sleep problems (t(2152)=3.82, P<.001) were linked to adolescent stress, but caregiver burnout was not (t(2152)=1.02, P=.31). During care, 80.9% of adolescents experienced stress reductions, with improvements emerging after one month. In adolescents with a caregiver reporting co-occurring stress at enrollment, non-elevated caregiver stress during care was associated with larger improvements in adolescent stress (t(248.73)=-2.27, P=.024). Adolescents with elevated anxiety showed larger stress reductions compared to those with non-elevated anxiety (t(3369)=-2.77, P=.006). Conclusions: Stress levels were closely linked to co-occurring mental health symptoms and to caregiver stress and sleep problems. A CoCM DMHI was effective in reducing adolescent stress, with reductions in caregiver stress and co-occurring elevated anxiety associated with larger improvements, demonstrating its potential for broader stress management. These findings underscore the need for DMHIs to incorporate family-centered approaches, and future research should explore ways to optimize DMHIs for long-term stress reduction and assess their impact on broader mental health outcomes.
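The sketch below shows one way a mixed-effects model of monthly stress scores could be specified, with a random intercept per adolescent and an interaction with caregiver stress status. The data, column names (adolescent_id, months_in_care, caregiver_stress_elevated, stress), and effect sizes are hypothetical; this is not the Bend Health analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for adolescent in range(200):
    baseline = rng.normal(7, 1.5)          # placeholder 0-10 stress scale
    caregiver_elevated = int(rng.integers(0, 2))
    for month in range(4):                 # enrollment plus three monthly assessments
        stress = baseline - 0.6 * month + 0.3 * caregiver_elevated + rng.normal(0, 1)
        rows.append({"adolescent_id": adolescent, "months_in_care": month,
                     "caregiver_stress_elevated": caregiver_elevated, "stress": stress})
df = pd.DataFrame(rows)

# Random intercept per adolescent; fixed effects of time in care, caregiver stress,
# and their interaction (a negative time coefficient indicates stress reduction).
model = smf.mixedlm("stress ~ months_in_care * caregiver_stress_elevated",
                    data=df, groups=df["adolescent_id"])
print(model.fit().summary())
```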
Background: Stroke inevitably results in a range of disabilities. Both virtual reality (VR) and mirror therapy (MT) have shown efficacy in stroke rehabilitation. In recent years, the combination of these two approaches has emerged as a potential treatment for stroke patients. Objective: This systematic review and meta-analysis aims to assess the efficacy of combined VR and MT treatment in stroke rehabilitation. Methods: Five electronic databases were systematically searched for relevant articles published up to January 2025. Randomized controlled trials (RCTs) that investigated combined VR and MT treatment for participants with stroke were included. The risk of bias and the certainty of the evidence were assessed using the Cochrane Collaboration’s tool and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guideline, respectively. Results: A total of 293 participants across 10 RCTs were included, with 6 RCTs contributing to the meta-analysis. The statistical analysis indicated significant improvements in the Fugl-Meyer Assessment of Upper Extremity (FMA-UE) (MD 3.49, 95% CI 1.43 to 5.55; P=0.0009), the manual function test (MD 2.64, 95% CI 1.78 to 3.49; P<0.00001), and the box and block test (MD 1.02, 95% CI 0.16 to 1.88; P=0.02). Subgroup differences were observed in the FMA-UE, manual function test, and box and block test. Conclusions: Moderate-quality evidence supports combined VR and MT treatment as a beneficial nonpharmacological approach to improve upper extremity motor function and hand dexterity in patients with stroke. However, the limited number of studies and small sample sizes restrict the generalizability of these findings, highlighting the need for further research. Clinical Trial: PROSPERO CRD42024572150
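To make the pooled mean-difference (MD) and 95% CI figures above concrete, here is a small inverse-variance fixed-effect pooling sketch. The per-trial values are invented placeholders, and the abstract does not specify whether a fixed- or random-effects model was used, so this is illustration only.

```python
import math

# (mean difference, standard error) for three hypothetical RCTs
trials = [(4.0, 1.8), (2.5, 1.5), (3.8, 2.1)]

# Inverse-variance weights: more precise trials (smaller SE) get larger weight.
weights = [1 / se**2 for _, se in trials]
pooled_md = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se
print(f"Pooled MD = {pooled_md:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```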
Background: Monitoring motor symptoms in patients with Parkinson’s disease (PD) presents significant challenges due to the complex nature of symptom progression, variations in medication responses, and the fluctuations that can occur throughout the day. Traditional neurological visits provide only a limited perspective of a patient’s overall condition, with challenges in achieving accurate and objective assessments of symptoms. To bridge this gap, extended monitoring in non-clinical settings could play a critical role in personalizing treatments and improving their efficacy. Wearable devices have emerged as potential tools for assessing PD symptom severity; however, studies integrating both in-clinic and free-living conditions, as well as multi-day monitoring, remain scarce. Defining digital biomarkers that provide valuable insights into motor symptoms could enable comprehensive monitoring and tracking of PD in various contexts, facilitating more precise medication adjustments and the implementation of advanced therapeutic strategies. Objective: The objective of this study is to collect a dataset for the proposal and definition of digital biomarkers of PD motor symptoms using wearable devices. Data will be gathered from PD patients and healthy controls, both in a supervised setting and continuously in a remote free-living context during their normal daily activities. The goal is to identify reliable digital biomarkers that can effectively distinguish PD patients from healthy controls and classify disease severity in both supervised and unsupervised free-living environments. Methods: This article outlines a protocol for an observational case-control study aimed at assessing motor symptoms in PD patients using a smartwatch. The smartwatch will record accelerometer, gyroscope, and physical activity data. Participants will be instructed to perform a series of exercises guided via a smartphone. Measurements will be collected in two settings: a supervised clinical environment, with motor symptom assessments conducted at the beginning and end of the study, and an unsupervised free-living context for one week. In both settings, participants will be required to wear the smartwatch while performing the same set of exercises. In their daily routine, participants will be required to wear the smartwatch continuously throughout the day, removing it only at night for charging. Results: Subject recruitment and data collection started in December 2024 and will proceed until the spring of 2025. The planned sample size includes 15 participants with PD and 15 healthy controls. Conclusions: This study aims to generate a dataset of accelerometer and gyroscope signal data recorded from PD patients at various stages of the disease, alongside data from a control group, enabling robust comparative and impactful analyses. Additionally, the study seeks to develop analytical techniques capable of tracking PD symptoms in real-life scenarios, both in everyday settings and clinical environments. Clinical Trial: ClinicalTrials.gov NCT06817772; https://clinicaltrials.gov/ct2/show/NCT06817772
Background: Although the integration of self-monitored patient data into mental health care offers potential for advancing personalised approaches, its application in clinical practice remains largely underexplored. Capturing individuals' mental health outside the therapy room using Experience Sampling Methods (ESM) may bridge this gap by supporting shared decision-making and personalised interventions. Objective: This qualitative study investigated the perspectives of German mental health professionals on prototypes of ESM data visualisations designed for integration into a digital mental health tool. Methods: Semi-structured interviews were conducted with clinicians on their perceptions of such visualisations in routine care. Results: Using reflexive thematic analysis, three key findings emerged: (1) visualisations were seen as valuable tools for enhancing patient motivation and engagement; (2) simplicity and clarity of visual formats were crucial for usability; and (3) practical concerns, such as integration into clinical workflows, influenced perceived utility. Challenges, including the risk of cognitive overload, were also highlighted. Conclusions: By exploring the opportunities and challenges associated with ESM visualisations, these findings underline the importance of designing digital tools that align with clinical needs while addressing potential barriers to implementation.
Background: Surgeons often face challenges in distinguishing between benign and malignant follicular thyroid neoplasms (FTNs), particularly for small tumors, until diagnostic surgery is performed. Objective: This study aimed to identify size-specific predictors of malignancy risk in FTNs preoperatively. Methods: A retrospective cohort study was conducted at Peking University Third Hospital in Beijing, China, from 2012 to 2023. Patients with a postoperative pathological diagnosis of follicular thyroid adenoma (FTA) or carcinoma (FTC) were included. FTNs were classified into small- and large-sized categories based on the cutoff value of tumor diameter derived from spline regression, which indicated the turning point of malignancy risk. We identified the 5 most important predictors from 22 variables covering demography, sonography, and hormones, using machine learning methods. We also calculated odds ratios (OR) with 95% confidence intervals (CI) for these predictors in both small- and large-sized FTNs. To synthesize existing evidence on this topic, a literature review of clinical guidelines and research papers was conducted. Results: Altogether, we included 1494 FTNs, comprising 1266 FTAs and 228 FTCs. FTNs with a maximum diameter smaller than 3.0 cm were grouped as small-sized (n = 715), while those with larger diameters were categorized as large-sized (n = 779). In the small-sized group, tumors with macrocalcification [OR (95% CI): 2.90 (1.50, 5.60)] or peripheral calcification [4.50 (1.50, 13.00)], and tumors in younger patients [1.33 (1.05, 1.69)], showed a higher malignancy risk. In the large-sized group, tumors presenting a nodule-in-nodule appearance [3.30 (1.30, 7.90)] exhibited a higher malignancy risk. In both groups, lower thyroid-stimulating hormone (TSH) levels [small-sized FTNs: 1.49 (1.20, 1.85); large-sized FTNs: 1.61 (1.37, 1.96)] and larger mean diameter [small-sized FTNs: 1.40 (1.10, 1.70); large-sized FTNs: 1.50 (1.20, 1.70)] were associated with malignancy risk. Our review identified a research gap in using tumor size thresholds as a stratification factor for assessing malignancy risk in FTNs before diagnostic surgery, which is not addressed by current clinical guidelines or the existing literature. Conclusions: This study identified size-specific predictors of malignancy risk in FTNs, highlighting the importance of stratified prediction based on tumor size. Clinical Trial: This study has been registered publicly.
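For readers unfamiliar with how size-stratified odds ratios of this kind are obtained, the sketch below fits a logistic model on a synthetic stand-in for the small-sized group and exponentiates the coefficients. Variable names, effect sizes, and data are hypothetical, not the hospital cohort or the study's machine learning pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 700  # placeholder size for the small-sized (<3.0 cm) group
small = pd.DataFrame({
    "macrocalcification": rng.integers(0, 2, n),
    "peripheral_calcification": rng.integers(0, 2, n),
    "age": rng.normal(45, 12, n),
    "tsh": rng.normal(2.0, 1.0, n),
    "mean_diameter": rng.uniform(0.5, 3.0, n),
})
# Synthetic outcome with made-up effects, used only so the model has something to fit.
logit_p = (-3 + 1.0 * small["macrocalcification"] + 1.5 * small["peripheral_calcification"]
           - 0.02 * small["age"] - 0.4 * small["tsh"] + 0.4 * small["mean_diameter"])
small["malignant"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("malignant ~ macrocalcification + peripheral_calcification + age + tsh + mean_diameter",
                  data=small).fit(disp=False)
ors = pd.DataFrame({"OR": np.exp(model.params),
                    "CI_low": np.exp(model.conf_int()[0]),
                    "CI_high": np.exp(model.conf_int()[1])})
print(ors.round(2))  # odds ratios with 95% confidence intervals
```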
Background: Diabetes remains a critical public health issue, affecting millions of individuals and contributing to significant healthcare costs. Patients with diabetes face elevated risks of emergency department (ED) visits post-hospitalization, driven by both clinical factors, such as comorbidities, and social determinants of health (SDoH). Objective: Effective and interpretable predictive models can help identify high-risk patients with diabetes to improve case management post-hospitalization. Methods: This study utilized retrospective cohort data from the University of Alabama at Birmingham Medical Center, focusing on patients with diabetes hospitalized between January 2020 and March 2024. Clinical data and SDoH variables were integrated into logistic regression, decision trees, and XGBoost models to predict ED visits within three months post-hospitalization. Performance metrics, such as area under the curve (AUC), precision, sensitivity, and specificity, were used to evaluate the models. Results: Key predictors of ED visits included past three-month ED visits, insulin usage (aspart and glargine), and SDoH indicators such as the Area Deprivation Index and Social Vulnerability Index. XGBoost demonstrated excellent performance with an AUC of 0.846, outperforming both decision trees and logistic regression, though decision trees provided greater interpretability. Conclusions: This study highlights the importance of integrating clinical and SDoH factors to predict post-hospitalization ED visits, supporting care management in patients with diabetes. While XGBoost provided superior prediction performance, the trade-off between accuracy and interpretability suggests that decision trees may be better suited for contexts where actionable insights are critical for care providers. Future research should address missing SDoH data and explore the real-world applicability and scalability of these models.
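A minimal sketch of the three-model comparison described above (logistic regression, decision tree, and XGBoost, evaluated by AUC) is shown below on synthetic placeholder features. It uses the third-party xgboost and scikit-learn packages and is not the study's modeling pipeline or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier  # third-party xgboost package

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))  # placeholder clinical + SDoH features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0.8).astype(int)  # synthetic ED-visit label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=4),
    "xgboost": XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```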
Background: Qualitative research appraisal faces challenges in systematic reviews due to methodological diversity and human variability in applying assessment tools such as CASP, JBI, and ETQS. While AI shows promise for scaling quality assessments, its reliability in qualitative contexts remains understudied. Existing literature focuses on quantitative systematic reviews, leaving a gap in understanding AI's capacity to interpret nuanced criteria (e.g., policy implications, generalizability) central to qualitative rigor. Objective: To evaluate inter-rater agreement among five AI models (GPT-3.5, Claude 3.5, Sonar Huge, GPT-4, Claude 3 Opus) when assessing qualitative studies using three standardized tools (CASP, JBI, ETQS), and to identify architectural influences on appraisal consistency. Methods: Five AI architectures (proprietary and open-source) appraised three health science qualitative studies using CASP (methodological rigor), JBI (objective-method alignment), and ETQS (contextual integrity). Full-text articles and assessment criteria were provided to the models, and structured outputs were collected for 192 assessments (3 studies × 3 tools × 5 models). Krippendorff’s α was used for inter-rater agreement and Cramer’s V for model alignment (a computational sketch of these statistics follows this abstract); a sensitivity analysis was performed via sequential model exclusion. Results: A systematic affirmation bias was observed, with "Yes" rates ranging from 75.9% (Claude 3 Opus) to 85.4% (Claude 3.5). GPT-4 diverged, with a 59.9% "Yes" rate and 35.9% uncertainty ("Can’t Tell"). For inter-rater agreement, the CASP baseline was α=0.653 (improving by 20% when excluding GPT-4), while ETQS showed the lowest agreement (α=0.376), with maximal disagreements on policy implications (Item 35) and generalizability (Item 36). Proprietary models aligned closely: GPT-3.5 and Claude 3.5 showed near-perfect concordance (Cramer’s V=0.891, p<.001). Conclusions: AI models exhibit tool-dependent reliability, with proprietary architectures enhancing consensus but struggling with contextual criteria. While AI can augment efficiency (e.g., a 20% CASP agreement gain via GPT-4 exclusion), human oversight remains critical for nuanced appraisal. Hybrid frameworks balancing AI scalability with expert interpretation are recommended. Clinical Trial: Not applicable.
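As referenced in the Methods above, here is a brief computational sketch of Krippendorff's α and Cramer's V, assuming appraisal responses coded numerically (e.g., No=0, Yes=1, Can't Tell=2) and using the third-party krippendorff package. The rating matrix is an illustrative placeholder, not the study's 192 assessments.

```python
import numpy as np
import pandas as pd
import krippendorff  # third-party inter-rater reliability package
from scipy.stats import chi2_contingency

# Rows = AI models (raters), columns = appraisal items (units); placeholder codes.
ratings = np.array([
    [1, 1, 0, 1, 2, 1],
    [1, 1, 0, 1, 1, 1],
    [1, 0, 0, 1, 2, 1],
    [2, 1, 0, 2, 2, 1],
    [1, 1, 0, 1, 1, 1],
])
alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")

def cramers_v(x, y):
    """Cramer's V between two raters' categorical responses."""
    table = pd.crosstab(pd.Series(x), pd.Series(y))
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

print(f"Cramer's V (model 1 vs model 2) = {cramers_v(ratings[0], ratings[1]):.3f}")
```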
Background: Cancer survivors often experience declining engagement in digital health interventions (DHIs). However, predictors of engagement with provider-guided DHIs remain unclear. Nurse WRITE, an effective 8-week nurse-directed symptom management DHI, offers an opportunity to identify factors influencing engagement and enhance intervention efficacy evaluation. Objective: This study aims to (1) understand engagement phenomena (dimensions, influencing factors, and challenges) and (2) assess the relationship between engagement and patient symptom control in Nurse WRITE. Methods: The study included 68 women with recurrent ovarian cancer randomized to the Nurse WRITE arm of a 3-arm symptom management trial. We analyzed socio-affective and cognitive engagement through message board data and behavioral engagement using website usage data. Regression analyses examined patient characteristics, engagement, and symptom control perceptions. Through content analysis, we explored participant challenges and activities before disengagement. Results: Regarding influencing factors, higher education was associated with a 22% increased likelihood of engagement (p = 0.04). Education positively influenced cognitive and socio-affective engagement, including total count of cognitive classes (p = 0.01) and total word count (p = 0.03), with marginal associations for socio-affective classes (p = 0.07). Comorbidities tended to reduce both socio-affective (p = 0.06) and cognitive class counts (p = 0.07). Regarding behavioral engagement, education increased the odds of completing an extra symptom care plan by 23% (p = 0.02) and a plan review by 29% (p = 0.02). We observed a trend that higher symptom severity increased the odds of completing an additional plan review by 21% (p = 0.09). The most common engagement challenges included worsening health and treatment, busy family life, and website difficulties. Moderate- and low-engagers also experienced confusion about the intervention timeline and process. Among low engagers, 63% discontinued communication at specific intervention phases: introduction (33%), symptom representational assessment (21%), and goal setting and planning (21%).
At the end of the intervention, improved symptom control was associated with higher overall engagement (p = 0.02), a higher frequency of cognitive engagement classes (p = 0.02), a higher average question completion percentage (p = 0.01), a higher frequency of socio-affective classes (p = 0.01) and greater total word count (p = 0.01), as well as more completed symptom care plans (p = 0.04). Conclusions: Participant education level significantly influenced Nurse WRITE engagement across socio-affective, cognitive, and behavioral dimensions. Comorbidity and symptom severity warrant further investigation. Future provider-guided DHIs should employ additional strategies to engage less well-educated participants, address challenges such as health issues and family activities, and re-engage participants during critical phases. Our findings underscore the importance of meaningful engagement across socio-affective, cognitive, and behavioral dimensions in Nurse WRITE, informing future DHIs to aim for a balance of protocol adherence and flexibility to enhance engagement and improve outcomes.
Background: Intrauterine devices (IUDs) are safe and effective contraceptive therapies that are also used for the treatment of heavy menstrual bleeding, endometrial hyperplasia, and early-stage endometrial cancer. Barriers to insertion of IUDs in the outpatient setting are predominantly due to patient discomfort, and there is little consensus on effective analgesic strategies to address this. Virtual reality (VR) has demonstrated moderate benefits in acute pain management and has been explored for similar gynaecological procedures, including outpatient hysteroscopy, with some promising results. Objective: To explore the effectiveness of VR at improving patient pain and anxiety during outpatient IUD insertion. Methods: This randomised controlled trial compared the use of a VR headset to standard care during IUD insertion in the outpatient clinic setting. Outcomes measured were patient-reported pain and anxiety. Secondary outcomes included clinician-reported ease of insertion and time required to complete the procedure. Results: A total of 70 patients were recruited, with 34 randomised to control and 36 randomised to VR headset use. Patients with VR headsets reported a pain score of 5.5 +/- 3.2 during IUD insertion, which was not significantly different to 4.3 +/- 3.2 for the control group. Anxiety scores during the procedure were 4.0 +/- 3.0 in the VR group, compared to 4.8 +/- 3.5 in the control group, which was also not significantly different. Anxiety was the most significant predictor of pain, and this in turn significantly increased insertion time (p <0.001). Patients who did respond to and benefit from VR use had significantly lower baseline anxiety than those who did not (p <0.05). Conclusions: The use of VR headsets did not significantly alter the pain or anxiety experienced by patients during IUD insertion; however, satisfaction and willingness to recommend VR to others were high, which may suggest other benefits to its use. Additionally, pre-procedural anxiety appears to have a significant adverse impact on pain scores and on the ability of patients to benefit from the VR headsets. This importantly contributes to the previously ambiguous data regarding VR use for gynaecological procedures and highlights anxiety reduction prior to procedures as an important avenue for further research to improve pain and patient experience. Clinical Trial: Australia and New Zealand Clinical Trial registration: ACTRN12622000088741p. https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=383191
Background: This study highlights TikTok's influence in shaping youth perceptions of nicotine pouches as trendy and relatively harmless, underlining the need for more restrictions on the content that is so readily available to youth. Objective: This study employs a qualitative descriptive design to explore how nicotine pouches (specifically Zyns) are represented on TikTok. Methods: A total of 200 TikTok posts were screened under the #zyn, #zyns, and #nicotinepouch hashtags, with 132 being analyzed. Posts were analyzed using Braun and Clarke's thematic analysis approach. Collaborative coding ensured reliability and identified key themes in the content. Results: Five themes emerged: (1) The Zyn Movement; (2) "Boy Heaven"; (3) "Unintended Negative Consequences"; (4) Product Design: "Life doesn't need to stop"; (5) Physical Benefits: "It's like IcyHot for your mouth." Overall, the content heavily condoned and normalized nicotine pouch use, with males disproportionately represented as the primary users of nicotine pouches. Conclusions: This study highlights TikTok's influence in shaping youth perceptions of nicotine pouches as trendy and relatively harmless. This underlines the need for more restrictions on the content that is so readily available to youth.
Background: Large language models (LLMs) continue to enjoy enterprise-wide adoption in healthcare while evolving in number, size, complexity, cost, and, more importantly, performance. Performance benchmarks play a critical role in their ranking across community leaderboards and subsequent adoption. Objective: Given the small operating margins of healthcare organizations and growing interest in LLMs and conversational AI, there is an urgent need for objective approaches that can assist in identifying viable LLMs without compromising their performance. The objective of the present study is to generate a taxonomy portrait of medical LLMs (N = 33) whose domain-specific and domain-nonspecific multivariate performance benchmarks were available from the Open-Medical LLM and Open LLM leaderboards on Hugging Face. Methods: Hierarchical clustering of multivariate performance benchmarks is used to generate taxonomy portraits revealing inherent partitioning of the medical LLMs across diverse tasks. While the domain-specific taxonomy is generated using nine performance benchmarks related to medicine from the Hugging Face Open-Medical LLM initiative, the domain-nonspecific taxonomy is presented in tandem to assess performance on a set of six benchmarks on generic tasks from the Hugging Face Open LLM initiative. Subsequently, the non-parametric Wilcoxon rank-sum test and linear correlation are used to assess differential changes in the performance benchmarks between two broad groups of LLMs and potential redundancies between the benchmarks. Results: Two broad families of LLMs with statistically significant differences (α = 0.05) in performance benchmarks are identified for each of the taxonomies. Consensus in their performance on the domain-specific and domain-nonspecific tasks revealed inherent robustness of these LLMs across diverse tasks. Subsequently, statistically significant correlations between performance benchmarks revealed inherent redundancies, indicating that a subset of these benchmarks may be sufficient for assessing the domain-specific performance of medical LLMs. Conclusions: Understanding the medical LLM taxonomies is an important step in identifying LLMs with similar performance while aligning with the needs, economics, and other demands of healthcare organizations. While the focus of the present study is on a subset of medical LLMs from Hugging Face, enhanced transparency of performance benchmarks and economics across a larger family of medical LLMs is needed to generate more comprehensive taxonomy portraits, accelerating their strategic and equitable adoption in healthcare. Clinical Trial: Not applicable
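A hedged sketch of the kind of workflow described above: hierarchical (Ward) clustering of per-model benchmark vectors, a Wilcoxon rank-sum comparison between the two resulting families, and a correlation check for redundant benchmarks. The benchmark matrix is randomly generated, not the Hugging Face leaderboard data, and the study's exact clustering settings are not specified in the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import ranksums

rng = np.random.default_rng(7)
benchmarks = rng.uniform(0.3, 0.9, size=(33, 9))   # placeholder: 33 LLMs x 9 medical benchmarks

# Ward linkage on the multivariate benchmark profiles, cut into two broad families.
Z = linkage(benchmarks, method="ward")
families = fcluster(Z, t=2, criterion="maxclust")

# Non-parametric rank-sum comparison of one benchmark between the two families.
group_a = benchmarks[families == 1, 0]
group_b = benchmarks[families == 2, 0]
stat, p = ranksums(group_a, group_b)
print(f"Rank-sum statistic = {stat:.2f}, p = {p:.3f}")

# Pairwise correlations between benchmarks to flag potential redundancy.
corr = np.corrcoef(benchmarks, rowvar=False)
print(np.round(corr, 2))
```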
Background: Despite the known psychosocial challenges associated with supporting a loved one using alcohol and other drugs (AOD), there is a scarcity of mental health and wellbeing interventions for affected family members and friends (AFFMs). Objective: This pilot study examines the usability, acceptability, and feasibility of the Family and Friend Support Program (FFSP; ffsp.com.au), a world-first, evidence-based online resilience and wellbeing program designed with and for people caring for someone using AOD. Methods: In November-December 2021, participants across Australia completed a baseline online cross-sectional survey that assessed the impact of caring for a loved one using AOD (adapted Short Questionnaire for Family Members Affected by Addiction) and distress levels (Kessler-10 Psychological Distress Scale). Following baseline, participants were invited to interact with the FFSP over 10 weeks. Post-program and follow-up surveys (10 and 14 weeks post-baseline, respectively) and semi-structured interviews assessed the usability and acceptability of the program as well as help-seeking barriers. Results: Baseline surveys were completed by 131 AFFMs, with 37% completing the post-program survey and 24% completing the follow-up survey. On average, K-10 scores fell in the moderate to severe range at baseline. Overall, participants found the FFSP easy to use and reported that it provided relevant, helpful, and validating information. Limitations included low program engagement and high attrition. Conclusions: Overall, the FFSP appears to be a promising mental health intervention for AFFMs. This study builds on existing research finding high levels of distress among AFFMs, whilst highlighting the ongoing barriers to help-seeking. Limitations and future directions for refinement and efficacy evaluation of the FFSP are discussed.
Background: Nursing records are crucial for maintaining patient care quality but impose a significant workload on nurses, diverting attention from direct care. Speech input technology offers hands-free documentation, enabling simultaneous patient care and record-keeping. Despite its advantages, adoption has been slow due to concerns about patient privacy, technical challenges, and integration complexity with electronic medical records. Variability in nursing workflows and system adaptability further hinder implementation. These challenges have resulted in few successful cases being reported. This study examines the application of speech input technology in angiography rooms, which offer controlled environments conducive to testing due to their highly standardized workflows, patient privacy considerations, and screen-free interaction requirements. Objective: This study aims to assess the feasibility of speech input technology in real-world clinical settings, focusing on its usability, effectiveness, and adaptability. Additionally, we seek to identify barriers to broader adoption and propose actionable strategies to overcome these challenges, enabling its integration into routine nursing practices. By addressing these barriers, we aim to pave the way for more efficient and accurate nursing documentation. Methods: This study was carried out at a Japanese university hospital, investigating intraoperative documentation in eight cases of PTCD and TACE. Events were recorded using both an existing voice dialogue system and traditional handwritten methods. A comparative analysis was performed to evaluate the number of events captured by each method. Surveys and structured interviews were conducted with nurses to gather qualitative insights into the system’s usability, practicality, and encountered challenges. To ensure patient safety and minimize workflow disruptions, the experimental setup was carefully designed to integrate devices seamlessly into the clinical environment while respecting existing protocols. Results: Speech input demonstrated effectiveness in capturing 50% to 100% of preoperative events and 40% to 100% of intraoperative events. However, its application in the postoperative phase was less effective, recording only 0% to 12% of events. Postoperative challenges were attributed to increased workload, technical issues, the need for repeated clarifications, and communication difficulties with both patients and staff while using the system. Despite these limitations, speech input showed potential for improving documentation efficiency when appropriately implemented. Conclusions: Key factors for successful application of speech input include ensuring patient privacy, starting with simple records, and incorporating backup documentation methods. Experiments conducted in real clinical settings, rather than simulations, emphasized the importance of designs that consider nurse-system interaction and the surrounding environment. While significant challenges remain, the findings highlight the potential of speech input technology to enhance nursing documentation efficiency and accuracy through optimized system designs and task selection. With continued refinement, this technology could reduce nurse workload and improve care quality.
Background: There is a need for scalable and simple interventions for trauma-exposed people. In this case series, we built on our previous case study and case series findings and further explored the utility and potential effectiveness of a brief novel intervention to reduce the number of past intrusive memories of trauma. The imagery competing task intervention consists of a memory reminder and the visuospatial task Tetris played with mental rotation, targeting one intrusive memory at a time. Here we test remote delivery of the intervention, including guidance from researchers without specialist mental health training, in a sample of women in Iceland with current intrusive memories from trauma occurring several years previously. Objective: In a case series of trauma-exposed women, we aimed to explore whether this brief novel intervention reduces the number of established intrusive memories (primary outcome) and improves general functioning and reduces symptoms of post-traumatic stress, depression and anxiety (secondary outcomes). Acceptability of the intervention, along with adaptations (i.e., delivery by psychology students without specialist mental health training and digital delivery), was explored. Methods: Participants (N=8) monitored the number of intrusive memories from an index trauma (occurring 3-16 years previously) in a daily diary at baseline, during the intervention phase (ranging from three to six weeks), and post-intervention at 1-month and 3-month follow-ups. The intervention was delivered digitally with guidance from clinical psychologists/psychology students. A repeated AB design was used (“A”: pre-intervention baseline, “B”: intervention phase). Intrusions were targeted one by one, creating repetitions of an AB design (i.e., the length of baseline ‘A’ and intervention ‘B’ varied for each memory). Results: The number of intrusive memories decreased for all participants from the baseline phase to the intervention phase, although the reduction was minimal for 2 participants (range 6.3%-93%). The number of intrusive memories continued to decrease for 6 out of 8 participants (58%-100% reduction at 1-month follow-up; 72%-100% reduction at 3-month follow-up).
Symptoms of post-traumatic stress, depression and anxiety were reduced for most participants post-intervention and continued to decrease during the follow-up periods. Functioning improved for 7 of the 8 participants from baseline to post-intervention and continued to improve at the follow-up assessments for three participants. The intervention, delivered digitally and partly by students, was perceived by all participants to be an acceptable way to reduce the frequency of intrusive memories (mean rating 9.5 out of 10). Conclusions: Data from this case series of traumatized women provide preliminary evidence for the effectiveness of this novel brief intervention in reducing intrusive memories of trauma that occurred several years earlier and in improving functioning and reducing core symptom burden. This study will inform a randomized controlled trial of this novel intervention, which may have considerable implications for large-scale clinical management of traumatized populations. Clinical Trial: The study was approved by the National Bioethics Committee in Iceland (No: VSNb2017110046/03.01).
The study was preregistered prior to study start on ClinicalTrials.gov (NCT04209283) on 2020-11-03.
Background: Extraction retraction orthodontic (ERO) practices are commonly used to treat all types of malocclusion. Occasionally, patients express dissatisfaction over previous ERO treatment. Objective: This study investigates the experiences of patients who have had ERO intervention and expressed regret or dissatisfaction with this treatment. Methods: Semi-structured interviews were conducted with patients who had expressed regret over past ERO treatment. Interpretive phenomenological analysis (IPA) was used to derive themes from transcripts. Results: Eleven participants were recruited, gave ongoing informed consent, and participated in the semi-structured interview process. Six major themes were identified through IPA: “ERO Treatment Course”, “Lack of Informed Consent”, “Ocean of Grief and Trauma”, “Multifaceted Health Complaints”, “Finding Solutions and Coping Strategies”, and “Wishing There Was a Better Way”. Participants felt they were not able to give informed consent for ERO for a number of reasons, such as being too young, not being given accurate information on the risks, or being influenced by parents, culture, or the provider. Participants regretted ERO due to a number of multifaceted health complaints, including but not limited to sleep breathing disorders, craniofacial pain patterns, neuropsychobehavioural symptoms, and negative aesthetic outcomes, which they believe resulted from ERO. Conclusions: Patient regret following extraction retraction orthodontics is due to a lack of informed consent and negative health effects. Clinical Trial: CSREB 2022-12-001
Background: Organ donation is a life-saving intervention, yet the demand for organs far exceeds availability. While traditional awareness campaigns have attempted to address this gap, their reliance on fear-based messaging may limit effectiveness, particularly among younger audiences. Positive messaging strategies remain underexplored, especially in paediatric health education, despite their potential to foster long-term pro-donation attitudes. Objective: This study aims to evaluate the role of positive messaging in organ donation education. It examines how optimistic and empowering narratives influence attitudes toward organ donation, focusing on pediatric audiences. The research also assesses the effectiveness of The Orgamites Mighty Education Programme in promoting organ donation awareness. Methods: A literature review was conducted using predefined search strategies across multiple academic databases. The review focused on studies published between 2000 and 2025, analyzing different types of positive messaging and their impact on organ donation attitudes and behaviours. Inclusion criteria required studies to be peer-reviewed, focused on positive messaging, and written in English. Results: The findings indicate that positive, gain-framed messages are more effective than fear-based or loss-framed messaging in influencing attitudes toward organ donation. The study highlights the potential of narrative-driven, child-friendly campaigns like The Orgamites Mighty Education Programme to engage young audiences, foster open discussions, and reduce misconceptions. The concept of "Mighty Organs" reframes organ donation in a hopeful and empowering manner, making it more relatable and engaging. Conclusions: Positive messaging can transform organ donation education by shifting the focus from fear to empowerment. Programmes like The Orgamites Mighty Education Programme provide a promising model for fostering long-term pro-donation attitudes among children and adolescents. Future research should assess the long-term impact of such interventions on donor registration rates and explore strategies for integrating them into educational curricula and community outreach initiatives.
Background: Unplanned extubation (UEX) serves as a crucial indicator for monitoring the quality of nursing care and can result in irreversible harm, impacting both adults and children. In adult Intensive Care Units (ICUs), the incidence of UEX generally ranges from 7% to 18%[23], with endotracheal tube extubation being associated with more severe consequences[24], occurring at a rate of 0.2%-14.6%[25]. For children, literature reports UEX occurrence rates of 1.18%-8.24%[18] and 3.7‰-5.0‰[6], with pediatric ICU rates ranging from 0.43% to 0.79%[1]. The occurrence of unplanned extubation adverse events can impact a child's treatment and surgical outcomes when re-intubation is not immediately feasible, leading to prolonged hospital stays, increased caregiver-patient conflicts, and compromised nursing quality. Objective: To analyze the high-risk factors of unplanned extubation in children and implement appropriate nursing strategies to reduce the incidence of adverse events related to unplanned extubation, ensuring the clinical safety of pediatric patients. Methods: A retrospective study was conducted on pediatric patients who underwent surgery in general pediatrics from January 2018 to December 2023 and experienced unplanned extubation during hospitalization, excluding cases caused by mental illness. Results: During the perioperative period, a total of 1977 days of intubation were recorded, including 1079 days with urinary catheters, 68 days with gastric tubes, 768 days with postoperative wound drainage tubes, 46 days with peripheral central venous catheters, and 8 days with CVCs. There were 13 instances of unplanned extubation, comprising 8 urinary catheters, 3 gastric tubes, and 2 postoperative wound drainage tubes. The rate of unplanned extubation was 6.58‰, and the re-intubation rate was 15.38%. Conclusions: The study indicates that unplanned extubation is associated with factors such as the child's own conditions, changes in body position comfort, methods of catheter fixation, timing and date of extubation, and the extent of nurse rounds and education. Therefore, enhancing nurses' awareness of risk prevention, adopting effective restraint measures, adjusting the frequency of adhesive fixation, and improving fixation equipment are key to reducing the risk of unplanned extubation in general pediatrics. Effective nursing strategies significantly lower the risk of unplanned extubation.
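As a quick sanity check, the reported per-mille rate and re-intubation percentage can be reproduced from the raw counts in the abstract. The sketch below assumes the UEX rate is computed per 1000 catheter-days and that the 15.38% re-intubation rate corresponds to 2 of the 13 events (an inferred count, not stated explicitly).

```python
# Sketch: reproducing the reported unplanned-extubation (UEX) rates from the
# abstract's raw counts (assumptions noted above; illustrative only).
catheter_days = 1977          # total perioperative catheter-days recorded
uex_events = 13               # unplanned extubations observed
reintubations = 2             # inferred from the reported 15.38% re-intubation rate

uex_rate_per_mille = uex_events / catheter_days * 1000
reintubation_rate = reintubations / uex_events * 100

print(f"UEX rate: {uex_rate_per_mille:.2f} per 1000 catheter-days")   # ~6.58
print(f"Re-intubation rate: {reintubation_rate:.2f}%")                # ~15.38
```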
Background: Blue Light services, such as police, paramedics, firefighters and other emergency responders, work under challenging conditions commonly associated with unpredictable schedules, complex shift patterns, and high stress levels. These challenges negatively impact their mental wellbeing, physical health, and job performance, leading to potential health concerns such as fatigue, poor sleep, long-term physical disabilities, anxiety, and poor work-life balance. Existing digital interventions fail to address the complex needs of these shift workers because they focus solely on conventional 9-to-5 schedules. This gap highlights the need for tailored interventions that integrate shift management with health and wellbeing support for professionals with shift schedules. Objective: The objective of the study is to design and develop a co-created digital wellbeing application that integrates a shift management system. The designed solution aims to address the unique scheduling challenges and associated health concerns faced by Blue Light personnel, so as to improve their wellbeing, their organisation’s operational efficiency, and job satisfaction. Methods: A qualitative approach was employed for this research by incorporating Design Science Research Methodology (DSRM) with co-creation and user-centred design (UCD) principles. Five co-creation sessions, each lasting two hours, were conducted with key stakeholders, including police officers and shift management experts. Data from these sessions were analysed using thematic analysis to identify recurring patterns, user needs, and design functionalities. Results: The thematic analysis identified stressors such as inconsistent shift scheduling, limited flexibility, and inadequate support for work-life balance. Participants highlighted the need for a system capable of managing high-volume schedules with real-time adaptability. A prototype was developed that included functionalities such as bulk creation and modification of schedules, customisation of individual shift times, visualisation tools for monitoring and identifying shift trends, and reusable base patterns for efficiency. The research demonstrated that integrating schedule management into a wellbeing app could provide personalised support, such as hydration reminders and relaxation techniques, based on users’ shift schedules. The prototype showed significant potential to reduce stress, enhance scheduling adaptability, and support the health and wellbeing of personnel in high-stress professions. Conclusions: The co-created digital wellbeing intervention addresses major gaps in existing digital interventions by combining detailed shift management with health and wellbeing support tailored to the needs of Blue Light personnel. The study shows the importance of stakeholder collaboration in designing robust and effective solutions for high-stress professions. Future work will expand the sample size, explore scalability to other emergency services, and incorporate longitudinal assessments to evaluate the system’s impact on reducing stress and improving overall wellbeing. By bridging the gap between operational needs and health requirements, this study offers a framework for developing digital solutions that enhance the quality of life and productivity of essential workers.
Background: Social frailty poses a potential risk even for relatively healthy older adults, necessitating the development of early detection and prevention strategies. Recently, consumer-grade wearable devices have gained attention for their ability to provide accurate sensor data, and digital biomarkers for social frailty screening could be calculated from these data. Objective: The objective of this study was to explore digital biomarkers associated with social frailty using sensor data recorded by Fitbit devices and to evaluate their relationship with health outcomes in older adults. Methods: This cross-sectional study was conducted in 102 community-dwelling older adults. Participants attending frailty prevention programs wore devices of the Fitbit Inspire series on their non-dominant wrist for at least seven consecutive days, during which step count and heart rate data were collected. Standardized questionnaires were used to assess physical function, cognitive function, and social frailty, and based on the scores, participants were categorized into three groups: robust, social pre-frailty, and social frailty. The sensor data were analyzed to calculate nonparametric and extended cosinor rhythm metrics, along with heart rate-related metrics. Results: The final sample included 86 participants who were categorized as robust (n = 28), social pre-frailty (n = 39), and social frailty (n = 19). The mean age of the participants was 77.14 years (SD 5.70), and 90.6% were women (n = 78).
Multinomial logistic regression analysis revealed that the step-based rhythm metric, Intradaily Coefficient of Variation (ICV.st), was significantly associated with social pre-frailty. The heart rate metrics, including the delta resting heart rate (dRHR) and UpMesor.hr, showed significant associations with both social frailty and social pre-frailty. Furthermore, the standard deviation of the heart rate (HR.sd) and alpha.hr were significant predictors of social pre-frailty. Specifically, dRHR, defined as the difference between the overall average heart rate and resting heart rate (RHR), exhibited significant negative associations with social pre-frailty (odds ratio [OR] = 0.82, 95% confidence interval [CI] 0.68-0.97, p = 0.024) and social frailty (OR = 0.74, 95% CI 0.58-0.94, p = 0.015). Furthermore, analysis using a linear regression model revealed a significant association of the ICV.st with the Word List Memory (WM) score, a measure of cognitive decline (β = -0.04, p = 0.024). Conclusions: This study identified associations of novel rhythm and heart rate metrics calculated from the step count and heart rate recorded by Fitbit devices with social frailty. These findings suggest that consumer-grade wearable devices, which are low-cost and accessible, hold promise as tools for evaluating social frailty and its risk factors through enabling calculation of digital biomarkers. Future research should include larger sample sizes and focus on the clinical applications of these findings. Clinical Trial: UMIN-CTR
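Of the metrics named above, only dRHR is explicitly defined (the difference between the overall average heart rate and the resting heart rate); the exact formula for ICV.st is not given. The sketch below is a minimal illustration of how such Fitbit-derived metrics could be computed from hour-level exports, assuming a simple within-day coefficient-of-variation reading of "intradaily coefficient of variation"; the function names and synthetic data are hypothetical.

```python
# Minimal sketch of two wearable-derived metrics named in the abstract, under
# assumptions about their exact definitions (only dRHR is defined in the text):
#   dRHR   = overall mean heart rate minus resting heart rate
#   ICV.st = approximated here as the mean within-day coefficient of variation
#            of hourly step counts (a hypothetical reading of "intradaily CV")
import numpy as np
import pandas as pd

def delta_resting_hr(hr: pd.Series, resting_hr: float) -> float:
    """dRHR: difference between overall average heart rate and resting HR."""
    return float(hr.mean() - resting_hr)

def intradaily_cv_steps(hourly_steps: pd.Series) -> float:
    """Mean per-day CV (SD / mean) of hourly step counts (assumed definition)."""
    daily_cv = hourly_steps.groupby(hourly_steps.index.date).agg(
        lambda x: x.std() / x.mean() if x.mean() > 0 else np.nan
    )
    return float(daily_cv.mean())

# Toy usage with a week of synthetic hourly data
idx = pd.date_range("2024-01-01", periods=7 * 24, freq="h")
steps = pd.Series(np.random.poisson(300, len(idx)), index=idx)
hr = pd.Series(np.random.normal(72, 8, len(idx)), index=idx)
print(delta_resting_hr(hr, resting_hr=62.0), intradaily_cv_steps(steps))
```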
Background: Patients undergoing maintenance hemodialysis often suffer from weakness due to prolonged dialysis treatment, such as a continuous decline in muscle strength, which affects physiological, psychological, and social functioning to varying degrees. Effective non-pharmacological interventions can improve their mental health and quality of life. Objective: This study aimed to investigate the effects of sitting Baduanjin combined with acupoint massage on improving the frailty status of patients undergoing maintenance hemodialysis (MHD) and evaluate whether it can significantly improve their physical activity, alleviate depressive emotions, and comprehensively improve their quality of life. Methods: This study included 114 patients treated with MHD at the Affiliated Hospital of Chengdu University of Traditional Chinese Medicine between March 2024 and October 2024. A randomized controlled study design was used. Patients who met the inclusion and exclusion criteria were randomly divided into three groups: a control group that received only conventional hemodialysis and care, an acupoint massage group that received acupoint massage in addition to the control group treatment, and a combined group that received sitting Baduanjin and acupoint massage in addition to the control group treatment. Clinical efficacy was comprehensively evaluated by comparing FRAIL scale scores, grip strength, the 10-repetition sit-to-stand test, Self-Rating Depression Scale scores, and quality of life (SF-36) scores among the three groups before and after eight weeks of treatment. Statistical analysis was conducted using the Statistical Package for the Social Sciences software (version 25.0). Results: The study began enrollment in September 2024. To date, 114 participants have finished the baseline questionnaires. Conclusions: The results of this study can provide a scientific basis for the future treatment of patients with asthenia undergoing hemodialysis. Clinical Trial: This study was registered under the registration number ITMCTR2024000798 on November 12, 2024.
Background: The study examines the content of illicit drug advertisements on X (formerly Twitter) in Thailand. Over the past decade, social media platforms have been utilized to facilitate online substance trade, leveraging their anonymity, ease of access, and user-friendly interfaces. Despite the growth in use of such platforms for drug distribution, there is a paucity of research conducted in Thailand aimed at understanding the types of substances, marketing strategies, and public health risks associated with this phenomenon. Objective: To inductively explore the content of tweets advertising drugs in the Thai language. Methods: Tweets advertising psychoactive substances in the Thai language were collected manually between April and July 2024. A qualitative content analysis was performed on the collected tweets. Tweets were coded based on five themes: types of substances advertised, marketing strategies, delivery methods, number of substances per tweet, and location references. Intercoder reliability for each theme was assessed using Krippendorff’s Alpha, achieving substantial agreement across most codes. Results: During the data collection period, 3,832 tweets advertising drugs were collected. Most tweets (63.3%) mentioned five or more substances, with depressants like opioids (73.3%), antihistamines (62.5%), and benzodiazepines (52.4%) being the most frequently advertised. Common marketing techniques included direct contact information (74.3%) and fast delivery (31.7%). Delivery methods primarily involved courier services, but tweets tended to offer multiple options at once. Tweets that mentioned at least one sex-performance enhancer frequently (77.7%) advertised it in combination with a benzodiazepine. Conclusions: The results of this study suggest that a large number of substances are advertised for sale in Thai-language X chatter. This digital form of drug trading is facilitated by the possibility of direct messaging and the large number of courier services operating in Thailand. Our findings call for the development of real-time monitoring systems harnessing drug-related data from social media to inform public health practitioners about emerging substances and trends and address the challenges posed by the digital drug trade.
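The intercoder reliability check described above can be reproduced for any single theme code from a coder-by-tweet matrix. The sketch below uses the open-source `krippendorff` Python package, which is an assumption about tooling (the abstract does not state which software was used); the binary ratings are hypothetical.

```python
# Sketch: computing intercoder agreement with Krippendorff's alpha for one
# binary theme code (e.g., "mentions fast delivery"). Rows = coders,
# columns = tweets; np.nan would mark tweets a coder did not rate.
import numpy as np
import krippendorff  # third-party package; an assumed tool, not the authors' software

coder_a = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1]
reliability_data = np.array([coder_a, coder_b], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```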
Background: The six-minute walk test (6MWT) measures exercise capacity in cardiorespiratory, neurological and musculoskeletal conditions. It consists of observing how far a patient can walk in 6 minutes and is usually performed in a corridor in a clinic. During the COVID-19 pandemic, as healthcare systems cancelled nonurgent outpatient appointments, many tests were performed remotely. At the Oxford University Hospitals, cardiac patients were asked to use the open-source Timed Walk app to perform the 6MWT in their community, as a substitute for the regular in-clinic tests. Objective: (1) To assess participation and user acceptance of the Timed Walk app, (2) to assess the clinical usefulness of the app within the context of the pandemic, and (3) to validate and improve the algorithms that compute the walked distance from the sensor data collected by the phone. Methods: Consenting cardiac patients were invited to perform a 6MWT outdoors using the app at least once a month and to report the results at periodic telephone calls and visits. Any clinical decision taken based on the results of the app was recorded. Patients were also sent a usability and acceptance questionnaire, and 10 of the respondents were selected for interviews. A group of 12 volunteers also provided sensor data collected by the app, together with trundle-wheel reference distances, for 10 tests, 5 of which were intentionally performed without following the instruction to walk over straight paths. Results: The study ran between 2021-09-29 and 2022-12-30. Fifty-five participants consented (25 female; age 44.80 ± 17.49 years).
1) Twenty-four patients performed one or more tests per month; the average number of 6MWTs per month per patient was 1.14 ± 1.20. Usability was rated high on all dimensions; acceptance was high except for intention to use the app beyond the study. Thematic analysis of the interviews provided useful insights across three themes.
2) 741 events were logged. Of 51 medical decisions, 24%, involving 23% of the 48 patients who performed at least one test, were influenced by the app-based 6MWT. Between 2018 and 2023, a cohort of 49 patients performed 63 6MWTs in the clinic (18 in 2021), whereas the same patients performed 605 tests using the app in 2021 alone.
3) Sensor data were sent for 107 tests, 52 of which did not follow instructions. The difference between the reference distance and the app-estimated distance was within the minimal clinically significant difference for tests performed following instructions (limits of agreement: -27 m, 34 m). Anonymized data have been made publicly available. Conclusions: The use of the Timed Walk app for remote 6MWTs allowed clinicians to obtain objective indications of patient status during the pandemic. The distance estimated by the app is accurate when patients follow instructions. Motivation to use the app can vary depending on internal factors, such as attitudes and health status, and external factors, such as weather, fit into everyday life, how the data are used by clinicians, and forgetfulness. Clinical Trial: ClinicalTrials.gov NCT05096819
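The limits of agreement quoted above come from a Bland-Altman style comparison of app-estimated and trundle-wheel reference distances. A minimal sketch of that computation is shown below; the distance values are illustrative, not the study's data.

```python
# Sketch: Bland-Altman bias and 95% limits of agreement between app-estimated
# distance and the trundle-wheel reference distance (illustrative values only).
import numpy as np

reference_m = np.array([410, 455, 380, 500, 430, 470, 395, 520])  # trundle wheel
app_m       = np.array([405, 460, 372, 510, 428, 478, 390, 515])  # Timed Walk app

diff = app_m - reference_m
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:.1f} m, 95% limits of agreement = ({loa_low:.1f}, {loa_high:.1f}) m")
```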
Background: Latina adolescents report low levels of moderate-vigorous physical activity (MVPA) and high lifetime risk of lifestyle-related diseases. There is a lack of MVPA interventions targeted at this demographic despite documented health disparities. Given their high rates of mobile technology use, interventions delivered through mobile devices may be effective for this population. Objective: The current paper examines the efficacy of the Chicas Fuertes intervention in increasing MVPA over six months in Latina adolescents. Methods: Participants were Latina adolescents (ages 13-18) in San Diego County who reported being underactive (<150 minutes/week of MVPA). All participants received a wearable fitness tracker (Fitbit Inspire HR); half were randomly assigned to also receive the multimedia intervention. Intervention components included a personally tailored website, personalized texting based on Fitbit data, and social media. The primary outcome was change in minutes of weekly MVPA from baseline to six months (6m), measured by ActiGraph accelerometers and the 7-Day Physical Activity Recall Interview. Changes in daily steps using Fitbit devices were also examined to test intervention efficacy. Results: Participants (N=160) were 15.3 years old on average and mostly second generation in the U.S. For ActiGraph-measured MVPA, participants in the Intervention group (N=83) increased from a median of 0 min/week at baseline (IQR 26) to 64 min/week at 6m (IQR 28) compared to Control participants, who showed increases from a median of 0 at baseline (IQR 24) to 41 min/week at 6m (IQR 21) (p<0.05). Self-reported MVPA increased in the Intervention group from a median of 119 min/week at baseline (IQR 122.5) to 147 min/week at 6m (IQR 85) compared to Control participants, who showed increases from a median of 120 (IQR 186.25) at baseline to 124 min/week at 6m (IQR 69) (p<0.05). Steps also increased in both groups, with the Intervention group showing significantly greater increases (p<0.05). Conclusions: This intervention was successful in using a tailored technology-based strategy to increase MVPA in Latina adolescents and provides a promising approach for addressing a key health behavior. Given the scalable technology used, future studies should focus on broad-scale dissemination to address health disparities. Clinical Trial: ClinicalTrials.gov NCT04190225. Registered on November 20, 2019.
Background: Type 1 Diabetes (T1D) is one of the most common chronic conditions diagnosed during childhood. When a child is first diagnosed with T1D, the parent is the primary manager of the condition; responsibility begins to shift to the child during adolescence. Resources that help children with T1D begin to learn about self-management during pre-adolescence (8-12 years) allow them to practice diabetes management skills. Applying serious game mechanisms to virtual reality (VR) creates an opportunity for pre-adolescents with T1D to practice managing diabetes in a safe, virtual environment. Objective: The goal of this paper was to interview clinical staff to identify themes of T1D management skills for pre-adolescents and to utilize their expertise for designing a skill-building VR game. Methods: We conducted 30-minute interviews with 9 clinical staff who manage pediatric patients with T1D to better understand their perspectives about the transition process and their experiences with engaging with diabetes technology. To identify common themes and ideas, the interview data were transcribed, and a pattern coding technique and thematic analysis were applied. Results: Three common themes emerged from the data. The first theme was that peers can influence medical adherence. Second, youth naturally seek autonomy and independence during the pre-adolescent years. Last, parental interactions impact transition style. Clinical staff suggested personalized gaming options, multi-functionality of avatars, skills, scenarios, and data sharing. Conclusions: Serious games for VR during the pre-adolescent years may allow youths to build a skill set and open conversations on how virtual reality technology can promote adherence to personalized treatment plans in pre-adolescent youth with T1D. Our results support including a first-person avatar interacting with other characters, which would add components of autonomy and relatedness to the skill-building competency featured in existing T1D serious games.
Among the countless decisions healthcare providers make daily, many clinical scenarios do not have clear guidelines, despite a recent shift towards the practice of evidence-based medicine. Even in clinical scenarios where guidelines do exist, they do not universally recommend one treatment option over others. As a result, the limitations of existing guidelines presumably create inherent variability in provider decision-making, and the corresponding distribution of provider behavior differs across clinical scenarios. We define this variability as a marker of provider uncertainty: scenarios with a wide distribution of provider behaviors have more uncertainty than scenarios with a narrower distribution. We propose four exploratory analyses of provider uncertainty: (1) field-wide overview; (2) subgroup analysis; (3) provider guideline adherence; and (4) pre-/post-intervention evaluation. We also propose that uncertainty analysis can be used to guide interventions toward the clinical decisions with the highest provider uncertainty and therefore the greatest opportunity to improve care.
Background: Wearables are increasingly used in pediatric cardiology for heart rate (HR) monitoring due to advantages over traditional heart rate monitoring, such as prolonged monitoring time, increased patient comfort and ease of use. However, their validation in this population is limited. Objective: This study investigates the HR accuracy and validity of two wearables, the CardioWatch bracelet and Hexoskin shirt, in children attending the pediatric cardiology outpatient clinic. In addition, factors that influence HR accuracy, the Hexoskin shirt's arrhythmia detection efficacy, and patient satisfaction are investigated. Methods: Children indicated for a 24h-Holter ECG were equipped with a 24h-Holter ECG (gold standard), together with both wearables. HR accuracy was defined as the percentage of HRs within 10% of Holter values, and agreement was assessed using Bland-Altman analysis. Subgroup analyses were conducted based on body mass index (BMI), age and time of wearing, among other factors. A blinded pediatric cardiologist analysed Hexoskin shirt data for rhythm classification. Patient satisfaction was measured using a 5-point Likert-scale questionnaire. Results: Thirty-one participants (mean age 13.2±3.6 years; 45% female) and thirty-six participants (mean age 13.3±3.9 years) were included for the CardioWatch and Hexoskin measurements, respectively. Mean accuracy was 84.8% (±8.7%) for the CardioWatch and 87.4% (±11.0%) for the Hexoskin shirt. Hexoskin shirt accuracy was notably higher in the first 12 hours (94.9±7.4%) compared to the latter 12 hours (80.0±16.7%, P<.001). Higher accuracy was observed at lower HRs (low vs. high HR: CardioWatch: 90.9±9.3% vs. 79.0±10.6%, P<.001; Hexoskin shirt: 90.6±14.0% vs. 84.5±11.8%, P<.001). Both wearables demonstrated good agreement in their HR measurement with Holter readings (CardioWatch bias: –1.4 beats per minute [BPM]; 95% Limits of Agreement [LoA]: –18.8 to 16.0. Hexoskin shirt bias: –1.1 BPM; 95% LoA: -19.6 to 17.4). Correct classification of the Hexoskin shirt's rhythm recordings was achieved in 86% (31/36) of cases. Patient satisfaction scores (median [range]) were significantly higher for both the CardioWatch (3.8 [3.5–4.3], P<.001) and Hexoskin shirt (3.7 [3.0–4.0], P<.001) compared to the Holter ECG (2.6 [2.1-3.2]). Conclusions: The Corsano CardioWatch and Hexoskin shirt demonstrate good accuracy in pediatric HR monitoring and provide higher patient comfort than conventional monitoring. Both wearables show good agreement in relation to the gold standard device. However, more research is needed to explore the reasons for inaccuracy during higher heart rates. The Hexoskin shirt also shows potential in arrhythmia detection. While further development is warranted, these wearables show promise in enhancing diagnostics, therapeutic monitoring and patient safety in pediatric cardiology.
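The primary accuracy metric above, the percentage of wearable HR samples within 10% of the time-aligned Holter value, is straightforward to reproduce once the two recordings are matched in time. A minimal sketch with illustrative numbers:

```python
# Sketch of the accuracy metric: share of wearable heart-rate samples falling
# within 10% of the time-aligned Holter value. Arrays are illustrative; real
# data would be minute-level, time-matched HR series.
import numpy as np

holter_bpm   = np.array([78, 82, 90, 110, 95, 70, 65, 120])
wearable_bpm = np.array([80, 81, 98, 100, 96, 72, 66, 135])

within_10pct = np.abs(wearable_bpm - holter_bpm) <= 0.10 * holter_bpm
accuracy = within_10pct.mean() * 100
print(f"HR accuracy: {accuracy:.1f}% of samples within 10% of Holter")
```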
Background: Digital health interventions based on self-management strategies aim to empower users’ self-reliance by utilizing self-monitoring, self-assessment and sensor-based output. The existing variety of digital devices utilizes a wide range of data sources and sensors to collect and monitor users’ output, while little comparative data on parameter reliability and utility are available. Objective: This review aims to address the existing methodological and knowledge gap in understanding which common parameters used in digital health interventions for depression allow precise monitoring and prediction of the course of depression across different modes of digital intervention delivery. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR), the digital databases PubMed, Embase, Cochrane Library and Web of Science Core Collection were searched for literature published between 2021 and 2024. A five-stage framework by Arksey and O’Malley (2005) was implemented to ensure systematic scoping of the literature. The quality of the retrieved studies was assessed using the Downs and Black Instrument and the Mixed Methods Appraisal Tool. Results: Five interdependent categories were defined to best describe common assessment parameters across the literature: 1) physical activity and location, 2) behavioural patterns, 3) physiological data, 4) sleep, and 5) sociability and self-reported assessments. Eleven common clinical measures and self-report assessments were identified across the defined categories, used in combination with digital phenotyping methodology. Conclusions: Synthesis of the results sections of the included studies indicated that predicting depressive symptoms by combining clinical assessment and digital phenotyping is a promising approach for further improving digital interventions. The strongest associations were found in combined approaches using parameters across categories, combining sensor data and self-report assessment.
Background: Mycosis fungoides (MF) is the most prevalent type of cutaneous T-cell lymphoma, with a broad spectrum of clinical and histopathological variants. Among these, pigmented purpuric dermatosis-like MF (PPD-like MF) is an extremely rare subtype that mimics benign pigmented purpuric dermatoses (PPD), posing diagnostic and therapeutic challenges. Objective: To comprehensively review the clinicopathological features, diagnosis, and treatment of PPD-like MF through an analysis of reported cases. Methods: To conduct a thorough review of the existing literature on pigmented purpuric dermatosis-like mycosis fungoides (PPD-like MF), a systematic and comprehensive search strategy was employed. The literature search was performed in August 2024, utilizing the electronic databases MEDLINE (via PubMed) and Google Scholar. Results: Fourteen studies encompassing 21 patients were identified. The mean age of patients was 33.9 years, with a male predominance (76.2%). Lesions predominantly affected the lower extremities (61.9%) and were characterized by erythematous macules, patches, petechiae, and purpuric lesions. Treatment responses varied, with phototherapy (PUVA) and methotrexate being the most effective modalities in the documented cases. Conclusions: PPD-like MF is a rare and challenging variant of MF, requiring a high index of suspicion and careful histopathological evaluation for diagnosis. Awareness of its distinct clinical and pathological features is essential for appropriate management.
Background: Active commuting, such as skateboarding and kickboarding, is gaining popularity as an alternative to traditional modes of transportation like walking and cycling. However, current activity trackers and smartphones, which rely on accelerometer data, are primarily designed to recognize symmetrical locomotive activities (e.g., walking, running) and may struggle to accurately identify the unique push-push-glide motion patterns of skateboarding and kickboarding. Objective: This study utilized machine learning techniques to evaluate the feasibility of classifying skateboard and kickboard commuting behaviors using data from wearable sensors and smartphones. A secondary objective was to identify the most important sensor-derived features for accurate activity recognition. Methods: Ten participants (4 women, 6 men; aged 12-55 years) performed nine activities, including skateboarding, kickboarding, walking, running, bicycling, ascending and descending stairs, sitting, and standing. Data were collected using wearable sensors (accelerometer, gyroscope, barometer) placed on the wrist, hip, and in the pocket to replicate the sensing characteristics of commercial activity trackers and smartphones. The signal processing approach included the extraction of a total of 211 features from 10- and 20-second sliding windows. Random forest classifiers were trained to perform multi-class and binary classifications, including distinguishing skateboarding and kickboarding from other activities. Results: Wrist-worn sensor configurations achieved the highest balanced accuracies for multi-class classification (84-88%). Skateboarding and kickboarding were recognized with sensitivities of 93-99% and 97-99%, respectively. Hip and pocket sensor configurations showed lower performance, particularly in distinguishing skateboarding (49-58% sensitivity) from kickboarding (78% sensitivity). Binary classification models grouping skateboarding and kickboarding into a "push-push-glide" superclass achieved high accuracies (91-95%). Key features for classification included low- and high-frequency accelerometer signals, as well as roll-pitch-yaw angles. Conclusions: This study demonstrates the feasibility of recognizing skateboard and kickboard commuting behaviors using wearable sensors, particularly wrist-worn devices. While hip and pocket sensors showed limitations in differentiating these activities, the broader "push-push-glide" classification achieved acceptable accuracy, suggesting its potential for integration into activity tracker software. Future research should explore sensor fusion approaches to further enhance recognition performance and address the question of energy expenditure estimation. Clinical Trial: N/A
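A minimal sketch of the kind of pipeline described here, windowed feature extraction followed by a random forest classifier, is shown below. The sampling rate, feature set, and synthetic data are illustrative assumptions, not the study's 211-feature configuration.

```python
# Sketch: extract features from fixed-length sliding windows of tri-axial
# accelerometer data and train a random forest to separate a "push-push-glide"
# superclass (skateboard/kickboard) from other activities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FS = 50          # assumed sampling rate (Hz)
WIN = 10 * FS    # 10-second windows

def window_features(acc: np.ndarray) -> np.ndarray:
    """Simple per-axis mean/SD plus signal-magnitude statistics for one (n x 3) window."""
    mag = np.linalg.norm(acc, axis=1)
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0),
                           [mag.mean(), mag.std()]])

# Synthetic stand-in for labeled windows (1 = push-push-glide, 0 = other)
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, WIN, 3))
labels = rng.integers(0, 2, size=200)

X = np.array([window_features(w) for w in windows])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```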
Background: The population of young individuals not in employment, education or training (NEET) is highly diverse, but a common problem appears to be their mental health. NEETs due to illness or disability are of particular concern for social exclusion, but little is known about how NEETs with and without disability make use of, and gain from, employment interventions. There is also a scarcity of research on psychological interventions and mental health outcomes in NEET individuals. Acceptance and commitment therapy (ACT) has shown promising results on psychological outcomes in young adults. Objective: The study aimed to expand the knowledge on the effects of an app-based intervention built on ACT in NEETs with and without disability. Methods: A two-armed randomized controlled trial was conducted in 2021, including 151 NEET individuals aged 16-24 years. Participants were recruited mainly via social media platforms and through organizations working with NEET individuals. The intervention group (n=77) used an app for psychological well-being, with the possibility of digital group meetings, for 6 weeks, and the control group (n=74) received film clips once a week. Outcomes were self-assessed through questionnaires. Statistical analyses were performed using chi-square tests, Mann-Whitney U tests, GLM and logistic regression. Results: No differences in effects on mental health were seen between the intervention and control groups, either overall or between NEET individuals with and without disability. Usage data show that 68.6% of the participants in the intervention group downloaded the app and 24.7% completed all six modules. Effects on employment and education levels were only seen within the intervention group, where those who had completed one or more modules had a higher likelihood of being active in terms of employment and education compared with those who did not complete any modules. No significant effects were seen in employment and education levels in relation to disability status. A high proportion of the participants had a disability, few were in contact with a youth employment center, and female participants were overrepresented in general. Participants with disabilities had lower self-esteem, had less frequently completed high school, fewer had work experience, and a larger proportion had been in the NEET situation for over a year. A higher drop-out rate was seen among participants in the intervention group and among male participants. Conclusions: No effects of the app-based intervention on psychological well-being were seen between NEET individuals with disability and those without, but the results showed potential effects on employment and education levels related to engagement in the intervention. NEETs with disability are of particular concern and might need additional efforts or other types of interventions than the one investigated herein. Findings should be considered weak due to the low adherence and high attrition. Clinical Trial: Registered on 12 February 2021 at ISRCTN (#ISRCTN46697028), https://doi.org/10.1186/ISRCTN46697028
Background: Despite stillbirth being a critical quality measure for care during pregnancy and childbirth, it is often overlooked, especially amongst marginalized populations. Our study aims to add to the limited body of knowledge on stillbirth determinants and barriers to stillbirth data availability in tribal populations. Objective: The study objectives are: 1) to determine the factors associated with stillbirth, 2) to review the stillbirth reporting system and identify existing barriers, and 3) to make recommendations to address the determinants and improve the stillbirth reporting system in the study area. Methods: A mixed-methods approach integrating both quantitative and qualitative designs is adopted for the study. The quantitative component will be a population-based, matched case-control study with a case-to-control ratio of 1:2. A total of 450 participants (150 cases and 300 controls) will be included. Cases will be tribal women aged 15-49 years who delivered a stillborn baby in the last year. Selection of cases will be based on the WHO definition of late fetal deaths, i.e., third-trimester stillbirths at >28 completed weeks of gestation. Controls will be tribal women (15-49 years) who delivered a live baby, irrespective of gestation period, during a similar time period. Both cases and controls will be selected randomly from all six blocks of Jhabua district, Madhya Pradesh, India. The qualitative component will include four focus group discussions and 22 in-depth interviews with various stakeholders. The study has been approved by the Research Advisory Board of IIHMR Delhi and at the participating study site. Results: Data collection will take approximately three months, starting in February 2025. The study is scheduled from February 2025 to January 2026. Statistical analysis will be performed on the collected data using SPSS V.21.0. Univariate logistic regression will be performed for each independent variable to estimate crude odds ratios with 95% confidence intervals. Sensitivity analysis will be carried out to assess the impact of missing data, if any. For qualitative data, we will use ATLAS.ti software to assign preliminary codes. A deductive approach will be utilized for the development of themes. The quantitative and qualitative findings will be integrated using a mixed-methods matrix. We plan to publish our results in a peer-reviewed journal and present our findings at academic conferences. Conclusions: The study is expected to generate evidence on the gravity of the situation in the tribal population of Jhabua district. The critical findings of the study and the exploration of associations between variables will inform targeted interventions and policy recommendations to enhance stillbirth surveillance and reporting systems in marginalized communities. The results will be instrumental in addressing data gaps and fostering equitable healthcare practices in resource-limited settings. Clinical Trial: NA
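For the planned univariate analysis, a crude odds ratio with a 95% confidence interval for a binary exposure can be computed directly from a 2x2 table, which is equivalent to univariate logistic regression for that variable. The counts below are hypothetical, although they respect the planned 150 cases and 300 controls.

```python
# Sketch: crude odds ratio with a Wald 95% CI from a 2x2 table
# (cases vs. controls, exposed vs. unexposed); counts are illustrative.
import math

a, b = 60, 90    # exposed:   cases, controls
c, d = 90, 210   # unexposed: cases, controls

or_crude = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(or_crude) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_crude) + 1.96 * se_log_or)

print(f"crude OR = {or_crude:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```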
Background: Childhood obesity is a public health concern associated with serious health issues. The food environment, which in recent years has undergone widespread changes leading to increased access to ultra-processed foods (UPFs), has been given attention in relation to the development of childhood obesity. One part of the food environment is food marketing. Studies from around the world, including Sweden, show that the food marketing landscape is dominated by foods associated with negative health outcomes. However, in previous studies, the investigated areas have been determined by researchers. Objective: The aim of this study was to test a new child-centric methodology to further advance the understanding of the outdoor food advertisement landscape in Sweden. Methods: A cross-sectional study was performed in two Swedish counties (Stockholm and Gävleborg). Initially, 45 students from four schools in areas with varying socioeconomic status (SES) used a smartphone application (app) to take pictures of food advertisements that they encountered in their everyday lives. The app also recorded the GPS location of where the pictures were taken. Pictures with associated GPS data were automatically uploaded and visualised in a secure cloud-based dashboard, allowing for identification of areas where children see many food advertisements, so-called “hotspot areas”. The identified hotspot areas were subsequently visited by two researchers who systematically mapped all the food advertisements in the areas using cameras. All pictures of food advertisements taken by the researchers in the hotspot areas were later analysed based on their content of UPFs, health-promoting foods such as fruit, berries, vegetables and seafood (FBVS), as well as price promotions. Results: Based on 1310 pictures of food advertisements taken by the students, 34 hotspot areas were identified. A total of 2955 pictures of food advertisements were taken by the researchers in the hotspot areas during the mapping activity. The picture analysis showed that 78% of the advertisements contained UPFs and 21% contained FBVS. Of all food advertisements in all areas combined, 24% contained a price promotion. Of all price promotions, 74% advertised UPFs and 20% advertised FBVS. Conclusions: This study showed that the vast majority of outdoor food advertisements in areas where children spend time advertise UPFs and only 21% advertise health-promoting foods such as FBVS. The findings continue to highlight that the food advertised in the Swedish outdoor environment is not in line with dietary guidelines and that it might be time to consider regulatory measures.
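In this study the hotspot areas were identified visually on the dashboard. Purely as an illustration, the sketch below shows one way such hotspots could also be derived programmatically from the GPS-tagged photos, using density-based clustering; this is an assumption for demonstration, not the authors' method, and the coordinates are made up.

```python
# Hedged sketch: deriving candidate "hotspot areas" from GPS-tagged photo
# locations with DBSCAN and a haversine metric (coordinates in radians).
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000
lat_lon_deg = np.array([
    [59.3326, 18.0649], [59.3327, 18.0651], [59.3325, 18.0650],   # dense area
    [60.6745, 17.1417], [60.6746, 17.1419],                       # second area
    [59.8586, 17.6389],                                           # isolated photo
])

# eps of ~150 m expressed in radians for the haversine metric
db = DBSCAN(eps=150 / EARTH_RADIUS_M, min_samples=3, metric="haversine")
labels = db.fit_predict(np.radians(lat_lon_deg))
print(labels)   # -1 marks noise; non-negative labels are candidate hotspot areas
```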
The management of human dissection labs and medical education are significantly impacted by the resurgence and spread of monkeypox (Mpox) as a global health issue, especially in Africa. Human dissection, a crucial part of anatomical education, requires strict procedures to protect medical students and instructors from the spread of infectious diseases. This article highlights important hazards and biohazard concerns while examining the difficulties presented by the Mpox pandemic in the context of cadaveric dissection. Through a review of literature, institutional protocols and epidemiological data, we propose improved personal protective equipment (PPE) regulations and disinfection guidelines tailored for African medical facilities. This article highlights the need for capacity-building programs to equip educators and students with skills to manage infectious disease risks effectively. By tackling these challenges, we aim to advance medical education safely while contributing to discussions on public health emergency adaptations and fostering pandemic resilience.
Background: Endotoxin contamination in conventionally purified water poses serious risks to hemodialysis patients, leading to complications such as inflammation and sepsis. Addressing these risks is essential for enhancing patient safety and meeting global dialysis water quality standards. Advanced filtration technologies, such as titanium dioxide (TiO₂)-based nanoparticle filters, offer a promising approach to improve water purification processes in renal care. Objective: This study aimed to develop and evaluate the effectiveness of a TiO₂-based nanoparticle microporous filtration system for hemodialysis water purification. The objectives included analyzing the system's performance in reducing chemical contaminants (calcium, magnesium, aluminum, and lead) and microbiological contaminants (total viable count [TVC] and endotoxin units [EU]) across multiple renal centers. Methods: Water samples from three renal centers (RC1, RC2, and RC3) were analyzed pre- and post-filtration. TiO₂ nanoparticles were synthesized using the sol-gel method and characterized via Fourier Transform Infrared (FTIR) spectroscopy and Scanning Electron Microscopy with Energy Dispersive X-ray analysis (SEM/EDX). The microporous filter, fabricated with TiO₂ nanoparticles, silicon dioxide, and polyethylene glycol (PEG), was tested for its ability to remove contaminants. Analytical techniques included spectroscopy for chemical analysis and microbiological assays for contaminant quantification. Results: Post-treatment analysis revealed significant reductions in chemical contaminants, with removal efficiencies averaging 78% for calcium, 80% for magnesium, 81% for aluminum, and 76.6% for lead across all centers. Microbiological contamination was also substantially reduced, with 78–80% removal of TVC and 76–84.6% reduction in EU levels. FTIR analysis confirmed the presence of hydroxyl groups critical for adsorption, while SEM/EDX characterization revealed a crystalline structure with a particle size of 1.45 nm, pore size of 4.11 μm, filter height of 2.56 mm, and bulk density of 0.58 g/cm³. Conclusions: The TiO₂-based nanoparticle filtration system demonstrated high efficacy in removing chemical and microbiological contaminants, significantly improving water quality for hemodialysis. These results highlight its potential as a practical solution for renal centers, especially in resource-constrained settings. Further studies are needed to evaluate its long-term performance and feasibility for widespread adoption. Renal centers should consider adopting TiO₂-based nanoparticle filters to address persistent water quality challenges. Pilot implementations across diverse settings can provide insights into operational feasibility. Additional research should explore scalability, maintenance requirements, and cost-effectiveness to optimize integration into healthcare systems. This study introduces a practical and innovative solution to improve hemodialysis water purification. By effectively reducing both chemical and microbiological contaminants, the TiO₂-based filtration system has the potential to enhance patient safety and outcomes, particularly in settings where maintaining high water quality standards remains challenging.
This study investigates the behavioral dynamics of sociopaths, focusing on their reliance on glibness (superficial charm) as a primary manipulation tactic and aggressiveness as a secondary strategy when charm fails. Sociopathy, characterized by manipulative tendencies and a lack of empathy, often manifests in adaptive yet harmful behaviors aimed at maintaining control and dominance.
Using the Deenz Antisocial Personality Scale (DAPS-24) to collect data from 34 participants, this study examines the prevalence and interplay of these dual strategies. Findings reveal that sociopaths employ glibness to disarm and manipulate, transitioning to aggressiveness in response to resistance. The implications for understanding sociopathic manipulation are discussed, emphasizing the importance of early detection and intervention in both clinical and social contexts.
Background: For adults living with type 1 diabetes (T1D), mental health support is limited. Peer support and digital health platforms are promising strategies to deliver mental health support to this population, particularly those from geographically marginalized communities. Mobile applications (apps), in particular, can enhance self-management and deliver support. Objective: We developed a novel mobile app, T1D REACHOUT, that delivers mental health support to adults with T1D. We aim to describe the iterative co-design and development process of the REACHOUT app, including its use in a pilot trial and subsequent revisions made before its evaluation in a randomized controlled trial (RCT). Methods: A co-creation approach was used to develop the REACHOUT app. An initial think tank and six focus groups were conducted with adults with T1D to better understand their support needs and identify requirements. Following this, we partnered with adults living with T1D, the “end users,” to iteratively co-design the REACHOUT app, enhancing usability and ensuring relevance. Adapting the open-source Rocket.chat platform to our specifications, we deployed the app in a single cohort pilot study. A network analysis of messages exchanged during the pilot study was performed to explore trends and patterns and to demonstrate implementation feasibility. Pilot study outcomes informed further refinement before implementation in an RCT. Results: Thirty-one focus group participants and 11 end-user partners participated in the development of REACHOUT. The current version of the REACHOUT app features six key components identified in the initial focus groups: a 24/7 community chat room (a customized group messaging function with threads), topic-specific discussion boards, a peer supporter library, peer supporter profiles for a user-driven matching process, small group virtual sessions, and direct (one-to-one) messaging. Forty-six participants were encouraged to use any or all of the features as frequently as they desired over a 5-month period during a pilot trial. During this time, 179 private small groups were created, and 10,410 messages were sent, including 1,389 chat room messages and 7,116 direct messages; among these were 3,446 messages exchanged between participants and their self-selected peer supporters. Conclusions: Key factors for successful implementation included (1) the co-design process involving comprehensive user engagement and (2) the opportunities realized through in-house development rather than contracting developers. The REACHOUT app offers a mechanism to access peer support for communities with limited mental health resources. Once validated in a prospective RCT, it may serve as a scalable mental health support intervention.
Background: Mood disorders, including bipolar disorder (BP) and major depressive disorder (MDD), are characterized by significant psychological and behavioral fluctuations, with mobility patterns serving as potential markers of emotional states. Objective: Leveraging GPS data as an objective measure, this study explores the diagnostic and monitoring capabilities of Fourier transform, a frequency-domain analysis method, in mood disorders. Methods: A total of 62 participants (BP: 20; MDD: 27; healthy controls: 15) contributed 5,177 person-days of data over observation periods ranging from 5 days to 6 months. Key GPS indicators—location variance (LV), transition time (TT), and entropy (EN)—were identified as reflective of mood fluctuations and diagnostic differences between BP and MDD. Results: Fourier transform analysis revealed that the maximum power spectra of LV and EN differed significantly between BP and MDD groups, with BP patients exhibiting greater periodicity and intensity in mobility patterns. Notably, BP participants demonstrated consistent periodic waves (e.g., 1-day, 4-day, and 9-day cycles), while such patterns were absent in MDD. Daily GPS data showed stronger correlations with ecological momentary assessment (EMA)-reported mood states compared to weekly or monthly aggregations, emphasizing the importance of day-to-day monitoring. Depressive states were associated with reduced LV and TT on weekdays, and lower EN on weekends, indicating that mobility features vary with social and temporal contexts. Conclusions: This study underscores the potential of GPS-derived mobility data, analyzed through Fourier transform, as a non-invasive and real-time diagnostic and monitoring tool for mood disorders. The findings suggest that the intensity of mobility patterns, rather than their frequency, may better differentiate BP from MDD. Integrating GPS data with EMA could enhance the precision of clinical assessments, provide early warnings for mood episodes, and support personalized interventions, ultimately improving mental health outcomes. This approach represents a promising step toward digital phenotyping and advanced mental health monitoring strategies.
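To make the frequency-domain step concrete, here is a minimal sketch of how a dominant period can be read off a daily GPS-derived series (location variance here) via a Fourier power spectrum; the series is synthetic, and the study's preprocessing, windowing, and group comparisons are not reproduced.

```python
# Minimal sketch of the frequency-domain idea described above: take a daily
# GPS-derived series (e.g., location variance), compute its Fourier power
# spectrum, and read off the dominant period. The series below is synthetic.
import numpy as np

days = 180
t = np.arange(days)
# Synthetic location-variance series with an embedded 9-day cycle plus noise.
lv = 1.0 + 0.4 * np.sin(2 * np.pi * t / 9) \
    + 0.1 * np.random.default_rng(0).standard_normal(days)

freqs = np.fft.rfftfreq(days, d=1.0)             # cycles per day
power = np.abs(np.fft.rfft(lv - lv.mean())) ** 2 # power spectrum, DC removed

peak = freqs[np.argmax(power)]
print(f"dominant period ~ {1 / peak:.1f} days, max power = {power.max():.1f}")
```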
Background: Increasing reliance on digital health resources can create disparities among older patients. Understanding the health-related, mobility, and socioeconomic factors associated with the use of eHealth technologies is important for addressing inequitable access to healthcare. Objective: We sought to assess digital health literacy among patients aged ≥65 years and identify factors associated with their ability to access, understand, and use digital health resources. Methods: We conducted a cross-sectional survey of 871 patients aged ≥65 years. Analyses were performed to identify associations between digital health literacy and self-rated health, mobility, and socioeconomic deprivation assessed with the area deprivation index (ADI). Results: Respondents with lower self-rated health had lower levels of digital health literacy: only 54.2% of those with poor self-rated health were able to send a message to their doctor, compared with 89.5% of patients with excellent self-rated health. All comparisons across the digital health literacy domains were statistically significant by self-rated health (P<0.05). Respondents with mobility restrictions had lower levels of digital health literacy across several domains, with only 32.6% able to use a video/camera with their doctor compared with 48% of those without mobility restrictions (P=0.0010). Respondents with a high ADI (≥80%) also had lower levels of digital health literacy across several domains, with only 57.4% able to send a message to their doctor compared with 80.2% of those without a high ADI. Conclusions: Our findings highlight the need for targeted interventions to improve engagement with eHealth among patients aged ≥65 years, which is impacted by poor health, limited mobility, and socioeconomic deprivation. Enhancing digital health literacy can help bridge the gap in access to digital health resources and improve overall health outcomes for this population.
Background: Patients with mood or psychotic disorders experience high rates of unplanned hospital readmissions. Predicting the likelihood of readmission can guide discharge decisions and optimize patient care. Objective: The purpose of this study is to evaluate the predictive power of structured variables from electronic health records (EHRs) for all-cause readmission across multiple sites within the Mass General Brigham (MGB) health systems and to assess the transportability of prediction models between sites. Methods: This retrospective, multi-site study analyzed structured variables from EHRs separately for each site to develop in-site prediction models. The transportability of these models was evaluated by applying them across different sites. The predictive performance was measured using the F1 score, and additional adjustments were made to account for differences in predictor distributions. Results: The study found that the relevant predictors of readmission varied significantly across sites. For instance, the length of stay was a strong predictor at only three of the four sites. In-site prediction models achieved an average F1 score of 0.666, whereas cross-site predictions resulted in a lower average F1 score of 0.551. Efforts to improve transportability by adjusting for differences in predictor distributions did not lead to better performance. Conclusions: The findings indicate that individual site-specific models are necessary to achieve reliable prediction accuracy. Furthermore, the results suggest that the current set of predictors may be insufficient for cross-site model transportability, highlighting the need for more advanced predictor variables and predictive algorithms to gain robust insights into the factors influencing early psychiatric readmissions.
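A minimal sketch of the in-site versus cross-site evaluation idea follows: train a model on one site's structured features, then score F1 both on that site and on a second site with a shifted predictor distribution. Logistic regression and the synthetic features are stand-ins; the study's actual EHR variables and models are not available here.

```python
# Minimal sketch of in-site vs. cross-site F1 evaluation with a stand-in
# model and synthetic "structured EHR" features; not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def make_site(n=500, shift=0.0):
    """Synthetic features with a site-specific distribution shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > shift).astype(int)
    return X, y

X_a, y_a = make_site(shift=0.0)   # "site A"
X_b, y_b = make_site(shift=0.8)   # "site B", different predictor distribution

model_a = LogisticRegression().fit(X_a, y_a)
# For brevity the in-site score is computed on the training data; a real
# evaluation would use a held-out split.
print("in-site F1:   ", round(f1_score(y_a, model_a.predict(X_a)), 3))
print("cross-site F1:", round(f1_score(y_b, model_a.predict(X_b)), 3))
```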
Background: The impact of COVID-19 has primarily been studied in the context of language delays or developmental disorders in infants and children. However, the effects on young adults have received less attention. COVID-19 not only affects physical health but also cognitive and language functions, which is an emerging area of research. While previous studies have focused on developmental stages, the effects of COVID-19 on the language abilities of healthy young adults remain underexplored. This study aimed to investigate the impact of COVID-19 on the spoken language, particularly in story retelling and working memory, in young adults. Objective: The objective of this study was to assess the effects of COVID-19 on spoken language abilities, particularly story retelling and working memory, in young adults. The study sought to understand how COVID-19 might influence the spoken discourse abilities of young adults, and whether these effects are temporary or long-lasting. Methods: The study involved 77 young adult participants, of whom 39 were in the non-COVID group and 38 were in the COVID group. Participants underwent the Story Retelling Procedure (SRP) and working memory tests. The SRP test, which heavily relies on auditory comprehension and memory, was used to evaluate the impact of COVID-19 on spoken discourse. Working memory was also assessed to examine potential COVID-related disruptions in cognitive functions. Results: The results revealed a significant reduction in performance on the SRP test in the COVID group compared to the non-COVID group. The mean score for the COVID group was 5.67 (SD = 2.01), while the non-COVID group’s mean was 7.15 (SD = 1.78), with a statistically significant difference (p = 0.03). This suggests that COVID-19 had a negative impact on the ability to retell stories. However, no significant differences were found in working memory performance between the two groups (p = 0.45), indicating that working memory was not notably affected by COVID-19 in this sample. Conclusions: COVID-19 was found to negatively affect spoken discourse, particularly story retelling abilities, in young adults, although it did not impact working memory. The findings suggest that COVID-19 may cause temporary disruptions in language abilities in healthy young adults, with implications for future studies on long-term effects, particularly regarding long-COVID symptoms. Further research is needed to explore the lasting impact of COVID-19 on language processing, especially in individuals experiencing persistent symptoms.
Background: Wearable self-tracking technologies are increasingly recognized for their potential to enhance therapeutic engagement and personalize treatment. While many instruments emphasize passive data collection, their role in actively mediating therapeutic processes remains underexplored. This study explores how the One Button Tracker (OBT), a novel single-purpose wearable self-tracking instrument, supports psychotherapeutic treatment by enabling patients to track self-defined, personally relevant phenomena during their daily lives. Objective: To explore how the OBT mediates the psychotherapeutic process in patients’ daily lives, focusing on its impact on therapeutic engagement, self-awareness, and the therapeutic relationship. Methods: This qualitative study was part of a larger Participatory Action Research project conducted at a specialized clinic for trauma survivors in Denmark. Nine patients, refugees diagnosed with Complex PTSD, used the OBT as part of their therapy. Semi-structured interviews were conducted at three stages: before, during, and after treatment. Thematic analysis was used to analyze the data, guided by a postphenomenological framework focusing on technologies’ mediation of human-world relations. Results: Thematic analysis identified five key themes describing the OBT’s multistable roles: (1) from external instrument to extension of the self, (2) mental switch, (3) a faithful companion, (4) scarlet letter, and (5) emergency lifeline. The OBT supported engagement in therapeutic interventions during moments of distress, enhanced emotional regulation, and strengthened the therapeutic relationship by extending its influence beyond clinical sessions. Its simplicity and vibrotactile feedback facilitated engagement and usability, while its multistability allowed patients to adapt its use to their intentions and contexts. However, the presence and sometimes visibility of the OBT introduced complex social dynamics, amplifying both engagement and stigma depending on individual circumstances and context. Conclusions: The findings suggest that the OBT acts as an active mediator in the therapeutic process, fostering agency, emotional regulation, and engagement. By shifting the focus from passive data collection to meaningful interaction with the instrument, the OBT highlights the potential of wearable self-tracking instruments to actively shape therapeutic experiences. These insights underscore the value of designing digital mental health instruments that prioritize simplicity, multistability, and relational engagement to support personalized and context-sensitive care.
Background: Venous thromboembolism (VTE) is a common vascular disorder requiring extended anticoagulation therapy post-discharge to reduce recurrence risk. Home rehabilitation management systems that utilize electronic health records (EHR) from hospital care provide opportunities for continuous patient monitoring. However, transferring medical data from clinical to home settings raises significant concerns about privacy and security. Conventional methods such as manual data entry, optical character recognition, and dedicated data transmission lines face notable technical and operational challenges. Objective: The aim of this study is to develop a QR code-based secure transmission algorithm (QRST-AB) using Avro and Byte Pair Encoding (BPE). The algorithm facilitates the creation of out-of-hospital health records by enabling patients to scan QR codes via a dedicated mobile application, ensuring data security and user privacy. Methods: Between January and October 2024, 300 hospitalized VTE patients were recruited at the Sixth Medical Center of the Chinese PLA General Hospital. Post-discharge, participants used a home rehabilitation application tailored for VTE management. The QRST-AB algorithm was developed to securely transfer in-hospital EHR to the application. It incorporates cryptographic hash functions for authentication and employs BPE, Avro, and Gzip for optimized data compression. Specifically, BPE tokenizes medical text, while Avro serializes JSON objects, contributing to data encryption. A proprietary tokenizer was trained using a "Chinese Medical Text Dataset," and compression efficiency was evaluated using a "Performance Benchmark Dataset." Comparative analyses were conducted to assess the compression efficiency of JSON serialization methods, Avro and ASN.1, and tokenization algorithms, BPE and unigram. Results: The dataset consisted of JSON files from 300 patients, averaging 240.1 fields per file (range: 89–623) and 7,095 bytes in size (range: 2,748–17,425 bytes). Using the BPE + Avro + Gzip algorithm, the average file size was reduced to 1,048 bytes, achieving a compression ratio of 6.67. This was 1.82 times more efficient than traditional Gzip compression (average file size: 1,907 bytes; compression ratio: 3.66; P < 0.001). For Chinese medical text tokenization, BPE outperformed unigram with a compression ratio of 4.68 versus 4.55 (P < 0.001). Avro and ASN.1 demonstrated comparable compression ratios of 2.57 and 2.59, respectively, when used alone (P = 0.299). However, Avro combined with BPE and Gzip significantly outperformed ASN.1, achieving compression ratios of 6.67 versus 5.21 (P < 0.001). Additionally, 84.7% of patients needed to scan only one QR code, requiring an average of 3.1 seconds. Conclusions: The QRST-AB algorithm efficiently compresses and transmits data in an encrypted manner and authenticates the identity of the scanning users, ensuring the privacy and security of medical data. Delivered as a software development kit, the algorithm offers straightforward implementation and usability, supporting its broad adoption across various applications.
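For readers unfamiliar with the compression-ratio comparison, the sketch below shows only the generic Gzip stage applied to a JSON record; the BPE tokenizer, Avro schema, hash-based authentication, and QR encoding that make up QRST-AB are study-specific and are not reproduced here.

```python
# Minimal sketch of a compression-ratio measurement on a JSON record.
# Only the generic Gzip stage is shown; QRST-AB's BPE tokenizer, Avro
# serialization, and authentication are study-specific and omitted.
import gzip
import json

# Hypothetical record; field names and content are illustrative only.
record = {"patient_id": "anon-001", "diagnosis": "VTE", "fields": ["示例字段"] * 50}
raw = json.dumps(record, ensure_ascii=False).encode("utf-8")

compressed = gzip.compress(raw)
print(f"raw: {len(raw)} bytes, gzip: {len(compressed)} bytes, "
      f"ratio: {len(raw) / len(compressed):.2f}")
```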
Background: The rising prevalence of chronic diseases among older adults in China highlights the need for a more robust and efficient healthcare system. The existing system, characterized by fragmentation and limited coordination, faces challenges in delivering comprehensive care for chronic diseases among community-dwelling older adults with multiple comorbidities. There is a pressing need for tailored and integrated care for chronic conditions that promotes resource sharing, enhances access to advanced facilities, offers expert guidance, and ensures safe and effective management. Objective: The objectives are to investigate the unmet healthcare needs of Chinese community-dwelling older adults, explore the acceptability of the PRISMA model, and examine their needs for integrated care by case managers. Additionally, the study seeks to develop a comprehensive questionnaire to assess general and specific expectations, analyze expectation levels, identify sociodemographic factors influencing these expectations, and ultimately formulate an evidence-based integrated care model tailored to optimize healthcare delivery for the ageing population. Methods: An exploratory sequential mixed-methods approach, comprising three sequential phases, incorporates elements from the PRISMA integrated care model and considers the specific expectations of community-dwelling older adults with multiple comorbidities. Phase I involves a qualitative study to gather in-depth evidence on healthcare needs and integrated care expectations. Phase II focuses on developing and validating a comprehensive questionnaire. Phase III comprises a quantitative survey conducted across three cities representing central, eastern, and western China. Data integration will follow a data-building approach, combining qualitative and quantitative findings in the final analysis to provide a comprehensive understanding and refine insights into expectations towards integrated care for community-dwelling older adults. Results: Data collection for this study will begin in October 2025. The duration of the study is planned to be 24 months. Ethical approval has been obtained from the Institutional Ethics Committee. Conclusions: This study aims to address significant gaps in current healthcare provision while improving the quality, accessibility, and efficiency of services. By exploring how integrated care can be facilitated through a centralized point of access managed by a case manager, it seeks to enhance community care. The findings have the potential to inform policy decisions, guide the implementation of integrated care delivery, and ultimately improve health outcomes and the quality of life for older adults in China. Clinical Trial: The study protocol has been registered on osf.io (registration DOI: https://doi.org/10.17605/OSF.IO/825AH).
Background: Sexually transmissible infections (STIs) typically concentrate in core areas, or risk spaces, that can be defined geographically. Their co-existence with HIV may result in severe health complications. Objective: The objectives are to (a) determine the seroprevalence of hepatitis B, syphilis, and chlamydia infections among people living with HIV (PLWHIV) from rural and urban communities; (b) evaluate the effect of STI comorbidity on hematological parameters among PLWHIV within these communities; (c) assess the factors exacerbating the morbidity of sexually transmissible diseases among PLWHIV; and (d) develop context-specific policy recommendations to facilitate the mitigation of socio-cultural and socio-economic drivers of the morbidity of these infections in the rural and urban settings of Meme division. Methods: A hospital-based cross-sectional design will be used to recruit a minimum of 178 PLWHIV from urban and rural communities in Meme division from December 2024 to March 2025. Data will be collected using well-structured questionnaires with the help of the KoBoToolbox software, and about 4 mL of blood will be collected for syphilis, hepatitis B, chlamydia, full blood count, and ABO blood grouping serological assays. Data will be analyzed using the STATA and GraphPad Prism statistical packages, and p-values <0.05 will be considered statistically significant. Results: The seroprevalence of syphilis, hepatitis B, and chlamydia among PLWHIV from rural and urban communities will be determined; the association between different WBC parameters and the occurrence of STIs will be determined; the factors exacerbating the spread of these infections among PLWHIV from rural and urban communities will be identified; and context-specific policy recommendations to facilitate the mitigation of socio-cultural and socio-economic drivers of the morbidity of these infections in rural and urban settings will be developed. Conclusions: The prevalence of STIs among PLWHIV is high, and this affects hematological parameters, with factors such as multiple sexual partners, age, HIV status, and dental procedures outside health facilities exacerbating the morbidity of these infections.
Background: Social determinants of health (SDOH) are the conditions in which people are born, grow, live, work, and age, encompassing social and economic factors that shape health outcomes. There is an increasing call to leverage digital health technology (DHT) to address SDOH and health-related social needs and establish connections to resources and services. Objective: This study aimed to: 1) identify the DHT-related characteristics of DHT users with low socioeconomic status (SES), 2) determine the needs and preferences of DHT users with low SES, and 3) explore how current SDOH-DHT address these needs and preferences. Methods: We employed a multi-phase, mixed-method, user-centered design approach. In Phase 1, we developed a user profile based on a literature review, aggregate data, interviews with 26 low-SES individuals, and focus groups with 28 professionals. In Phase 2, we conducted a landscape analysis of 17 existing SDOH-DHT. Results: DHT users of low SES had diverse social and technology characteristics. Five key themes emerged regarding user needs and preferences: 1) user-centered design, including multilingual support, visual guidance, and customization; 2) efficient, solution-based assessment of social risks, assets, and needs; 3) e-caring support features; 4) user education and feedback mechanisms; and 5) trust, privacy, and security. The landscape analysis revealed that current SDOH-DHT features do not adequately meet these needs. Conclusions: Discrepancies between target user needs and current DHT features represent missed opportunities in developing user-centered tools for individuals of low SES. Findings underscore the importance of inclusive, empowering, and responsive design in SDOH-DHT to bridge health disparities and advance public health.
Background: Early assessment of mild cognitive impairment (MCI) in older adults is crucial, as it enables timely interventions and decision-making. In recent years, researchers have been exploring the potential of gamified interactive systems (GIS) to assess pathological cognitive decline. Yet, researchers are still investigating effective methods for system integration, designing GIS that are perceived as engaging whilst also improving the accuracy in assessing cognitive decline. Objective: This review aims to comprehensively investigate GIS used to assess MCI. Specifically, we reviewed the existing systems to understand the different game types (including genres and interaction paradigms) employed for assessment. Additionally, we examined the cognitive functions targeted. Finally, we investigated the evidence for the performance of assessing MCI through GIS by looking at the quality of validation for these systems in assessing MCI and the diagnostic performance reported. Methods: A systematic search was conducted in IEEE Xplore, ACM Digital Library, and Scopus to identify interactive gamified systems developed for assessing MCI. Game types were categorized according to genres and interaction paradigms. The cognitive functions targeted by the systems were compared with those assessed in the MoCA. Lastly, we examined the quality of validation on ground truth, relevance of controls, and sample size. The diagnostic performance on sensitivity, specificity, and AUC are reported. Results: A total of 81 papers covering 49 GIS were included in this review. The primary game types used for MCI assessment were classified as casual games (n = 30), simulation games (n = 17), full-body movement games (n=4), and dedicated interactive games (n = 3). Only six out of 49 systems assess cognitive functions comprehensively, compared to those functions assessed via the MoCA. The reported diagnostic performances of GIS were comparable to common screening instruments like MMSE and MoCA, with some systems reporting near-perfect performance (100% sensitivity and specificity). Conclusions: This review provides a comprehensive summary of the literature on GIS for assessing MCI, explores the cognitive functions assessed by these systems, and evaluates their diagnostic performance. The results indicate that current GIS hold significant promise for the assessment of MCI, with several systems demonstrating diagnostic performance comparable to established screening tools. However, these systems' model training and validation exhibited significant deficiencies. Hence, despite some systems reporting impressive performance, there is a need for improvement in the validation studies, particularly concerning sample size and methodological rigor. Finally, we advocate for increased longitudinal research to enhance the reliability of these systems in evaluating MCI.
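As a reminder of how the diagnostic performance reported above is computed, the sketch below derives sensitivity, specificity, and AUC from a confusion matrix and risk scores; the labels and scores are synthetic, not data from any reviewed GIS.

```python
# Minimal sketch of sensitivity, specificity, and AUC on synthetic labels
# and risk scores; no data from the reviewed GIS studies are used here.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                  # 1 = MCI, 0 = control
scores = y_true * 0.6 + rng.normal(0, 0.3, size=200)   # synthetic risk scores
y_pred = (scores > 0.3).astype(int)                    # hypothetical cutoff

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"AUC         = {roc_auc_score(y_true, scores):.2f}")
```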
Background: Atopic dermatitis (AD) is a chronic, relapsing skin condition that significantly impacts patients' quality of life. In clinical practice, AD is commonly managed with emollients and topical corticosteroids. Haidebao body lotion (HBL) incorporating calcium-based antimicrobial peptide compounds (CAPCS) has demonstrated clinical benefits for patients with mild AD, but high-quality clinical trial evidence is lacking. Objective: In this study, we will conduct a multi-center, double-blind, randomized, placebo-controlled trial to evaluate the efficacy and safety of HBL incorporating CAPCS as an adjunctive therapy for ameliorating mild AD. Methods: This multi-center, randomized, double-blind, placebo-controlled trial will recruit 200 eligible participants at ten hospitals in China from October 2023 to October 2025. AD is confirmed in accordance with the Williams diagnostic criteria, and AD patients aged 18-55 years who have signed informed consent forms will be recruited. Patients who are pregnant, have serious underlying diseases or communication barriers, or violate the medication regulations will be excluded. The 200 AD patients will be randomly assigned (1:1) to the treatment group (HBL with CAPCS, n=100) or the control group (HBL without CAPCS, placebo, n=100), and each participant in both groups will receive three treatment sessions per day for 4 weeks. The primary outcome is the proportion of patients achieving at least 60% improvement in the eczema area and severity index (EASI) score from baseline to week 2. The secondary outcomes include the numeric rating scale (NRS) and dermatology life quality index (DLQI) at week 2 and week 4; adherence and adverse events will also be recorded. The full analysis set (FAS) and per-protocol set (PPS) will be analyzed with the SAS 9.3 software package, and a P value less than 0.05 is considered statistically significant. Results: This study was reviewed and approved by the Institutional Ethics Review Committee of Shanghai Skin Diseases Hospital in 2023 (2023-33). Participant recruitment began in January 2024 and is expected to be completed in December 2024. The study was submitted for registration in the Chinese Clinical Trial Registry on May 8, 2024, and approved on July 24, 2024; the registration number is ChiCTR2400087274. The study will be conducted in strict accordance with the Declaration of Helsinki. Conclusions: This study will evaluate the clinical efficacy and safety of HBL incorporating CAPCS in the treatment of patients with mild AD. If the treatment efficacy is proven, HBL incorporating CAPCS could be used clinically as an adjunctive therapy for ameliorating mild AD.
Background: Scores and prediction models, such as the MESS score for trauma and the WIfI classification for diabetic foot ulcers, help in the decision-making process for amputation. However, they can be subjective, as they depend on the experience of the medical staff applying them. Objective: To assess the impact of temperature measurement using infrared thermal imaging on extremity salvage. Methods: We included 29 patients who sought a second opinion after an amputation recommendation. Infrared thermographic images were acquired to measure the temperature differences (ΔT) between the injured and uninjured limbs. For the salvaged limbs, we provided clinical follow-up for 12 weeks. Results: Of the patients enrolled in the study, 27 limbs were salvaged. Thermographic images allowed the discrimination of two groups: a first group of 18 patients with negative deltas (ΔT -3.6°C ± 1.99) and a second group of 9 patients with positive deltas (ΔT 3.36°C ± 2.71). Neither group showed enlargement of its delta in the first 5 days, and by the twelfth week ΔT approached 0°C at wound closure. Of the two patients who required amputation, one showed an initial ΔT of -4.3°C, which worsened to -5°C by the fifth day; the other showed an initial ΔT of -4.5°C, which worsened to -5.8°C by the fifth day. Conclusions: Digital infrared thermography is a tool that can help guide limb salvage in patients with uncertain clinical diagnoses. This imaging modality allows visualization of thermal differences and of patterns derived from thermal changes in patients at risk of limb amputation. Clinical Trial: This study was approved under registry 08-23 by the Research Ethics Committee of the Hospital Central “Dr. Ignacio Morones Prieto” (CONBIOÉTICA-24-CEI-001-20160427).
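A minimal sketch of the ΔT bookkeeping described above, computing the injured-minus-uninjured temperature difference and checking whether it enlarges over follow-up; the readings are illustrative, not patient data.

```python
# Minimal sketch of the delta-T grouping described above; values are
# illustrative readings, not patient data from the study.

def delta_t(injured_c: float, uninjured_c: float) -> float:
    """Temperature difference (injured minus uninjured) in degrees Celsius."""
    return injured_c - uninjured_c

# Hypothetical day-0 and day-5 readings for one salvaged limb.
day0 = delta_t(injured_c=30.1, uninjured_c=33.7)   # about -3.6 C
day5 = delta_t(injured_c=31.0, uninjured_c=33.6)   # about -2.6 C
trend = "enlarging" if abs(day5) > abs(day0) else "not enlarging"
print(f"day 0 dT = {day0:.1f} C, day 5 dT = {day5:.1f} C ({trend})")
```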
Background: The early detection of pre-symptomatic individuals and the proactive implementation of health guidance through regular primary care visits are essential strategies for the secondary and tertiary prevention of diabetic complications. An interdisciplinary team approach significantly enhances the care of patients with diabetes, integrating the expertise of physicians, dietitians, clinical navigators, pharmacists, and mental health professionals. Central to this collaborative model is the active participation of patients, who play a vital role in managing their health outcomes. This integrated approach facilitates comprehensive care, promoting better health management and improved quality of life for individuals with diabetes. Objective: We aimed to evaluate the association among regular primary care visits, hemoglobin A1C (HbA1C), and low-density lipoprotein (LDL) levels in patients with type 2 diabetes mellitus. Methods: We randomly sampled data from 200 patients’ electronic medical records. Mann–Whitney and chi-square tests were used to investigate the association between glycemic control, lipid profile, and the number of patient visits. Results: The mean age of the participants was 61.78 years and the average body mass index was 34.5 kg/m2. Females constituted 61.79% of participants. The predominant race seen at the clinic was Black (43.8%), followed by White (42.69%). Patient adherence to scheduled visits was not statistically significantly associated with either HbA1C or LDL (chi-square = 1.1, p-value = 0.29 for HbA1c and chi-square = 1.12, p-value = 0.99 for LDL). Conclusions: In the sample studied, no statistically significant association existed between adherence to primary care visits and either HbA1C or LDL levels. These data can guide physicians to invest in high-quality primary care contact rather than a high frequency of visits. Clinical Trial: IRB approved
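To illustrate the tests named in the Methods, here is a minimal sketch applying a Mann–Whitney U test and a chi-square test of independence to synthetic HbA1c values and a hypothetical adherence-by-control contingency table; neither the variables nor the counts come from the study.

```python
# Minimal sketch of the Mann-Whitney U and chi-square tests named above,
# applied to synthetic HbA1c values and a hypothetical contingency table.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
hba1c_adherent = rng.normal(7.4, 1.2, size=100)      # synthetic values
hba1c_nonadherent = rng.normal(7.6, 1.3, size=100)
print("Mann-Whitney p =",
      round(mannwhitneyu(hba1c_adherent, hba1c_nonadherent).pvalue, 3))

# Hypothetical 2x2 table: rows = adherent / non-adherent,
# columns = HbA1c controlled / not controlled.
table = np.array([[55, 45],
                  [48, 52]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```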
Background: The increasing reliance on the internet for health information necessitates understanding various factors influencing health information-seeking behaviors and satisfaction levels among users. These insights can inform strategies to improve the quality and accessibility of health information. Objective: This study aimed to investigate the socio-demographic factors affecting internet health information-seeking behaviors, the types of health information sought, the timeliness and trust associated with information sources, and user satisfaction regarding online health information. Methods: A quantitative cross-sectional survey was conducted among 376 participants, utilizing structured questionnaires to collect data on various aspects of health information-seeking behavior. Statistical analyses, including Chi-square tests and frequency distributions, were performed to evaluate the relationships between socio-demographic factors and health information-seeking behaviors. Results: The findings revealed significant associations between the duration of teaching, health insurance status, estimated income, and the duration of employment with health information-seeking behaviors (p < 0.05). The most sought-after health information types included specific medical conditions and treatment methods. Satisfaction levels varied across categories, with a majority of respondents expressing positive sentiments toward online searches, website information sources, and the usefulness of the information received. Conclusions: The study underscores the importance of socio-demographic factors in shaping health information-seeking behaviors and highlights the need for improved credibility and trust in online health information sources. Stakeholders in health communication should prioritize the development of reliable online health information platforms and enhance user education on navigating these resources effectively. This study contributes valuable insights into the dynamics of health information-seeking behaviors, emphasizing the critical role of socio-demographic factors and the need for high-quality, trustworthy health information in promoting informed health decisions.
Background: The American Civil War has been commemorated with a great variety of monuments, memorials, and markers. These monuments were erected for a variety of reasons, beginning with memorialization of the fallen and later to honor aging veterans, commemorate significant anniversaries associated with the conflict, memorialize sites of conflict, and celebrate the actions of military leaders. Sources reveal that during both the Jim Crow and Civil Rights eras, many were erected as part of an organized propaganda campaign to terrorize African American communities and distort the past by promoting a ‘Lost Cause’ narrative. Through subsequent decades, to this day, complex and emotional narratives have surrounded the interpretive legacies of the Civil War. Instruments of commemoration, through both physical and digital intervention approaches, can be provocative and instructive, as the country deals with a slavery legacy and the commemorated objects and spaces surrounding Confederate inheritances. Today, all of these potential factors and outcomes, with international relevance, are surrounded by swirls of social and political contention and controversy, including the remembering/forgetting dichotomies of cultural heritage. The modern dilemma turns on the question: In today’s new era of social justice, are these monuments primarily symbols of oppression, or can we see them, in select cases, alternatively as sites of conscience and reflection encompassing more inclusive conversations about commemoration? What we save or destroy and assign as the ultimate public value of these monuments rests with how we answer this question. Objective: I describe monuments as symbols in the “Lost Cause” narrative and their place in enduring Confederate legacies. I make the case, and offer documented examples, that remnants of the monuments, such as the “decorated” pedestals, if not the original towering statues themselves, should be left in place as sites of reflection that can be socially useful in public interpretation as disruptions of space, creating disturbances of vision that can be provocative and didactic. I argue that we should see at least some of them as sculptural works of art that invite interpretations of aesthetic and artistic value. I point out how, today, these internationally relevant factors and outcomes of retention vs. removal are engulfed in swirls of social and political contention and controversy within processes of remembering and forgetting and changing public dialogues. Methods: This article addresses several elements within the purview of the Journal: questions of contemporary society, diversity of opinion, recognition of complexity, subject matter of interest to non-specialists, international relevancy, and history. Drawing from the testimony of scholars and artists, I address the contemporary conceptual landscape of approaches to the presentation and evolving participatory narratives of Confederate monuments, ranging from absolute expungement and removal to more restrained responses such as in situ re-contextualization, removal to museums, and preservation-in-place. In a new era of social justice surrounding the aftermath of dramatic events such as the 2015 Charleston shooting, the 2017 Charlottesville riot, and the murder of George Floyd, should we see them as symbols of oppression, inviting expungement, or selectively as sites of conscience and reflection, inviting various forms of re-interpretation of tangible and intangible relationships? Results: I argue that we should see at least some of these monuments as sculptural works of art that invite interpretations of aesthetic and artistic value, and that the internationally relevant outcomes of retention vs. removal are engulfed in social and political contention within processes of remembering, forgetting, and changing public dialogues. Conclusions: The modern dilemma turns on the question: In today’s new era of social justice, are these monuments primarily symbols of oppression, or can we see them, in select cases, alternatively as sites of conscience and reflection encompassing more inclusive conversations about commemoration? What we save or destroy and assign as the ultimate public value of these monuments rests with how we answer this question.
Background: The COVID-19 pandemic has caused a large number of infections and fatalities, causing administrations at various levels to limit public mobility. This paper analyzes the complex association between the stringency of restrictions, public mobility, and the reproduction rate (R-value) at the national level for Germany. Objective: The goals were to analyze (a) the correlation between government restrictions and public mobility and (b) the association between public mobility and virus reproduction. Methods: In addition to correlations, a Gaussian Process Regression technique is used to fit the interaction between mobility and the R-value. Results: The main findings are that (i) government restrictions have a high association with reduced public mobility, especially for non-food stores and public transport, and (ii) of the six measured public mobility categories, retail, recreation, and transit station activities have the most significant impact on COVID-19 reproduction rates. Conclusions: A mobility reduction of 30% is required to have a critical negative impact on case number dynamics, preventing further spread.
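A minimal sketch of the Gaussian Process Regression step described above: fitting the R-value against a single mobility-reduction variable and reading off predictions with uncertainty. The data, kernel choice, and single-predictor setup are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of Gaussian Process Regression relating mobility reduction
# to the R-value; the data and kernel are illustrative, not the paper's.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
mobility_reduction = rng.uniform(0, 60, size=80).reshape(-1, 1)   # percent
r_value = 1.3 - 0.012 * mobility_reduction.ravel() + rng.normal(0, 0.05, 80)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(mobility_reduction, r_value)

grid = np.linspace(0, 60, 7).reshape(-1, 1)
mean, std = gpr.predict(grid, return_std=True)
for x, m, s in zip(grid.ravel(), mean, std):
    print(f"mobility -{x:>4.0f}%: predicted R = {m:.2f} +/- {s:.2f}")
```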