JMIR Preprints
A preprint server for pre-publication/pre-peer-review preprints as well as ahead-of-print (accepted) manuscripts
Journal Description
Welcome to JMIR's own preprint server. It includes preprints from JMIR authors who have opted-in to preprinting their article when submitting, and preprints from non-JMIR authors.
JMIR Preprints is a preprint server and "manuscript marketplace" hosting manuscripts intended for community review. Great manuscripts may be snatched up by participating journals, which will make offers for publication. There are two pathways for manuscripts to appear here: 1) submission to a JMIR or partner journal, where the author has checked the "open peer-review" checkbox; 2) direct submission to the preprint server.
For the latter, there is no editor assigning peer reviewers, so authors are encouraged to nominate as many reviewers as possible and to enable the "open peer-review" setting. Nominated peer reviewers should be at arm's length. It also helps to tweet about your submission or post it on your homepage.
For pathway 2, once a sufficient number of reviews has been received (and they are reasonably positive), the manuscript and peer-review reports may be transferred to a partner journal (e.g. JMIR, i-JMR, JMIR Res Protoc, or other journals from participating publishers), whose editor may offer formal publication if the peer-review reports are addressed. The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
For pathway 2, if authors do not wish to have the preprint considered by a partner journal (or by a specific journal), this should be noted in the cover letter. Likewise, if you want the paper considered/forwarded only to specific journals (e.g., JMIR, PLOS, PeerJ, BMJ Open, Nature Communications), please specify this in the cover letter.
Manuscripts can be in any format; however, an abstract is required in all cases. We highly recommend formatting the references in JMIR style (including a PMID), as our system will then automatically assign reviewers based on the references.
Background: Current acute stroke management guidelines focus primarily on time-based imaging windows and the pharmacological suppression of acute hypertension. This paper proposes an alternative paradigm based on a first-principles theoretical derivation of hydraulic states. Objective: The primary goal of this framework is to establish the physiological feasibility of a "Stability Corridor" for Cerebral Perfusion Pressure (CPP) to maximize neuronal salvage. Methods: The methodology utilizes a first-principles biophysical derivation incorporating the Monro-Kellie Doctrine and Laplace's Law. It modifies the Cushing Reflex sequence, framing the terminal rise in intracranial pressure (ICP) as a result of a systemic blood pressure spike driven by ischaemic vasoparalysis. Results: The derivation identifies two phases of hydraulic failure: a "Masked Influx," during which the extracellular space (ECS) buffer (α = 0.05) is exhausted, followed by a "Terminal Spike" in ICP. It establishes a "Stability Corridor" by identifying the Ischaemic Floor for collateral flow and the Elastic Limit to prevent vascular tearing. Conclusions: By modulating ICP to keep CPP within the Stability Corridor and using GFAP biomarkers as proxies for hydraulic integrity, clinicians can theoretically maintain cerebral perfusion and prevent the "Hydraulic Breach" of macro-haemorrhage. Clinical Trial: N/A (Theoretical Paper)
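The "Stability Corridor" above is defined over cerebral perfusion pressure. As a minimal sketch using the standard relation CPP = MAP − ICP: the corridor bounds below are hypothetical placeholders, not thresholds taken from the paper.

```python
# Hedged sketch: CPP = MAP - ICP is the standard definition of cerebral
# perfusion pressure (all values in mmHg). The "Ischaemic Floor" and
# "Elastic Limit" bounds here are illustrative defaults, not the paper's.

def cpp(map_mmhg: float, icp_mmhg: float) -> float:
    """Cerebral perfusion pressure = mean arterial pressure - ICP."""
    return map_mmhg - icp_mmhg

def in_stability_corridor(cpp_mmhg: float,
                          ischaemic_floor: float = 50.0,
                          elastic_limit: float = 90.0) -> bool:
    """True if CPP sits between the hypothetical corridor bounds."""
    return ischaemic_floor <= cpp_mmhg <= elastic_limit

print(cpp(95, 20))                         # 75
print(in_stability_corridor(cpp(95, 20)))  # True
print(in_stability_corridor(cpp(95, 55)))  # False (below the floor)
```

The framework's proposal to "modulate ICP" amounts to choosing `icp_mmhg` so that the computed CPP stays inside this interval.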
Background: Artificial intelligence (AI) is increasingly shaping health care, yet the AI preparedness of midwifery students remains underdocumented. Evidence is needed to inform midwifery-specific curriculum development and to clarify how students understand and operationalize AI in training and placements. Objective: This mixed-methods study aimed to assess French midwifery students’ AI readiness, training needs, and ethical/regulatory concerns. Methods: We conducted a national sequential explanatory mixed-methods study during the 2024–2025 academic year. A web-based survey (five previously translated/adapted questionnaires) was disseminated via midwifery schools/universities in France (30/33, 91%, of institutions confirmed dissemination, and responses were received from all 30). Eligible participants were students enrolled in years 2–5 of the French midwifery curriculum. We computed mean theme scores (1–5) with 95% confidence intervals (CIs) and assessed internal consistency using Cronbach α. Analyses were restricted to fully completed questionnaires. Semi-structured interviews were conducted with volunteer students from one midwifery school in Eastern France (n=8), transcribed verbatim, anonymized, and analyzed using thematic analysis. Mixed-methods integration used a joint display. Results: Of 414 survey entries, 190 were fully completed and kept for analysis (190/414, 46.1%). Mean theme scores for AI skills and knowledge were below the neutral midpoint, ranging from 1.20 (95% CI 1.09–1.23) for familiarity with advanced AI techniques to 2.89 (95% CI 2.48–3.31) for analytical concepts in AI for health. Perceived ability to use AI for clinical purposes was low (2.05, 95% CI 1.01–3.09). In contrast, students strongly endorsed AI education (belief that students and professionals should be trained: 4.11, 95% CI 3.96–4.26) and emphasized evidence and safety requirements (up to 4.04, 95% CI 3.84–4.24). 
Item-level results suggested “AI label ambiguity”: general AI familiarity showed higher agreement (91/190, 47.9%) than familiarity with specific concepts such as machine learning (20/190, 10.5%) or deep learning (13/190, 6.8%). Interviews aligned with these patterns, indicating rare exposure to explicitly identified AI-supported workflows in placements and describing mainly academic and informal uses of generative tools. Participants emphasized patient safety, accountability, and preservation of human judgment. Conclusions: French midwifery students report a substantial AI readiness gap characterized by both low technical preparedness and limited situated exposure during placements, despite strong demand for training and high salience of safety and governance. Findings support implementing a structured, progressive curriculum linked to midwifery-relevant clinical scenarios and aligned with placement ecosystems. Future measurement should explicitly distinguish generative AI practices from regulated clinical AI systems and capture safe-use behaviors to improve construct validity.
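The internal-consistency measure this abstract reports, Cronbach α, can be sketched from its standard formula, α = m/(m−1) · (1 − Σ var(item) / var(total)); the toy response matrix below is illustrative only, not the study's data.

```python
# Hedged sketch of Cronbach's alpha. items[i][r] is respondent r's score
# on item i; the tiny matrix below is made up for illustration.

def cronbach_alpha(items: list[list[float]]) -> float:
    m = len(items)        # number of items
    n = len(items[0])     # number of respondents

    def var(xs: list[float]) -> float:
        # population variance
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(items[i][r] for i in range(m)) for r in range(n)]
    return m / (m - 1) * (1 - item_var_sum / var(totals))

# Two perfectly correlated items -> alpha = 1
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

In practice a library routine (e.g. from a psychometrics package) would be used on the full survey matrix; the point here is only the shape of the computation.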
Background: The efficacies of various teaching methods in improving pharmacy students’ qualities and skills remain unclear. Objective: We aimed to compare and rank teaching methods by quantifying information from randomized controlled trials. Methods: A systematic literature search of PubMed, Web of Science, China National Knowledge Infrastructure, and Chinese Wanfang Database was performed from the date of inception of the databases to November 2025. Our primary outcomes were the proportion of satisfaction and core competencies measured with survey questionnaires. Results: A total of 144 trials comprising 18,793 students allocated to one of 35 teaching methods were included. Problem-based learning combined with web-based learning (PBL+WBL) had the highest probability of being the best teaching method in improving the proportion of satisfaction (the surface under the cumulative ranking curve [SUCRA]=84.19%) and mastery of knowledge (SUCRA=77.87%) of pharmacy students. Problem-based learning combined with scenario simulation (PBL+SS) was most likely to enhance learning interest (SUCRA=98.39%) and teamwork ability (SUCRA=99.17%) in pharmacy education. In addition, team-based learning (TBL) was the most efficacious in increasing self-learning ability (SUCRA=81.46%), problem-solving ability (SUCRA=94.98%), and theoretical scores (SUCRA=91.92%). Conclusions: Our study indicates that emerging teaching methods represented by TBL, PBL+WBL, and PBL+SS are more effective in pharmacy education. Nevertheless, potential publication bias deserves careful consideration. Future research should use larger sample sizes and more rigorous methods to support these findings.
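The SUCRA values cited above summarize a treatment's ranking distribution in a network meta-analysis: the statistic averages the cumulative probabilities of being among the best j of a treatments, for j = 1 … a−1. A minimal sketch (the rank probabilities below are made up, not from the review):

```python
# Hedged sketch of the SUCRA statistic. rank_probs[j] is the probability
# that the treatment holds rank j+1 (rank 1 = best); probabilities are
# illustrative only.

def sucra(rank_probs: list[float]) -> float:
    a = len(rank_probs)  # number of treatments in the network
    cum = 0.0            # P(among best j treatments)
    total = 0.0
    for j in range(a - 1):
        cum += rank_probs[j]
        total += cum
    return total / (a - 1)

# A treatment certain to rank first scores 1; certain to rank last scores 0.
print(sucra([1.0, 0.0, 0.0]))  # 1.0
print(sucra([0.0, 0.0, 1.0]))  # 0.0
```

A SUCRA of 84.19% therefore means the method's ranking distribution sits close to the "always best" end of this 0–1 scale.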
Background: Scaling lung cancer screening from controlled trials to nationwide implementation requires interoperable digital infrastructure capable of coordinating primary care, radiology, pulmonology, and centralized governance. Although low-dose computed tomography (LDCT) reduces lung cancer mortality in high-risk populations, few countries have embedded screening programs directly within national health information systems to enable standardized workflows, real-time monitoring, and data-driven quality control. Objective: To describe the digital architecture, interoperability framework, and real-world performance of the Croatian National Lung Cancer Screening Program (CNLCSP), implemented as a native extension of the Central Health Information System of the Republic of Croatia (CEZIH). Methods: This retrospective observational implementation study analyzed structured program data collected between October 2020 and December 2025. The CNLCSP targets individuals aged 50–75 years with ≥30 pack-years of smoking history who are current smokers or former smokers who quit within 15 years. The program operates entirely within CEZIH through role-specific modules for general practitioners (GPs), radiologists, pulmonologists, and national coordinators. Core digital functionalities include electronic eligibility verification, paperless referral and scheduling, structured radiology and pulmonology reporting based on modified I-ELCAP guidelines, AI-assisted volumetric nodule analysis integrated into the reporting workflow with mandatory radiologist second reading, secure DICOM-based telemedicine image transfer, and a centralized analytics module providing real-time dashboards of predefined quality indicators, including radiation dose metrics. Results: From October 2020 to December 2025, over 54,000 individuals were screened, generating more than 80,000 LDCT examinations across 27 radiology centers and 6 pulmonology centers, involving more than 2,000 GPs. 
Positive radiological findings were reported in 4.45% of examinations. Continuous digital monitoring supported a mean effective radiation dose of 0.85 mSv, below the program limit of 1.5 mSv. The interoperable CEZIH-based infrastructure enabled expansion from 16 to 27 radiology centers while maintaining standardized reporting and centralized oversight. Conclusions: Embedding lung cancer screening as a native component of a national health information system enables scalable implementation, structured data capture, AI-supported clinical workflows with human oversight, and real-time governance. The Croatian model illustrates how digital integration within existing health infrastructure can support population-level screening and may serve as a transferable informatics framework for other health systems.
Background: Short message service (SMS) reminders are among the most widely implemented mobile health (mHealth) interventions used to improve outpatient appointment adherence and optimize healthcare service delivery. Although substantial evidence demonstrates their effectiveness in reducing missed appointments, limited research has examined how the design and framing of SMS reminder messages are evaluated within healthcare systems, particularly by personnel responsible for implementing these communication tools in routine clinical workflows. Objective: This study investigated healthcare service personnel’s preferences for SMS appointment reminder content and examined structural and experiential factors associated with message evaluation in a high-volume outpatient care system. Methods: A cross-sectional questionnaire study was conducted at a tertiary medical center in Taiwan with more than 1.2 million outpatient visits annually. Healthcare personnel involved in appointment coordination, diagnostic scheduling, and telehealth follow-up workflows evaluated six SMS reminder message formats representing distinct communication framings. Among 410 distributed questionnaires, 356 responses were returned and 322 complete responses were included in the analysis. Descriptive statistics, bivariate analyses, and multivariable logistic regression models were performed. To control for multiple comparisons, the Benjamini–Hochberg false discovery rate procedure was applied. Results: Informational SMS reminders were the most preferred format (52.2%), followed by reminders referencing prior missed appointment behavior (25.8%). Empathy-oriented messages were substantially less preferred, including patient-focused (7.6%) and provider-focused (5.3%) approaches, while physician follow-up (4.8%) and supportive reminders (4.2%) were least selected. 
After false discovery rate adjustment, age, education level, occupational category, and outpatient visit frequency remained significant in bivariate analyses. In multivariable models, age emerged as the only consistent predictor of SMS preference across specifications. Healthcare utilization indicators, including prior missed appointments, were not stable predictors. Model explanatory power was modest (Nagelkerke R²≈0.02–0.12). Conclusions: Preferences for SMS reminder message design among healthcare personnel are heterogeneous and only partially explained by observable structural characteristics. These findings suggest that SMS reminder systems should be conceptualized not merely as behavioral tools but as implementation-embedded communication processes within healthcare delivery systems. Optimizing SMS reminder strategies may therefore require context-sensitive message framing and workflow-compatible communication design in high-volume outpatient environments. Clinical Trial: Not applicable
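The Benjamini–Hochberg procedure this study applied controls the false discovery rate by comparing the ordered p-values against linearly increasing thresholds k/m·q and rejecting all hypotheses up to the largest k that passes. A minimal sketch with illustrative p-values (not the study's):

```python
# Hedged sketch of the Benjamini-Hochberg step-up procedure.
# p-values below are made up; in practice one would use a library
# implementation on the study's actual test results.

def benjamini_hochberg(pvals: list[float], q: float = 0.05) -> list[bool]:
    """Return a reject (True) / retain (False) flag per p-value at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # find the largest k (1-indexed) with p_(k) <= (k / m) * q
    k_max = 0
    for k, i in enumerate(order, start=1):
        if pvals[i] <= k / m * q:
            k_max = k
    # step-up: reject every hypothesis ranked at or below k_max
    reject = [False] * m
    for k, i in enumerate(order, start=1):
        if k <= k_max:
            reject[i] = True
    return reject

print(benjamini_hochberg([0.001, 0.04, 0.03, 0.9]))
```

Note the step-up property: a p-value that fails its own threshold is still rejected if a larger-ranked one passes, which is what makes BH less conservative than Bonferroni.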
Background: Conventional heart failure (HF) management is challenged by high loss to follow-up, fragmented care, and insufficient multidisciplinary team (MDT) collaboration, contributing to a 30% readmission rate during the vulnerable post-discharge period. While the integration of remote monitoring and telehealth signals a paradigm shift towards proactive intervention, the effectiveness of a nurse-led, mHealth-based multidisciplinary model in this critical phase requires further validation. Objective: This randomized controlled trial evaluated a nurse-led, app-based multidisciplinary telemanagement program for improving self-care, symptoms, and clinical outcomes in vulnerable-phase HF patients. Methods: A single-blind, randomized controlled trial was conducted. A total of 100 heart failure patients (left ventricular ejection fraction ≤50%) from a tertiary hospital in Beijing were randomly assigned to either an intervention group (n=50) or a control group (n=50). The intervention group received a 3-month, nurse-led, multidisciplinary telemanagement program via a cardiovascular health management app. This program included structured education, personalized care plans (medication, self-monitoring, follow-up), automated reminders, and proactive monitoring. A core component was the nurse-coordinated multidisciplinary case discussion (involving doctors, pharmacists, and nurses) triggered by abnormal patient data. The control group received routine heart failure outpatient follow-up. The Self-Care of Heart Failure Index (SCHFI), the Memorial Symptom Assessment Scale-Heart Failure (MSAS-HF), B-type natriuretic peptide (BNP) levels, and NYHA functional class were assessed at baseline and 3 months. 
Results: After the 3-month intervention, the intervention group demonstrated significantly greater improvements compared to the control group in the SCHFI total score and its three subscales (self-care maintenance, management, and confidence), the MSAS-HF total score and its subscales (physical, psychological, and heart failure-specific symptoms), and BNP levels (t=2.302 to 3.953, -2.204 to -2.841, Z=-3.354, P < 0.05). Moreover, a significantly higher proportion of patients in the intervention group achieved NYHA class I (84.0% vs. 66.0%; χ²=4.320, P < 0.05). Conclusions: This nurse-led, mHealth-facilitated multidisciplinary telemanagement program led to significant improvements in self-care, symptom burden, NYHA functional class, and BNP among patients with heart failure during the vulnerable post-discharge period. By demonstrating these benefits, the model effectively overcomes critical limitations inherent in traditional post-discharge management approaches.
Background: Automated systems for detecting adverse drug reactions (ADRs) are increasingly common and carry high expectations from policymakers, researchers, healthcare professionals, and patients, yet evidence of their effectiveness and safety remains limited. Objective: The aim of this systematic review was to identify the ethical, legal, organizational, social, and environmental implications of these systems. Methods: Using the VALIDATE framework, we conducted a three-step approach: (1) defining scope through literature review and stakeholder consultation; (2) systematic review; (3) environmental inquiries. Results: Stakeholders prioritized research on feasibility, barriers, facilitators, alarm management, staged implementation, confidentiality, cybersecurity, and bias detection. The systematic review of ten studies revealed that leveraging new data sources and developing privacy-protection technologies are essential for upholding ethical and legal standards. Cybersecurity risks could expose patient information to unauthorized parties, while biases in training datasets can compromise fairness. Integrating ADR detection into clinical workflows and medication management systems can improve resource optimization and reporting rates. Establishing a positive reporting culture, supported by education and training for healthcare teams, is crucial to enhance ADR reporting. Conclusions: Careful planning is critical when implementing an early ADR detection system. Incorporating co-design methodologies can help align these automated systems with stakeholder needs and improve medication safety. Clinical Trial: Not required
Background: Bangladeshi adolescents, who constitute a fifth of the country's population, experience barriers in accessing sexual and reproductive health (SRH) information. Previous studies have shown that mobile health (mHealth) interventions provide adolescents with timely access to evidence-based curricula and gamified, interactive content, sessions, and information. The widespread adoption of mHealth technologies among adolescents and their willingness to embrace emerging technologies are encouraging specialists to employ mHealth approaches to share health information. Despite the high mobile phone usage among adolescents in Bangladesh, there are few mHealth interventions specifically targeting their SRH needs. Objective: We aimed to assess changes in SRH knowledge and awareness among adolescents in Bangladesh following exposure to "Mukhorito", an interactive mobile app-based intervention. Methods: This pilot study employing a pre-post non-randomized experimental approach was conducted in three selected secondary schools in Feni, Bangladesh, from June 2023 to March 2024. A total of 46 students from class 9 across the three schools were recruited, with a minimum of 10 per school. Bivariate analyses were performed to assess the association between SRH knowledge and awareness scores with other covariates. Significantly associated covariates for both scores were used in building the adjusted linear regression models. Results: The adjusted models indicated a significant improvement in the end-line group compared with the baseline group for both knowledge (1.2 units; 95% CI: 0.8-1.6 units) and awareness scores (1.0 units; 95% CI: 0.3-1.5 units), indicating a substantial intervention effect. Conclusions: These findings demonstrate the potential of mobile app-based innovations to improve adolescent SRH education within a national program in resource-constrained settings, especially where conventional methods may be less effective.
Background: The integration of artificial intelligence (AI) into intraoperative surgical imaging represents an emerging frontier in digital health. Despite advances in preoperative computed tomography (CT)–based surgical planning, real-time translation of imaging data into actionable intraoperative guidance remains limited by CT-to-body divergence—a fundamental information gap between preoperative digital models and the dynamic surgical field. This divergence, driven by lung deflation under anesthesia and positional changes, represents a critical digital-to-physical registration challenge that current preoperative imaging workflows fail to address in real time. This study evaluated the performance and safety of the LungVision system, a portable AI-driven digital platform that integrates preoperative CT data with real-time fluoroscopic image fusion, for intraoperative tumor localization during thoracoscopic lung resection. Objective: This study aimed to evaluate the clinical feasibility, localization accuracy, and safety of the LungVision system—an AI-augmented fluoroscopic navigation platform—for real-time intraoperative localization of small pulmonary nodules during thoracoscopic surgery. Methods: A prospective single-center study enrolled fourteen patients with pulmonary nodules requiring localization prior to thoracoscopic resection between March and September 2024. The platform comprises a passive radiopaque positioning board, an AI-powered computing unit for real-time image processing, and a tablet-based interface for procedural planning and augmented visualization. All patients received dual localization with either preoperative CT-guided dye injection or Archimedes virtual bronchoscopic navigation, followed by intraoperative localization with the LungVision system and video-assisted thoracoscopic surgery. Demographic data, lesion characteristics, procedural performance, and procedure-related complications were collected. 
Results: The mean patient age was 57.2 years, and 92.9% were non-smokers. Most nodules were peripherally located (85.7%), with a mean diameter of 9.3 ± 5.3 mm and a mean CT attenuation of −320.1 Hounsfield units. LungVision successfully localized all target lesions intraoperatively, with a mean navigation time of 38.6 minutes. Complete resection was achieved in all cases, and 71.4% of nodules were pathologically malignant. No intraoperative or localization-related complications were observed. The system was integrated into the existing operating room without additional infrastructure modifications. Conclusions: The LungVision system demonstrated high accuracy and safety for intraoperative localization of small, hypodense pulmonary nodules. By minimizing CT-to-body divergence and integrating seamlessly into existing bronchoscopic and surgical workflows, this AI-driven platform represents a scalable and infrastructure-light alternative to conventional localization strategies, warranting further evaluation for broader clinical implementation.
Background: Inequitable and time-consuming shift scheduling contributes to nurse burnout, dissatisfaction, and turnover. In Taiwan, annual nurse turnover exceeds 11%, and rigid 3-shift systems combined with perceived unfairness in workload distribution are frequently cited concerns. Although AI scheduling tools exist, most lack transparency and do not adequately address nurses’ concerns about fairness and trust, limiting their adoption in practice. Objective: This study aimed to develop and evaluate a transparent, nurse-centered scheduling decision support system designed to reduce administrative burden, improve workload equity, and enhance staff acceptance in routine clinical settings. Methods: We conducted a pragmatic before-and-after implementation study at a 677-bed teaching hospital in Taiwan, involving 8 nursing departments and 156 nurses. A 6-month manual scheduling period was compared with a 6-month period using the new AI scheduling system. The system supported nurse managers by providing predictive workload insights, transparent explanations for scheduling decisions, and real-time equity monitoring. Outcomes included scheduling time, scheduling errors, workload variation, preference satisfaction, and user acceptance. Statistical analyses included linear mixed-effects and generalized estimating models. Results: Implementation reduced monthly scheduling time by 81.2% (32.0±8.0 to 6.0±2.0 hours; p<.001) and decreased scheduling errors by 73.8% (18.3% to 4.8%; p<.001). Nurse satisfaction increased significantly (3.2±0.8 to 4.4±0.6; p<.001), and routine adoption reached 94% by Month 3. Workload distribution became substantially more equitable, with reduced variation in shift allocation and elimination of experience-related disparities. Preference satisfaction was evenly distributed across staff levels. Greater engagement with schedule explanations was associated with higher satisfaction (r=0.456; p<.001). 
Conclusions: A transparent and fairness-oriented scheduling system can meaningfully reduce managerial workload, enhance perceived equity, and improve nurse acceptance in real-world practice. These findings suggest that explainable AI tools may support nurse well-being and promote more sustainable workforce management in hospital settings.
Background: Current AI interventions in mental health position LLMs to act as therapists, raising concerns regarding simulated emotional bonds and clinical safety. These systems risk patients becoming dependent on the tool, instead of fostering their own therapeutic skills for long-term recovery. Objective: This paper explores design considerations for adapting commonly-used LLMs (e.g., Gemini, ChatGPT, Llama) for clinical use, using them as a skill-building tool rather than a replacement for therapists. Methods: Guided by educational theories, we developed a dual-persona chatbot. The first persona is a distressed character with cognitive distortions; the second persona is a facilitator that provides the user with scaffolding and instructions to navigate the interaction safely and successfully. Users are tasked to “help” the first persona, with the aid of the second persona, by identifying and restructuring its cognitive distortions. Through a process involving initial testing, establishing personas, and ensuring fidelity/safety, we developed three versions of the system. Four raters with varying clinical expertise assessed simulated interactions across four domains: Character Fidelity, Effective Facilitation, Boundary Management, and Overall Utility. Results: Inter-rater reliability among the raters was high (ICC=0.76). The final version of the system was rated as effective in terms of character fidelity, learning facilitation, and clinical boundaries. The largest improvement across versions was in the construction of an effective and safe learning environment (F2,61=42.11, P<.001 for instruction clarity; F2,32=12.44, P<.001 for handling clinical risk), while character fidelity was rated highly across versions with little variation. The raters agreed that the tool is helpful for users to consolidate the skill of cognitive restructuring. 
Conclusions: By shifting the AI’s role from a source of emotional support to a subject for practice, this system encourages the user to engage in the practice to “be their own therapist”. Our findings provide a generalizable roadmap for integrating commercial AI into clinical workflows as a secure, skill-based supplement to human-led therapy.
Verbal feedback delivered by attending surgeons in the operating room plays a critical formative role in resident trainee skill acquisition. Yet, assessing the quality of trainer feedback and its effectiveness in influencing trainee behavior during live surgery remains a challenge. Prior studies assessed feedback content relying on extensive manual annotation by expert human raters and focused on developing broad taxonomies that overlook the qualitative aspects of feedback delivery such as clarity or urgency. Limited existing automated methods, including keyword analysis and topic modeling, also fail to capture these nuanced aspects. We introduce a two-stage LLM-based framework that discovers interpretable feedback quality criteria grounded in the context of surgical training. Our method uses multi-agent prompting and surgical domain knowledge injection to discover a small set of human interpretable scoring criteria (e.g., Encouraging, Urgent, Clear). These criteria are then used to automatically score live surgical feedback via an LLM-as-a-judge approach. Evaluation on 4.2k trainer feedback instances demonstrates that our AI-discovered criteria outperform prior content-based frameworks in predicting feedback effectiveness, including observed trainee behavioral adjustments and trainer approval. This work advances scalable, human-aligned assessment of communication quality in the operating room and provides a foundation for improving surgical teaching practices.
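The second stage of the framework above (LLM-as-a-judge scoring against discovered criteria) might look like the following sketch. The criteria names come from the abstract; the `judge_llm` stub and the prompt wording are assumptions, not the authors' code.

```python
# Hedged sketch of LLM-as-a-judge scoring of surgical feedback utterances.
# judge_llm is a stub; a real system would call an LLM and parse a rating.

CRITERIA = ["Encouraging", "Urgent", "Clear"]  # criteria named in the abstract

def judge_llm(prompt: str) -> int:
    """Stub returning a 1-5 rating; replace with a real LLM call + parser."""
    return 3

def score_feedback(utterance: str) -> dict[str, int]:
    """Score one trainer-feedback utterance against each criterion."""
    return {
        c: judge_llm(f"Rate 1-5 how {c.lower()} this OR feedback is: {utterance!r}")
        for c in CRITERIA
    }

scores = score_feedback("Careful - stay close to the duct, nice and slow.")
```

Per-criterion scores can then be regressed against observed trainee behavioral adjustments to test predictive value, as the evaluation on 4.2k instances does.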
Background: Mechanical chronic low back pain is a common musculoskeletal condition that significantly affects daily function, work productivity, and quality of life. Routine physical therapy is widely used for its management; however, interest has grown in adjunct approaches such as breathing exercises due to their potential role in pain modulation, trunk stability, and functional improvement. Objective: This study aimed to compare the effects of routine physical therapy with and without breathing exercises on pain intensity, lumbar range of motion, functional disability, and muscle endurance in patients with mechanical chronic low back pain. Methods: A single-blinded randomized controlled trial was conducted on 132 patients with mechanical chronic low back pain, who were randomly allocated into two equal groups. Group A received routine physical therapy, while Group B received routine physical therapy combined with breathing exercises for four weeks (12 sessions). Outcomes including pain (VAS), lumbar range of motion, functional disability (Modified Oswestry Disability Index), muscle endurance, and FEV₁ were assessed at baseline and follow-up. Data were analyzed using SPSS version 25, applying non-parametric tests and linear mixed models, with statistical significance set at p < 0.05. Results: A total of 132 patients with mechanical chronic low back pain were analyzed. Compared with routine physical therapy alone, the addition of breathing exercises resulted in a greater reduction in pain over time (group × time effect: F = 50.6, p < 0.001). Patients receiving breathing exercises also showed significantly superior improvements in lumbar range of motion across flexion, extension, side flexion, and rotation (time and interaction effects: p < 0.001). Functional disability, assessed by the Modified Oswestry Disability Index, decreased more markedly in the breathing exercise group (mean reduction: 42.4 vs 28.3; F = 4.34, p = 0.005). 
In addition, trunk muscle endurance (anterior, posterior, and lateral plank tests) improved significantly more in patients receiving breathing exercises compared with routine therapy alone (interaction effects: F = 524–2138, p < 0.001). Conclusions: It is concluded that the addition of breathing exercises to routine physical therapy resulted in superior improvements in pain reduction, lumbar mobility, functional disability, trunk muscle endurance, and pulmonary function. Clinical Trial: IRCT Registration number = IRCT 20200901048579N1
Background: People living with advanced cancer experience more frequent and severe symptoms than people living with early-stage disease. Four common and distressing symptoms include sleep difficulties, worry-anxiety, fatigue, and depression. Cognitive-behavioral therapy (CBT) and acceptance and commitment therapy (ACT) interventions are effective for managing these symptoms but are often too time-intensive for people with multiple appointments, limited energy, and competing priorities. Brief, mobile health (mHealth) interventions provide an accessible alternative, particularly for those in rural communities with limited access to palliative and/or psychosocial oncology services. Objective: Based on our successful in-person/DVD-based pilot trial of a four-session, integrated CBT-ACT symptom management intervention for advanced cancer patients, Finding Our Center Under Stress (FOCUS), this study tests the feasibility and acceptability of an mHealth translation of this intervention. Methods: In this single-group, feasibility trial, 11 people with advanced cancer were recruited through hospital-based oncology clinics representing four cancer types (breast, melanoma, multiple myeloma, prostate). Patients completed sociodemographic questions, initial patient-reported outcomes including sleep (ISI), anxiety (GAD-7, PSWQ), fatigue (FSI), and depression (CES-D), and a 7-day sleep diary via the mobile app. They then completed four modules focused on the self-management of sleep difficulties, worry-anxiety, fatigue, and depression. To assess feasibility, we examined recruitment, retention, and module completion. At the end of six weeks, to assess acceptability, participants completed the Internet Evaluation and Utility Scale and some participants completed a qualitative interview assessing their experience with the FOCUS app. We present quantitative and qualitative results as well as lessons learned in designing the application for this patient population.
Results: Sixty-five percent entered the trial (N=11) and seventy percent completed more than half of the app. These participants gave strong ratings for FOCUS ease of use (3/4), convenience (3.7/4), utility (3.3/4), and ease of understanding (3.83/4). All participants (10/10) said they would recommend the app to other people with cancer and would return to the app with future problems. Participants’ favorite components were video recordings of other patients and the sleep and worry/uncertainty modules. Areas for improvement based on participant feedback included video quality for some components (i.e., lighting, sound), sleep diary ease of use, and a desire for professional guidance. Conclusions: The FOCUS intervention was successfully delivered via mobile technology and was feasible and acceptable per beta testing. The FOCUS mHealth app provides an evidence-based, accessible symptom management intervention for people with advanced cancer in rural communities. In accordance with participant feedback, for FOCUS 2.0 we will enhance video segments, incorporate a telehealth component to support app usage, and further develop the interactive and motivational features of the app. Future research will explore the effectiveness of this mHealth symptom management application via a randomized controlled trial.
Background: The rapid growth of digital technologies has generated large volumes of free-text data across healthcare, public health, and social research. These contain contextualised accounts of lived experience that are often absent from quantitative measures. Despite their value, these data remain underused because qualitative analysis is traditionally designed for in-depth work on smaller samples. Computational methods, including topic modelling and large language models, are increasingly promoted as efficient solutions. However, concerns persist regarding interpretability, bias, hallucinations, and loss of contextual depth. Critically, there is no established human-centred framework for evaluating the quality of machine-generated outputs for qualitative analysis. Objective: (1) To develop an AI evaluation framework for assessing machine-generated outputs; (2) to evaluate different AI approaches to textual data analysis. Methods: We developed and applied a human-centred evaluation framework, GRACE (Grounded Review and Assessment of Computational Evidence), to assess the quality of machine-generated textual outputs. GRACE was derived from established qualitative appraisal tools and operationalised four core indicators: interpretability, actionability, nuance, and redundancy, using structured scoring and reflexive consensus. We compared classic probabilistic topic modelling (LDA), a deep learning embedding-based approach (BERTopic), and three large language models (LLMs: LLaMA-3, Copilot, DeepSeek), used alone or in combination with prior structural topic modelling (STM). These were applied to the same corpus (n = 1,044 free-text responses). LLM prompting was iteratively refined, with a single-shot STM-based configuration selected for final evaluation due to reduced hallucinations. All outputs were analysed using Machine-Assisted Topic Analysis. A rapid manual thematic analysis of a 15% subsample (n = 152) served as a pragmatic comparator.
Results: Model outputs were variable, with different AI methods producing different results from the same dataset. GRACE evaluation indicated that LDA achieved the highest overall mean score (2.6/5), followed by BERTopic and topic modelling plus Copilot (2.5), topic modelling plus LLaMA-3 (2.2), and topic modelling plus DeepSeek (1.9). LDA generated broader conceptual patterns requiring interpretive refinement, while BERTopic produced narrower, more descriptive clusters with thematic overlap. LLM-only outputs were very poor. The combination of topic modelling and LLMs performed slightly better: the outputs were well structured but often superficial and repetitive. Conclusions: Computational models produced different interpretations of the same dataset, and performance did not align with technical complexity. Large language models were not suitable for thematic analysis, especially when applied to raw data, generating generalised and sometimes inaccurate outputs. Classical probabilistic modelling, particularly topic modelling combined with qualitative human analysis using the Machine-Assisted Topic Analysis (MATA) approach, provided the highest-quality results. We argue that the key issue is not whether a model “works,” but whether it supports meaningful, contextually grounded results. GRACE offers a simple, human-centred framework to support this assessment and build an evidence base for analysis of free-text data that is useful and nuanced.
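The GRACE aggregation described above (four indicators summarized as an overall mean) can be expressed in a few lines. The scores in the example are invented for illustration, not the study's data.

```python
# Illustrative GRACE-style aggregation: each method's output is scored on
# four indicators and summarized by the mean. The scoring scale and the
# example values are assumptions for exposition.

INDICATORS = ["interpretability", "actionability", "nuance", "redundancy"]

def grace_mean(scores: dict[str, float]) -> float:
    """Mean score across the four GRACE indicators; all four are required."""
    missing = set(INDICATORS) - set(scores)
    if missing:
        raise ValueError(f"missing indicators: {sorted(missing)}")
    return sum(scores[i] for i in INDICATORS) / len(INDICATORS)

example = grace_mean({
    "interpretability": 3, "actionability": 2, "nuance": 3, "redundancy": 2,
})
# example == 2.5
```

Requiring all four indicators before averaging prevents a method from being ranked on a partial assessment, which keeps method-to-method comparisons on a common footing.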
Background: Telepalliative care, the use of telehealth in palliative care, has emerged as a strategy to improve access to specialist palliative services amid growing demand, workforce shortages, and increasing digitalization of health care. Although telepalliative care has demonstrated positive outcomes for patients, families, and clinicians, its integration into standard services remains inconsistent. Existing initiatives are often operationally focused and rarely grounded in programme theory or developed collaboratively with key stakeholders, limiting sustainability and contextual alignment, particularly in Nordic health systems that emphasize home-based palliative care. Objective: This study aimed to develop a family focused model of telepalliative care for clinical practice through active involvement of key stakeholders. Methods: A co-design qualitative study grounded in interpretive description was conducted. The development followed the British Medical Research Council’s guidance for the development and evaluation of complex interventions and represents the development phase. Key stakeholders including patients, family representatives, specialized palliative care team members, community care nurses, general practitioners, voluntary representatives, IT consultants, managers, and researchers, were purposively recruited. Data were generated through four scientific workshops across two Danish sites, supplemented by participant observations of video consultations and a short questionnaire inspired by the Normalisation Measure Development (NoMAD) questionnaire. Data were analyzed using abductive thematic analysis, with qualitative and quantitative findings converged and iteratively refined through stakeholder consensus. A programme theory and logic model guided development. Results: Eighteen stakeholders participated in the workshops, with additional input from clinicians through observations (6 consultations involving 22 participants) and questionnaires (n=10). 
Findings highlighted both alignment and tension between the proposed model and current clinical practice, particularly regarding when and for whom telepalliative care should be used, clinician digital competencies, and family involvement. These findings, together with insights from previous studies, informed the primary output of the study: Pallvi – Family Focused Telepalliative Care, a comprehensive, theory-informed model comprising a structured consultation guide and two co-designed quick guides, one for health care professionals and one for patients and families. Pallvi integrates family focused care, shared decision-making, advance care planning, and the Calgary-Cambridge Communication Guide, operationalized across seven consultation phases. Conclusions: Through systematic stakeholder involvement and theory-driven development, this study produced a contextually and culturally aligned family focused model of telepalliative care. Pallvi addresses identified gaps in telepalliative care research by providing a structured, practical guide designed to support communication, family involvement, and cross-sectoral collaboration. Future research will focus on feasibility and implementation testing to assess acceptability, fidelity, and sustainability in clinical practice.
Background: Physical activity (PA), sedentary behaviour (SB), and sleep play a key role in the health and development of young people (Carson et al., 2016; Chaput et al., 2016; Poitras et al., 2016). This has led to the development of guidelines on PA, SB, and sleep for children and young people aged 5-17 years (Health, 2017; Tremblay et al., 2016). Young people aged 12-17 years should engage in at least 60 minutes of moderate-to-vigorous physical activity (MVPA) per day, mainly involving aerobic activity, plus several hours of a variety of light movement activities. Whilst the WHO guidelines only recommend limiting sitting time (specifically recreational screen time), guidelines of some countries (e.g., Canada, Australia, and New Zealand) specify limiting recreational screen time to no more than two hours per day and breaking up long periods of sitting as often as possible (Carson et al., 2017; Chaput et al., 2014; Tremblay et al., 2016). Young people are also recommended to achieve 9-11 hours of uninterrupted sleep per night (for 12-13-year-olds) or 8-10 hours of uninterrupted sleep (for 14-17-year-olds), with consistent bed and wake times (Bruni et al., 2025). A growing body of evidence shows that PA, SB, and sleep interact with one another within any 24-hour period (Chaput et al., 2020). It is therefore important to acknowledge that each of the three elements (i.e., PA, SB, and sleep) has the potential to impact the others. For example, poor sleep that is unaddressed will make PA unlikely and is likely to raise levels of SB (Tremblay et al., 2016). Several studies have indicated that young people who meet at least two of the three 24-hr movement behaviour (MB) guidelines had better cognitive function, improved mental health, and lower all-cause mortality when compared with those who did not meet any of the 24-hr MBs (Hao et al., 2024; Huang et al., 2024; Zhang et al., 2023).
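The age-banded thresholds above (60 min MVPA; no more than two hours of recreational screen time per the Canadian/Australian/NZ guidelines; 9-11 h or 8-10 h sleep by age) can be expressed as a simple adherence check. This is an illustrative sketch, not a validated scoring tool; field names are assumptions.

```python
# Sketch of a per-behaviour 24-hr MB adherence check for ages 12-17,
# using the guideline thresholds cited in the text. Illustrative only.

def meets_guidelines(age: int, mvpa_min: float, screen_min: float,
                     sleep_h: float) -> dict[str, bool]:
    """Return per-behaviour adherence for a 12-17-year-old."""
    if 12 <= age <= 13:
        sleep_ok = 9 <= sleep_h <= 11   # 9-11 h for 12-13-year-olds
    elif 14 <= age <= 17:
        sleep_ok = 8 <= sleep_h <= 10   # 8-10 h for 14-17-year-olds
    else:
        raise ValueError("sketch covers ages 12-17 only")
    return {
        "mvpa": mvpa_min >= 60,         # at least 60 min MVPA per day
        "screen": screen_min <= 120,    # <= 2 h recreational screen time
        "sleep": sleep_ok,
    }

adherence = meets_guidelines(age=13, mvpa_min=45, screen_min=180, sleep_h=9.5)
# {'mvpa': False, 'screen': False, 'sleep': True}
```

Counting how many of the three behaviours are met (0-3) gives exactly the stratification ("meet at least two of the three guidelines" vs "meet none") used in the studies cited in the text.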
In studies involving children and young people with a variety of conditions (e.g., Autistic Spectrum Disorder (ASD), Attention-Deficit/Hyperactivity Disorder (ADHD), disabilities, etc.), similar findings have been reported, identifying low adherence to the 24-hr MBs. For example, Totsika et al. (2022) identified that people with intellectual disabilities tended to spend large volumes of time in SB and tended not to engage in sufficient PA (Totsika et al., 2022). Li and colleagues (2022), in a seven-country observational study, found that only 2% of children and young people with ASD adhered to all 24-hr MB guidelines, with 22% meeting none of the guidelines (Li et al., 2022). Similarly, a recent systematic review and meta-analysis by Huang et al. (2024) identified that, compared with participants with disabilities who met at least two of the three 24-hr MB guidelines, individuals with disabilities who met none had higher anxiety, depression, and other mental health symptoms (Huang et al., 2024). In addition, other studies have shown that individuals with ASD who meet all three 24-hr MBs have a significantly higher quality of life than those meeting none (Kong et al., 2023; Li et al., 2022). However, there is a specific challenge in determining population definitions. The current literature uses inconsistent nomenclature, including intellectual disabilities, learning disabilities, learning difficulties, neurodevelopmental disorders, physical disabilities, and cognitive impairments. These differing population characteristics make comparing studies more problematic. Whilst the health-related outcomes associated with meeting all 24-hr MBs are well documented (Kracht et al., 2024), less is known about young people with learning disabilities. For example, in the review by Huang et al. (2024), only one study included children and young people with learning disabilities.
Previous studies have tended to focus on disabilities in general and other conditions such as ASD (Healy et al., 2019), ADHD (Cortese et al., 2009), Cerebral Palsy (Abid et al., 2023) etc. Yet these conditions do not always share comorbidity with learning disabilities. This has led to young people with learning disabilities being underserved within the 24-hr MBs research field.
With regards to promoting 24-hr MB adherence, some studies suggest that intervention programs should emphasize engagement in PA, reduction of SB, and increased sleep among young people with learning disabilities (Ginis et al., 2021; Taylor et al., 2023). Young people with a range of disabilities and conditions usually face barriers that prevent them from engaging in PA, such as interpersonal and environmental factors (Abid et al., 2023; Lobenius-Palmér et al., 2018). In addition, it is well known that sleep disorders are common in adolescents with learning disabilities, which in turn may exacerbate behavioural problems and hinder overall care (Bruni et al., 2025; Stores, 2014). Increased difficulties are also specifically linked to the nature of the learning disabilities that the young person faces, whether physical, sensory, intellectual, or neurodevelopmental (Hao et al., 2024). Disability does not encompass only an individual’s impairment, but also barriers and aspects of discrimination that they may experience in society as a result (Grue, 2016). Young people with learning disabilities might therefore demonstrate lower adherence to the 24-hr MBs and be at greater health risk (Healy & Garcia, 2019; Lu & Zhao, 2023; Taylor et al., 2023).
Unfortunately, within the UK there are no quantitative or qualitative data on young people with learning disabilities and their adherence to the 24-hr MB guidelines. Previous studies have tended to focus more on specific conditions (i.e., ASD, ADHD, etc.), which may or may not include learning disabilities. Therefore, involving young people with learning disabilities in the present study could help inform the development of interventions to promote adherence to the guidelines in those with learning disabilities. However, there is limited intervention research to date and a lack of interventions available specifically for the learning disabilities population. The UK MRC Guidance on the development of complex interventions recommends that a number of research questions be addressed as part of the early stage of intervention development (Anderson, 2008; Levack et al., 2024). This study aims to fill these gaps by combining quantitative and qualitative methods. The insights gathered will hopefully provide a better understanding of the barriers faced by young people in trying to adhere to the 24-hr MBs. This increased understanding may lead to targeted interventions to improve adherence to the 24-hr MBs in young people with learning disabilities.
For the present study, learning disabilities are defined by three specific criteria: (1) global intellectual impairment (intelligence quotient less than 70), (2) the need for support and help to fulfil ordinary daily activities, and (3) onset before 18 years of age. Learning disabilities may have a recognised cause (e.g., Down Syndrome, Williams Syndrome), but often the cause is not known. Young people with learning disabilities often have other physical and mental health conditions, disabilities, and/or impairments in addition to their learning disabilities (Gandra et al., 2025). Objective: This study aims to provide pilot data to capture the 24-hr MBs of young people with learning disabilities in Scotland and their perceptions thereof. Methods: Design and recruitment
This study will use a mixed-methods approach collecting quantitative and qualitative data. Families will be included if they have a child currently attending an Additional Support Needs (ASN) secondary school in the Greater Glasgow and Lanarkshire areas. We will recruit 60 young people (aged 11-17 years) with learning disabilities and obtain consent from them and their parents. In the first instance, four ASN schools will be approached and informed about the research project objectives and relevance. Participant Information Sheets (PIS) and Parent Information Sheets (PaIS) will be provided for each young person and their family, along with consent forms. The research associate will attend each ASN school and provide young people with an opportunity to ask questions. The researcher will also attend a parents' night/event at the ASN school to allow parents/guardians an opportunity to ask questions about the nature and purpose of the study. This will be arranged shortly after the consent forms, PIS, and PaIS have been given out to allow participants a chance to read through the information ahead of the parents' night/event. To ensure we are meeting the needs of the young person, we will consult the schools on which young people should be approached for the study.
Ethics Approval
All procedures have been approved by the University of Strathclyde Ethics Committee (UEC25/10), and informed consent will be obtained from all participants.
Measures
Actigraph (GT3x) accelerometers will be used to measure physical activity, sedentary behaviour, and sleep. The Actigraph is a widely used activity monitor which has shown good reliability and validity in this age group and population (McGarty et al., 2016; Rodrigues et al., 2025; Xu & Wang, 2023). Young people will be asked to wear the Actigraph GT3x for a period of 7 consecutive days, including weekends. The device will be worn on the right hip, set to a sampling frequency of 100 Hz, with data recorded in 5-second epochs. Parents/guardians will be asked to keep an activity diary outlining when their child has had to remove the device (i.e., bathing, showering, swimming). Parents will also be asked to record their child’s recreational screen time and sleep-wake times daily. When the device has been returned, the data will be analysed and a unique activity profile chart (Figure 1) will be sent out to each family providing an illustration of the 24-hr MBs (time spent active, time spent sleeping, time spent sedentary and on screens). This chart has been used successfully in a previous study (Dalziell & Janssen, 2023). Adjustments to the chart will be made relative to the age of the participant (i.e., taking into consideration the different recommendations for sleep duration at certain ages), and each participant will receive their chart prior to the interview taking place.
Figure 1: Example of activity Profile Chart (Dalziell & Janssen, 2023)
Qualitative data will be collected from 10-15 participants in the form of a semi-structured interview. The interviews will take place shortly after providing the participants with their unique activity profile. The unique activity profile will play a central role in discussing the 24-hr MBs, aiming to engage the young person during the interview. The interviews will take place within a quiet room in the school that the participant attends. These interviews will allow further insight into 24-hr MBs and enable the researchers to examine the psychological, social, environmental, and wider contextual factors that influence the 24-hr MBs of young people with learning disabilities, including salient barriers and facilitators to engaging in 24-hr MBs. The interviews will draw on a range of psychological and behaviour change theories including COM-B (McDermott et al., 2022), TDF (Caltabiano et al., 2024), the Behaviour Change Wheel (Maenhout et al., 2024; Michie et al., 2011), and Self-Determination Theory (Lindsay & Varahra, 2022). Data from the interviews will also explore methodological issues for assessing 24-hr MBs, including self-report/parent-report reliability and validity. Participants will be selected for interview based on their ability to have a conversation and answer simple questions with the research associate. The interview will cover areas regarding the participants' understanding of the 24-hr MBs, as well as their views with regard to the barriers and facilitators they experience in trying to adhere to the guidelines. Participants may be invited to identify a Support for Learning Worker (SLW) who supports them in school to accompany them during the interview. The interview data will be digitally recorded and analysed systematically in line with the principles of thematic analysis (Braun & Clarke, 2006).
Study Procedure
Following the consent process, and prior to the young person wearing the Actigraph GT3x, parents/guardians will be asked to complete a questionnaire to provide screen time usage, perceived levels of physical activity, and typical sleep. The researcher will demonstrate how the Actigraph GT3x should be worn, along with an explanation of how to fill in the self-report diary that parents/guardians will be asked to complete, including sleep-wake times, screen time usage, and when the Actigraph was removed (i.e., for bathing, swimming). Participants providing consent will then be provided with the Actigraph GT3x. Young people will be asked to wear the Actigraph GT3x for seven consecutive days for the full 24 hours each day, in line with standardised protocols used in previous studies (McGarty et al., 2016; Rodrigues et al., 2025). During the 7 days, participants will not receive any information about the 24-hr MBs but will be encouraged to maintain their usual daily activities. On completion of the seven consecutive days, the activity monitor and self-report activity diary will be returned to the school for the research associate to collect. Upon completion of the tasks within this study, participants will be thanked and sent a £20.00 gift voucher as a token of appreciation. Results: The quantitative data collected from the Actigraph GT3x will be presented through descriptive analysis (Evenson & Wen, 2015). The Actigraph provides continuous counts for light, moderate, and vigorous activity movements, providing a measure at each activity intensity. The advantage of using Actigraph accelerometers is that we can quantify time spent in PA of different intensities. The output in this study will therefore relate to three standard intensity thresholds: light, moderate, and vigorous activity.
Total time spent in sedentary (ROC-AUC=0.80), light (ROC-AUC=0.66), moderate and vigorous (ROC-AUC=0.70) intensity PA will be calculated using the Actilife Software V.7.0 along with the Evenson cut-points (Evenson & Wen, 2015). The Evenson cut-points have been selected as they are known to provide excellent discrimination across different intensities of PA (Evenson et al., 2008). To estimate sleep outcomes, algorithms will be used to determine wake and sleep times based on the assumption that recorded movement is indicative of wakefulness and therefore the absence of movement is indicative of sleep. Total sleep time (TST) will be defined as the number of minutes from the onset of sleep to the offset of sleep, subtracting the number of minutes awake. This metric has been used successfully in previous studies (Meredith-Jones et al., 2024; Smith et al., 2020). Although a lack of movement can often be recorded during times of wakefulness, and as such mistaken as sleep, the data obtained from the Actigraph GT3x will be cross-checked against the parent sleep diaries to account for this.
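The cut-point classification and TST calculation above can be sketched as follows. The thresholds shown are the commonly cited Evenson cut-points expressed in counts per minute (sedentary ≤100, light 101-2295, moderate 2296-4011, vigorous ≥4012 cpm); treat them as an assumption to verify against Evenson et al. (2008) and the ActiLife configuration, since the study scores 5-second epochs.

```python
# Illustrative epoch classification with assumed Evenson cut-points (cpm);
# 5-s epoch counts are scaled to counts per minute before classification.

def classify_epoch(counts_5s: int) -> str:
    cpm = counts_5s * 12  # 12 five-second epochs per minute
    if cpm <= 100:
        return "sedentary"
    if cpm <= 2295:
        return "light"
    if cpm <= 4011:
        return "moderate"
    return "vigorous"

def total_sleep_time(onset_to_offset_min: int, minutes_awake: int) -> int:
    """TST = minutes from sleep onset to offset minus minutes scored awake."""
    return onset_to_offset_min - minutes_awake

labels = [classify_epoch(c) for c in (5, 50, 250, 400)]
# ['sedentary', 'light', 'moderate', 'vigorous']
```

Summing epoch durations per label over each valid wear day yields the daily minutes at each intensity that feed the activity profile chart and the guideline-adherence comparison.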
Qualitative interview data will be analysed thematically (Braun & Clarke, 2006). Inductive analysis will provide an opportunity to determine how the young people feel about the 24-hr MBs and may provide key insights into the public messaging around the importance of the 24-hr MBs with the learning disabilities population. Deductive analysis will allow us to gain a deeper understanding of the barriers and facilitators experienced by the young people when trying to adhere to the guidelines. This would allow direct links to be made to behaviour change theories and would provide key foundations for future interventions to be designed with the purpose of better supporting young people with learning disabilities to adhere to the 24-hr MBs. Conclusions: It is anticipated that this study will help guide future studies and help improve the protocols that are to be adopted with this specialist population. Fundamentally the aim is to gather data to inform the design, implementation and analysis of interventions that support young people with learning disabilities to adhere to the 24-hr MBs. Clinical Trial: Not applicable
Background: Acute respiratory infections continue to pose a transmission risk in outpatient care, making early identification and separation of potentially infectious patients essential. Digital, contactless screening tools may aid with the separation of potentially infectious patients; however, effectiveness can depend on user acceptance and engagement. Objective: To assess user acceptance among patients using a video-based digital Screening and Registration Terminal (SRT) to improve infection prevention at an outpatient clinic in Berlin, Germany. Methods: A cross-sectional survey was conducted among patients with acute care needs using the SRT between October 4 and November 22, 2023. We describe summarized user acceptance factors, including ease of use, intention to use, perceived usefulness, attitude, privacy, audio-visual communication, and technical sensors, overall and by sex, age, and education. Results: Of the 56 participants, 55% (29/56) were 20-39 years old, and 63% (35/56) had received higher education. Among respondents with available answers, 55% (30/55) reported that the SRT was easy to use and 40% (22/55) found it useful. Intention to use was expressed by 56% (31/55) of respondents, and 57% (31/54) reported a positive attitude towards technology. Privacy concerns were expressed by 24% (13/54) of participants, while 24% (13/54) did not indicate any concerns, and 7% (4/54) reported difficulties with audio-visual communication. Conclusions: More than half of the patients using the SRT reported positively on most user acceptance factors. This indicates that the SRT was generally well accepted, particularly with regard to ease of use and perceived usefulness. Privacy concerns and audio-visual communication issues were reported, which underlines the importance of integrating user acceptance research when introducing new tools, so that barriers to user acceptance can be addressed early on.
Background: The escalating medical burden associated with stroke poses a substantial challenge, characterized by a skewed distribution wherein a minority of high-cost patients accounts for a disproportionate share of healthcare expenditures. Consequently, the timely and accurate identification of this cohort is paramount for optimizing the quality of care and mitigating unnecessary resource utilization. Objective: This study aims to construct a comorbidity network for stroke patients using hospital discharge data, extract topological features characterizing disease interactions, and integrate these features with machine learning algorithms to establish a robust and clinically interpretable framework for the accurate identification of high-cost stroke patients. Methods: We conducted a retrospective study using hospital discharge data from 10,301 stroke inpatients at a tertiary hospital in Northeast China between 2021 and 2023. Data from the 2021–2022 period were used to construct two specific networks: the Phenotypic Comorbidity Network (PCN) and the Distance-based Disease Cost Network (DDCN). From these networks, topological features were extracted to capture latent associations between comorbidities and high costs. The 2023 dataset was subsequently partitioned into training and testing sets to develop five machine learning models, including Logistic Regression (LR), Support Vector Machine (SVM), Neural Network (NN), Random Forest (RF), and XGBoost, for the identification of high-cost stroke inpatients. Furthermore, the SHAP method was applied to elucidate both the global and local contributions of the model features. Results: The integration of network features significantly improved model performance, with XGBoost exhibiting superior predictive capability (AUC = 0.911). Global feature importance analysis indicated that network features accounted for the majority of the total contribution (52.8%). 
Specifically, Shortest Distance (SD), length of stay, Normalized High-Cost Propensity (NHCP), age, and insurance type were identified as the top five predictors of high-cost risk. Moreover, SHAP interaction analysis revealed the phasic heterogeneity inherent in patient resource utilization. Conclusions: Our comprehensive framework, integrating comorbidity network analysis with machine learning algorithms, significantly enhances the identification of high-cost stroke inpatients. These findings highlight the framework's potential utility in optimizing healthcare resource allocation and enabling proactive cost containment strategies. Clinical Trial: Not applicable
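The Shortest Distance (SD) predictor above can be illustrated with a toy example. Everything below is invented for illustration (the graph, the diagnoses, and the choice of a high-cost "anchor" node); the study derives its actual SD feature from networks built on 2021–2022 discharge data.

```python
# Illustrative sketch (all names hypothetical): deriving a "Shortest Distance"
# style network feature from a toy comorbidity network, in the spirit of the
# paper's network-derived SD predictor. The edge set and the high-cost anchor
# node are invented; the real networks come from hospital discharge data.
from collections import deque

# Undirected comorbidity graph: nodes are diagnoses, edges mean the two
# conditions co-occur more often than expected by chance.
GRAPH = {
    "stroke": {"hypertension", "diabetes"},
    "hypertension": {"stroke", "renal_failure"},
    "diabetes": {"stroke", "renal_failure"},
    "renal_failure": {"hypertension", "diabetes", "sepsis"},
    "sepsis": {"renal_failure"},
}

def shortest_distance(graph, source, target):
    """Breadth-first search: hop count from source to target, or None if unreachable."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

def sd_feature(graph, diagnoses, anchor="sepsis"):
    """Patient-level feature: minimum network distance from any admission
    diagnosis to a high-cost anchor condition (sepsis is a stand-in here)."""
    dists = [shortest_distance(graph, d, anchor) for d in diagnoses]
    dists = [d for d in dists if d is not None]
    return min(dists) if dists else None

print(sd_feature(GRAPH, ["stroke", "diabetes"]))  # 2
```

Features of this kind would then be concatenated with clinical variables (length of stay, age, insurance type) as model inputs for XGBoost and the other learners.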
Background: Retrieval-augmented generation (RAG) systems increasingly support clinical decision-making by grounding large language model outputs in verifiable evidence. The retrieval component is foundational: if the correct document is not retrieved, downstream generation cannot recover it. Despite this, embedding model selection for clinical RAG remains guided by general-domain benchmarks with limited clinical coverage. Given the heterogeneity of clinical documentation across institutions, specialties, and electronic health record systems, it is unclear whether general-domain model rankings generalize to clinical retrieval tasks. Objective: This study evaluated whether clinical context variables—corpus type (encompassing differences in document length, medical specialty, and structural characteristics) and query format—have effects on retrieval performance comparable to or exceeding those of embedding model choice. Methods: Ten embedding models were benchmarked against BM25 on three clinical corpora (MTSamples medical transcriptions, n=500; PMC-Patients case reports, n=500; Mistral-7B-generated synthetic clinical notes, n=500). Twelve embedding configurations were evaluated across 3 corpora × 2 query formats (keyword vs natural language) × 4 chunking strategies, yielding 294 experimental conditions. Primary metrics included MRR@10, P@1, Recall@10/20/50/100, and NDCG@10, with bootstrap confidence intervals. Relative factor contributions were quantified using factorial ANOVA with η² effect sizes, including all two-way interactions. Results: In a factorial ANOVA across 288 balanced embedding conditions, embedding model choice explained 40.8% of variance in MRR@10 (η²=0.408), corpus type 24.6%, and query format 19.2%. Chunking strategy explained minimal variance (η²=0.002). The model × query format interaction (η²=0.029, p<.001) indicated differential query sensitivity across models. 
A model × corpus interaction (η²=0.040, p<.001) indicated that model rankings shifted meaningfully across corpora. Combined context variables (corpus + query format + context interactions) explained 49.0% of total variance, compared with 47.6% for model-related effects. Model rankings were moderately unstable under keyword queries (Kendall τ=0.59, 95% CI [0.21, 0.89]) but highly stable under natural language queries (τ=0.82–0.87). BM25 achieved near-perfect retrieval on PMC-Patients in this known-item setting (MRR@10=0.999). Domain-specific models (BioBERT, ClinicalBERT) performed worse than general-purpose embeddings despite biomedical pretraining, with mean pairwise cosine similarity exceeding 0.90, indicating that all embeddings clustered in a narrow cone. A validation experiment using reduced-lexical-dependence queries—generated from GPT-4o-extracted metadata rather than document text—supported rank stability across query derivations (Kendall τ=0.59–0.90, mean 0.76, all p<.005) and showed that BM25 remained strong on structured case reports (MRR@10=0.980). Conclusions: Clinical context variables explained as much variance in retrieval performance as embedding model choice, and model × corpus interactions showed that rankings are not portable across documentation types. Validation with reduced-lexical-dependence queries supported rank stability across query derivations. These results argue against reliance on general-domain leaderboards for clinical RAG deployment and support mandatory local validation as a methodological requirement.
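Two quantities the evaluation above relies on, MRR@10 and the η² effect size, can be sketched minimally as follows. The paper uses a full factorial ANOVA with interactions; the one-way η² below only shows the SS_between / SS_total definition, and the data are invented.

```python
# Minimal sketch of two quantities in the evaluation: MRR@10 over a query set,
# and eta-squared for a single factor. The study fits a factorial ANOVA with
# two-way interactions; this one-way version just illustrates the definition.

def mrr_at_10(ranks):
    """ranks: 1-based rank of the relevant document per query, or None if it
    was not retrieved. Reciprocal rank is 0 beyond the top 10."""
    return sum(1.0 / r for r in ranks if r is not None and r <= 10) / len(ranks)

def eta_squared(groups):
    """groups: {factor level: [scores]} -> SS_between / SS_total."""
    scores = [x for xs in groups.values() for x in xs]
    grand = sum(scores) / len(scores)
    ss_total = sum((x - grand) ** 2 for x in scores)
    ss_between = sum(
        len(xs) * ((sum(xs) / len(xs)) - grand) ** 2 for xs in groups.values()
    )
    return ss_between / ss_total

print(round(mrr_at_10([1, 2, None, 5]), 3))  # 0.425
print(eta_squared({"keyword": [0.5, 0.6], "natural": [0.8, 0.9]}))
```

An η² of 0.408 for model choice, as reported, means that factor alone accounts for about 41% of the variance in MRR@10 across the balanced conditions.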
Background: Health information technology (HIT), while designed to improve practice efficiency and patient care, can contribute to physician burnout and impact health care delivery. Objective: This study examines the contribution of specific HIT functions to alleviating and predicting physician burnout within the differing health landscapes of Ontario (ON) and Nova Scotia (NS), with particular attention to administrative burdens, interoperability and system integration, and clinician perceptions of HIT. Methods: We designed a mixed methods study and developed a questionnaire to identify HIT characteristics that contribute to, alleviate, and predict clinician burnout in ON and NS. The survey was distributed from February to April 2024. Subgroup differences in HIT-related burnout were analysed, and qualitative coding of the open-text questions generated critical insights. Results: Despite differences in the samples, the HIT uses, and the care models, common experiences were apparent. “Managing communications related to patient care” and “inputting data into your EMR” were among both samples’ top three administrative burdens. While “logging in and out of technology platforms” ranked higher in NS, the related theme of integration and interoperability was prominent in both samples. Perceptions of HIT were associated with self-reported burnout score. Conclusions: While physicians see the advantages of HIT in patient care, overwhelming documentation burdens and disjointedness across data platforms contribute to their burnout. Greater ongoing involvement of end users in design, implementation, and use, along with improved standardization and interoperability, would reduce these burdens while maintaining the benefits of digital health systems.
Background: In the United States, many individuals lack adequate access to healthcare services due to a host of economic, logistical, and social barriers. Telehealth technologies and mobile health clinics present the opportunity to close the “last mile” between patients and healthcare services. Objective: Our multidisciplinary team from healthcare and academia wanted to design a mobile health clinic with potential telehealth services, along with the supporting infrastructure as a first step towards developing such a program for our region. Methods: Our multidisciplinary team hosted a co-design session to collaboratively design and mock-up mobile health clinic services aimed at serving the needs of our community, with an emphasis on vulnerable populations within our region. Results: This session yielded insights into the necessity for flexible space, equipment, and staff, and how “high-tech” tools, like drones and robots, along with a fleet of small, medium, and large mobile health clinics, could be maximally positioned to traverse “the last mile” and provide equitable healthcare to our community. Conclusions: The use of mobile clinics to address last-mile challenges could have a transformative impact on community health, and co-design is a valuable tool to elucidate pragmatic opportunities to target first, and can aid in developing a broader roadmap to scale up strategically and sustainably.
Background: Chemotherapy-induced alopecia is among the most psychologically distressing adverse effects of systemic cancer therapy. Although scalp cooling is increasingly used to mitigate hair loss, it is still largely perceived as a cosmetic intervention. Its broader psychological relevance and the biological basis of treatment success, particularly the preservation of follicular integrity under ongoing cytotoxic exposure, remain insufficiently explored. Objective: This study aimed to reconceptualize scalp cooling beyond visible hair preservation by examining its psychological impact in patients receiving highly alopecia-inducing chemotherapy, while integrating quantitative objective hair preservation metrics with structural and ultrastructural analyses of hair follicle damage to identify avenues for improving follicular integrity and scalp cooling efficiency. Methods: Eighty-two patients undergoing highly alopecia-inducing chemotherapy, consisting of a sequential anthracycline-taxane regimen (four cycles of epirubicin and cyclophosphamide followed by 12 weekly paclitaxel applications), received standardized scalp cooling. Objective hair preservation was quantified using the Hair Mass Index (HMI) as a standardized and reproducible measure of hair retention. Structural and ultrastructural follicular integrity was assessed using light microscopy as well as scanning and transmission electron microscopy. Objective hair preservation metrics were analyzed in relation to patient-reported quality-of-life outcomes (EORTC-based measures), subjective treatment burden, and cognitive appraisal of the scalp cooling experience. Multivariable regression models were applied to identify determinants of post-therapeutic quality of life. Results: Visible chemotherapy-induced alopecia was successfully prevented in more than half of the treated patients. Scalp cooling resulted in substantial objective hair preservation as quantified by the HMI; however, HMI values showed only a limited association with post-therapeutic quality-of-life outcomes. In contrast, cognitive appraisal of scalp cooling emerged as a central determinant of post-therapeutic quality of life, independent of the degree of objective hair retention. Structural and ultrastructural analyses demonstrated that preservation of follicular integrity was closely associated with successful macroscopic hair retention under ongoing cytotoxic exposure, supporting a biological basis for the clinical effectiveness of scalp cooling. Conclusions: The clinical relevance of scalp cooling extends beyond objective and visible hair preservation and appears to reside predominantly in its psychological impact on patients undergoing highly alopecia-inducing chemotherapy. Importantly, the identification of structural and ultrastructural markers of follicular vulnerability provides a mechanistic foundation for the future optimization of scalp cooling approaches and for the development of adjunct follicle-directed protective strategies to enhance follicular integrity and support patient well-being during cytotoxic therapy.
An emerging systems-engineering framework, the Q‑OSI (Quality Open Systems Interoperability) Model reconceptualizes HEDIS and UDS “gaps in care” as layered failures across a quality performance stack rather than isolated clinical or documentation problems. Drawing an analogy to the OSI model in network communication, Q‑OSI defines seven interdependent layers—Compliance & Reporting, Measure Logic, Structured Data, Workflow Execution, Clinical Decision, Care Coordination, and Patient Activation—through which a quality “signal” must successfully transmit for a measure to close. In current practice, missed HEDIS and UDS targets are often attributed globally to “clinical performance” or “poor documentation,” obscuring where in the end‑to‑end pipeline failures actually occur and leading to diffuse, non-specific interventions. The Q‑OSI Model instead asserts that most gaps are interoperability issues between technical, workflow, and behavioral layers: for example, an A1C result documented as free text (Structured Data failure), a mammogram order never scheduled (Workflow Execution failure), or a patient never contacted for outreach (Care Coordination failure), even when the underlying clinical decision is appropriate. By providing a simple, memorable, seven-layer map, the framework enables quality and informatics teams to classify defects by layer, align interventions more precisely (eg, templates and coding at Layer 3, standing orders at Layer 4, SMS automation at Layer 6), and monitor whether remediation efforts are addressing the true bottleneck. For public health informatics, Q‑OSI offers a practical bridge between population health measurement, data standards, clinical operations, and patient-facing engagement, positioning quality improvement as an engineering discipline grounded in layered interoperability rather than a reactive cycle of measure chasing. 
This Viewpoint introduces the Q‑OSI Model, illustrates its use with common HEDIS scenarios, and outlines how it could inform maturity models, dashboard design, and implementation research in settings such as Federally Qualified Health Centers and safety-net systems.
Background: This randomized feasibility study addresses the safety and preliminary efficacy of a jump-based training program in older adults, a population in which high-impact exercises have historically been underutilized. Our results demonstrate that jump-based training is feasible and safe for older women, with high adherence and no adverse events. Furthermore, the intervention showed a clinically relevant effect size (Cohen's d = 0.60) in improving functional mobility (Timed Up and Go Test, TUG), a strong predictor of fall risk. We believe these results are of interest to both the academic community and practitioners who prescribe exercise for this population, especially as they provide a promising basis for the inclusion of more specific and powerful strength exercises in geriatric rehabilitation programs. Objective: To evaluate the feasibility, safety, and preliminary effects of a 5-week jump-based training program, compared to a traditional multicomponent training program, on functional mobility and lower limb power in older adults. Methods: Randomized feasibility trial with an unbalanced design. Setting: community-dwelling older adults. Participants: forty-four (N=44) older adults (≥60 years; 43 women) were randomized into an experimental group (EG; n=35) and a control group (CG; n=9). Interventions: the EG performed a progressive jump-based training program (3x/week); the CG engaged in traditional multicomponent training (strength, endurance, balance). Main outcome measures: feasibility (adherence and safety) and preliminary efficacy in functional mobility (Timed Up and Go, TUG), gait speed (4-Meter Walk, V4M), and lower limb power (Vertical Jump, VJ). Results: The intervention proved feasible, with high adherence and no adverse events. A Group × Time interaction for TUG approached significance (p = 0.137) with a large effect size (Cohen's d = 0.60) favoring the EG. Significant main effects of Time were found for TUG (p = 0.011) and V4M (p = 0.002). Conclusions: This study demonstrates that jump-based training is a feasible and safe modality for older women. Preliminary data suggest clinically relevant improvements in functional mobility, providing a basis for future large-scale randomized clinical trials. Clinical Trial: Brazilian Registry of Clinical Trials (ReBEC), identification number RBR-3vqhv5d.
Background: Although social media is often viewed by residents and could be used to reinforce teaching points, there is little data on methods that improve engagement in learning medical topics through this medium. Objective: We observed how the timing of posted questions, answering questions correctly, and receiving supportive comments affected the engagement of residents learning point-of-care ultrasound on social media. Methods: Of 60 medical residents, 35 followed an Instagram account that posted ultrasound video clips with questions during the academic year. Engagement, E, was defined as the percentage of questions answered out of the total number of clips viewed, for each post and each resident. E was tested for an association with (1) weekend vs. weekday posts, (2) answering questions correctly vs. incorrectly, and (3) supportive responses from faculty vs. no feedback. Results: Across 16 posts, 120 questions were answered from 428 clips viewed by 32 residents, for an overall E of 28% [range across posts: 15-59%] and a median (IQR) E of 19% (0-39%) across residents, with 71% (n=25) engaging on at least one post. E was higher during weekdays vs. weekends, 30% vs. 21% (p=0.007), and correlated with answering correctly vs. incorrectly (r=0.6, p<0.001). A supportive comment resulted in a lower percentage of residents answering the next post, compared to no feedback (30% vs. 71%, p=0.02). Conclusions: Resident engagement on social media was higher with having questions answered correctly but, surprisingly, was lower for weekend posts and immediately after receipt of a supportive comment.
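The engagement metric as defined in the abstract can be sketched as follows. The per-resident aggregation shown (pooling counts over posts) is one plausible reading of the definition; the study may instead average per-post values.

```python
# Hedged sketch of the engagement metric: E = questions answered / clips
# viewed, expressed as a percentage, per post or per resident. The example
# counts below are invented.

def engagement(answered: int, viewed: int) -> float:
    """E as a percentage; treated as 0 when nothing was viewed."""
    return 100.0 * answered / viewed if viewed else 0.0

# One resident's activity across three posts: (answered, viewed)
activity = [(1, 4), (0, 3), (2, 5)]
answered = sum(a for a, _ in activity)
viewed = sum(v for _, v in activity)
print(round(engagement(answered, viewed), 1))  # 25.0
```

With this pooled reading, the reported overall E of 28% matches 120 answers over 428 clips viewed.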
Chronic diseases are the leading cause of morbidity, mortality, and health care spending in the United States, yet adherence to prescribed medications and engagement in stress-reduction behaviors and low-level exercise remain persistently low. According to the Centers for Disease Control and Prevention, less than 25% of adults met the 2018 Physical Activity Guidelines for Americans [1]. Despite advances in pharmacologic therapies, only approximately 50% of adults with chronic conditions take medications as prescribed [2]. Chronic psychological stress further exacerbates disease progression and poor outcomes [3]. In addition, higher levels of regular physical activity are associated with lower rates of many chronic illnesses, such as obesity, diabetes, hypertension and cardiovascular disease [4]. Mobile health (mHealth) applications offer a scalable approach to supporting patient self-management, but sustained engagement and adherence remain key challenges. Behavioral economics provides a useful framework for addressing these challenges by leveraging principles such as loss aversion, present bias, reference points, and the strategic use of incentives [5].
This viewpoint synthesizes evidence across behavioral economics, medication adherence, mindfulness practice, and the importance of daily physical activity to inform the design of integrated mHealth applications. Incentive-based interventions, when structured appropriately, can meaningfully improve medication adherence and sustain behavior change beyond active intervention periods [6-7]. Randomized controlled trials of mindfulness-based mHealth interventions demonstrate meaningful improvements in stress, anxiety, quality of life, and related health outcomes, with evidence of dose–response effects tied to consistent practice [8-9]. Gamified and incentive-based approaches further enhance engagement in ongoing behaviors such as daily physical activity and mindfulness practice [10-11]. An mHealth application that leverages these principles to incentivize and monitor medication adherence, physical activity, and mindfulness practice has the potential to significantly improve patient health.
Background: Psychological skills training (PST) is a core component of sport psychology, supporting athletes’ performance, well-being, and capacity to manage competitive stress. However, access to high-quality, practitioner-led PST is often constrained by time, cost, availability of trained professionals, and stigma surrounding help-seeking. In response, digital interventions such as mobile applications, biofeedback systems, and immersive technologies have been increasingly adopted to deliver PST in more scalable and flexible formats. Despite rapid growth in this area, evidence regarding the promises and challenges of digital PST remains fragmented across modalities and outcome domains. Objective: This systematic review synthesizes empirical evidence on the use of digital interventions for delivering PST in athlete populations. Specifically, it maps the digital modalities employed, the psychological skills and frameworks targeted, the populations and sporting contexts studied, and the promises and challenges reported in relation to effectiveness, feasibility, and implementation. Methods: We conducted a PRISMA-compliant systematic review of English-language studies published between 2000 and 2025. Three databases (Scopus, Web of Science Core Collection, and ProQuest Dissertations and Theses) were systematically searched, and additional records were identified through a manual search. Eligible studies examined digital or technology-based interventions deployed to support PST outcomes in athlete populations and reported empirical quantitative, qualitative, or mixed-methods findings. Two reviewers independently screened records and extracted data, resolving discrepancies through discussion. Results: Thirty-six studies met the inclusion criteria, encompassing virtual reality-based interventions, mobile applications, and biofeedback or neurofeedback systems. 
Across modalities, digital PST interventions targeted a range of psychological skills, including stress and anxiety regulation, attentional control, imagery ability, self-talk, and emotional regulation. Reported promises included improvements in affective, cognitive, physiological, and performance-related outcomes, enhanced accessibility, flexibility, and engagement of PST delivery, and potential for skill transfer beyond sport. However, recurring challenges were also identified, such as limited personalization, variable user engagement, technical and cost barriers, and inconsistent or weaker efficacy relative to traditional PST methods. Conclusions: Digital interventions offer a meaningful extension to traditional PST by widening access, enhancing immersion, and providing real-time feedback that supports psychological skill development. However, their effectiveness appears constrained by methodological variability, limited personalization, and implementation challenges. Future research should prioritize rigorous longitudinal designs, clearer alignment with PST theory, and hybrid delivery models in which digital tools complement practitioner expertise, to ensure digital PST enhances rather than dilutes psychological practice.
Digital health is now embedded in routine care through patient portals, teleconsultations, remote monitoring, digital triage, and other hybrid service models. While these changes can improve access and efficiency, they may also create new barriers for older adults who have limited cognitive, sensory, functional, or social capacity to engage with digitally mediated care. Current constructs such as digital literacy, digital exclusion, and conventional frailty only partly explain this problem because they do not fully capture the mismatch between the digital demands of healthcare systems and the real-world capabilities and supports available to patients.
This Viewpoint introduces digital frailty as a clinically relevant, multidimensional state of vulnerability that arises when a person’s intrinsic capacity and available support are insufficient to meet the digital requirements of healthcare. We argue that digital frailty should be understood not as a synonym for age, disability, or low digital confidence, but as a relational and potentially modifiable mismatch between individuals and care environments. Framing the issue in this way shifts attention from blaming patients to designing safer and more equitable systems.
To operationalize this concept, we propose a Digital Health Vulnerability Index as a pragmatic framework for identifying patients at risk of digitally mediated care failure. The framework focuses on four proximal domains of vulnerability, namely access, skills, confidence or trust, and support, and is paired with brief consideration of hearing, vision, and cognition to improve clinical interpretability. Rather than functioning as a static label, the index is intended as a routing mechanism to trigger proportionate responses such as assisted digital support, proxy-enabled access, simplified workflows, and analogue alternatives for safety-critical steps.
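One way to picture the index as a routing mechanism is a simple scoring-and-tiering sketch. The domain names follow the Viewpoint (access, skills, confidence/trust, support), but the 0-2 scoring scale, the thresholds, and the tier labels below are all invented for illustration; the Viewpoint does not prescribe a scoring rule.

```python
# Purely illustrative sketch of how a Digital Health Vulnerability Index
# could route patients to proportionate support tiers. Domain names follow
# the Viewpoint; the 0-2 rating scale, cutoffs, and tiers are hypothetical.

DOMAINS = ("access", "skills", "confidence", "support")

def dhvi_score(ratings: dict) -> int:
    """Sum of per-domain vulnerability ratings (0 = none, 2 = high)."""
    return sum(ratings[d] for d in DOMAINS)

def route(ratings: dict) -> str:
    """Map total vulnerability to a proportionate response tier."""
    score = dhvi_score(ratings)
    if score <= 2:
        return "standard digital pathway"
    if score <= 5:
        return "assisted digital support"
    return "analogue alternative for safety-critical steps"

patient = {"access": 2, "skills": 2, "confidence": 1, "support": 1}
print(route(patient))  # 'analogue alternative for safety-critical steps'
```

The point of the sketch is the shape of the mechanism, a score that routes to a graded response, rather than any particular cutoff.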
We further propose proportionate universalism as the most appropriate implementation principle, so that digital support is universal in reach but calibrated in intensity according to need. This approach has implications beyond individual assessment and extends to pathway design, procurement, governance, reimbursement, and digital inclusion policy. In ageing societies, digital vulnerability should be recognized as a determinant of functional access to care. A digitally inclusive health system therefore requires not only better technology, but also better identification, adaptation, and accountability for the patients most at risk of being left behind.
Background: Alcohol Use Disorder (AUD) affects Punjabi-American communities at disproportionately high rates, yet remains under-researched and under-treated. The "model minority" myth masks significant health disparities among Asian-American subgroups, and aggregated data collection practices obscure the unique cultural, historical, and structural factors shaping AUD in Punjabi-Americans. Cultural stigma, concerns about family honor (izzat), and a lack of culturally competent services create structural barriers to treatment. Although evidence indicates the effectiveness of culturally tailored interventions, no rigorous studies have designed or validated intervention models specifically for Punjabi-Americans. Objective: This paper proposes a community-based, mixed-methods study to assess AUD prevalence and identify barriers to care among Punjabi-Americans in the San Francisco Bay Area. Drawing on Bronfenbrenner’s ecological systems theory and the framework of structural competency, the study aims to generate empirical evidence for designing culturally informed, evidence-based interventions tailored to community needs, and emphasizes the need for further research. Methods: The study will employ a cross-sectional, mixed-methods design guided by Community-Based Participatory Research (CBPR) principles. A bilingual (English/Punjabi) anonymous survey will be administered to 88–100 self-identified Punjabi-American adults (ages 18+) in the Bay Area, using web-based sampling, community-based recruitment, and respondent-driven sampling. The survey instrument includes 18 questions across 5 domains: demographics, alcohol consumption (AUDIT-C), acculturation (SL-ASIA), treatment attitudes, and macrosystem/microsystem factors. Quantitative data will be processed with IBM SPSS and analyzed using descriptive statistics, chi-square tests, and regression analyses. 
Qualitative data from open-ended responses will be analyzed using thematic analysis guided by structural competency and ecological systems theory. Results: As of February 2026, the study is in the design and community engagement phase. The survey instrument has been developed and is undergoing review. Institutional Review Board (IRB) approval will be sought from the University of California, Berkeley. Data collection is anticipated to conclude by May 2026. Conclusions: This study addresses critical gaps in the literature by applying a structural competency framework to AUD in a Punjabi-American context, connecting critical theory to public health, and using the historically successful Amrit Prachar movement as a precedent for community-based interventions. Findings will directly inform the development of a future Culturally Adapted Intervention and contribute to advocacy for disaggregated health data collection for Asian-American subgroups.
Background: People experiencing homelessness (PEH) face high morbidity and unmet health care needs. To address these gaps, healthcare providers across the United States are increasingly adopting “field medicine” models that deliver mobile health services in shelters, homeless encampments, mobile clinics, and other community settings. Despite their expanding use, systematic evaluations of field medicine programs remain limited. Objective: This paper describes a protocol for a mixed methods evaluation of field medicine for PEH across Los Angeles (LA) County, California. Methods: PEH receiving field medicine were recruited into an ongoing longitudinal study of LA County’s unhoused population; a subset of participants in the existing probability-sampled cohort serve as the comparison group. Using monthly online survey data and a quasi-experimental design, we examine who accesses field medicine, how patients use and perceive care, and its impact on health and service engagement. We also conduct participant observation of field medicine teams to document patient–provider interactions and care coordination and carry out semi-structured interviews with providers, patients, and non-patients. Quantitative survey and qualitative findings will be integrated to identify convergence, complementarity, and explanatory insights. Results: Recruitment of PEH receiving field medicine occurred between August 2024 and October 2025. Among 847 individuals referred from field medicine, 749 were eligible and 436 completed the first monthly survey and were enrolled. For the comparison group, 902 of the 1,413 participants ever enrolled in the existing cohort completed a survey during the field medicine recruitment period and met eligibility criteria. Participant observation included 82 field visits (≈500 hours) across diverse service locations and more than 300 patient–clinician interactions.
Semi-structured interviews were conducted with 15 providers, 23 field medicine patients, and 12 non-patients. Conclusions: This study represents one of the first large-scale mixed-methods evaluations of field medicine. With increasing health threats from criminalization, climate-related events, and other socioenvironmental hazards, field medicine may mitigate health risks and improve systems engagement for PEH. Findings will provide rigorous evidence to inform service delivery and policy decisions.
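One simple estimand a quasi-experimental design with a comparison cohort supports is a difference-in-differences contrast. The sketch below uses invented group means purely to illustrate the arithmetic; the protocol does not specify this exact estimator.

```python
# Hypothetical group means (e.g., proportion reporting an unmet care need);
# not study data. Difference-in-differences under the parallel-trends assumption:
baseline = {"field_medicine": 0.62, "comparison": 0.58}
followup = {"field_medicine": 0.45, "comparison": 0.55}

did = (followup["field_medicine"] - baseline["field_medicine"]) - (
    followup["comparison"] - baseline["comparison"]
)
# did is the change attributable to the program, if trends would otherwise be parallel
```

Here the treated group's 17-point drop, net of the comparison group's 3-point drop, yields a 14-point program-attributable reduction.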
Background: Post-infectious cough (PIC) is a distinct subacute condition lasting 3 to 8 weeks, affecting 11% to 25% of patients following a respiratory infection. While recent reviews have addressed acupuncture for chronic cough (>8 weeks), the efficacy and safety of these therapies specifically targeting the transient inflammatory pathophysiology of subacute PIC remain unsynthesized. Current pharmacological interventions often provide limited relief or carry adverse effects. Objective: This protocol aims to evaluate the efficacy and safety of needle-based acupuncture therapies for adults with subacute PIC, compared to conventional medication, sham treatment, or no treatment. Methods: We will search MEDLINE (via PubMed), Embase, CENTRAL, CINAHL, Scopus, AMED, SCI, and five Chinese databases from inception onwards. Randomized controlled trials (RCTs) involving adults (≥18 years) with PIC will be included. To avoid pharmacological confounding, acupoint injection will be excluded. Primary outcomes are the Leicester Cough Questionnaire (LCQ) total score and validated cough severity scales (e.g., Visual Analogue Scale). Two reviewers will independently screen studies, extract data, and assess the risk of bias using the Cochrane RoB 2 tool. A random-effects model will be used for meta-analysis, with results stratified by predefined comparison strata (acupuncture vs sham/placebo, active therapy, or add-on designs). Evidence certainty will be evaluated using the GRADE framework. Results: This protocol was registered in PROSPERO (CRD420251268158).
As of February 2026, preliminary database searches have been piloted. Formal study screening, data extraction, and evidence synthesis are scheduled to commence in April 2026. Conclusions: This systematic review will provide rigorously synthesized evidence exclusively for the subacute PIC population, offering targeted clinical guidance that is currently missing from broader chronic cough assessments. Clinical Trial: PROSPERO CRD420251268158; https://www.crd.york.ac.uk/PROSPERO/view/CRD420251268158
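The protocol's planned random-effects meta-analysis can be sketched with the DerSimonian-Laird estimator, one common choice for the between-study variance (the protocol does not name a specific estimator, and the LCQ mean differences below are hypothetical).

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical LCQ mean differences (acupuncture minus control) and variances:
pooled, se, tau2 = dersimonian_laird([1.8, 2.4, 1.1], [0.30, 0.45, 0.25])
```

The random-effects weights shrink toward equality as tau^2 grows, which is why heterogeneous strata (e.g., sham vs. add-on designs) are pooled separately.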
Background: Fixed orthodontic appliances create new retentive niches that favour dental biofilm accumulation and are associated with enamel demineralization and periodontal inflammation. Beyond these clinical effects, orthodontic treatment may induce shifts in the oral microbial ecosystem (commonly referred to as dysbiosis), potentially influenced by individual host–microbiota interactions. Objective: This study aims to evaluate whether an intensive personalized prevention strategy, added to standard orthodontic care, limits orthodontic treatment–associated microbial dysbiosis, defined as longitudinal quantitative shifts from commensal toward pathogenic microbial complexes, during the first year of fixed orthodontic treatment, compared with standard care alone. Methods: This is a prospective, multicenter, randomized, open-label, parallel-arm interventional study with a 12-month follow-up. Eighty participants aged 12–20 years requiring fixed orthodontic treatment will be randomized in a 1:1 ratio to receive either (i) standard care or (ii) standard care plus personalized prevention. The preventive intervention includes repeated oral health education sessions, supervised toothbrushing, dietary education, plaque disclosure, and chairside prophylaxis using a standardized professional protocol. Study visits are scheduled at appliance placement (baseline) and at 3, 6, 9, and 12 months (±10 days). The primary endpoint is the longitudinal change in quantities of selected bacterial and viral species in saliva and dental biofilm assessed using high-throughput microfluidic qPCR. Secondary endpoints include salivary inflammatory profiling (cytokines/chemokines), clinical indices (plaque index, bleeding on probing, white spot lesions), compliance measures (wear of oral hygiene devices and toothpaste consumption by weight), and participant satisfaction. Analysis will follow a modified intention-to-treat approach, complemented by per-protocol analyses.
Results: The study will include a 6-month enrolment period with baseline data collection, followed by 12 months of participant follow-up. The total study duration is expected to be 24 months. Data analysis and reporting are planned upon completion of follow-up. Conclusions: PREPERMIO is a randomized interventional study evaluating a personalized preventive strategy during fixed orthodontic treatment using biologically driven longitudinal endpoints. By focusing on microbial homeostasis and bacterial community shifts, and integrating bacterial, viral, and host inflammatory markers, it addresses limitations of prior studies centered mainly on clinical or behavioural outcomes and has the potential to inform future personalized prevention strategies in orthodontics. Clinical Trial: NCT06752902
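If the qPCR endpoint is expressed as a relative (rather than absolute) quantity, one standard normalization is the 2^-ΔΔCt fold change; whether PREPERMIO uses this form is an assumption, and the Ct values below are invented.

```python
# Sketch of relative quantification (2^-ddCt). Assumes ~100% amplification
# efficiency and a stable reference target; all Ct values are hypothetical.
def fold_change(ct_target, ct_ref, ct_target_t0, ct_ref_t0):
    ddct = (ct_target - ct_ref) - (ct_target_t0 - ct_ref_t0)
    return 2.0 ** -ddct

# Target Ct dropped by 2 cycles relative to the reference since baseline:
fc = fold_change(24.0, 18.0, 26.0, 18.0)  # 4.0, i.e., a 4-fold increase
```

Each qPCR cycle roughly doubles the amplicon, so a 2-cycle earlier crossing corresponds to a 2² = 4-fold higher starting quantity.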
Background: Attention-Deficit/Hyperactivity Disorder (ADHD) affects approximately 3-5% of adults globally, characterized by inattention, hyperactivity, and impulsivity, causing substantial functional impairment across occupational, academic, and social domains. Associations between lifestyle factors, including physical activity patterns, sleep quality and duration, screen time behaviors, dietary intake patterns, anthropometric characteristics, and substance use, remain significantly underexplored in North African populations and require comprehensive international investigation, given the severe limitations in epidemiological data. Understanding these modifiable factors could inform evidence-based interventions to manage symptoms and improve function. Objectives: To (i) estimate adult ADHD prevalence in Tunisia and internationally stratified by presentation type and demographics, (ii) examine associations between comprehensive lifestyle factors and symptom severity across multiple domains, and (iii) employ machine-learning (ML) algorithms to identify complex non-linear patterns and interaction effects between lifestyle variables and ADHD symptomatology across diverse populations. Methods: This two-phase quantitative cross-sectional study will recruit approximately 5,000 Tunisian adults aged 18-65 years in Phase I, followed by 50,000 international participants across North Africa, the Middle East, Europe, and North America in Phase II. Data collection employs a dual-mode approach: Google Forms for digital administration and paper-based questionnaires for participants with limited internet connectivity, with mode selection determined by availability at the time of distribution.
The assessment battery comprises validated instruments totaling approximately 130-132 items and requiring 28-32 minutes to complete, including the Adult ADHD Self-Report Scale, the International Physical Activity Questionnaire-Short Form, the Pittsburgh Sleep Quality Index, the Smartphone Addiction Scale-Short Version, the Bergen Social Media Addiction Scale, the Screen Time Questionnaire, the Short Food Frequency Questionnaire, and the novel Substance Use Assessment Scale. To accommodate the international sample, all instruments will be offered in English, French, and Arabic, allowing participants to choose their preferred language. Officially validated translations will be used where available. For instruments lacking a validated version, a standardized translation will be employed for this study, with subsequent psychometric validation planned. ML algorithms, including random forests, gradient boosting, and neural networks, represent the primary analytical approach, complemented by multivariable regression for association examination. Expected Outcomes: This protocol provides the first comprehensive adult ADHD prevalence estimates for Tunisia and establishes international baseline cross-cultural data enabling systematic comparisons across geographic regions and healthcare systems. ML identification of complex interaction patterns between lifestyle factors and symptom presentations represents the primary methodological contribution, revealing non-linear relationships and distinct phenotypic subgroups. Findings will inform the development of targeted lifestyle-based interventions addressing modifiable risk factors.
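The rationale for tree-based ML alongside regression is that interaction effects can carry no marginal signal at all. The stdlib-only toy below (synthetic data, hypothetical variable names, not the protocol's pipeline) shows an XOR-style interaction between two binary lifestyle flags that a marginal comparison misses but a depth-2 rule, of the kind tree ensembles learn, captures perfectly.

```python
import random

random.seed(42)
# Synthetic toy data: two hypothetical binary flags ("short sleep",
# "high screen time") whose XOR-style interaction drives "symptom risk".
X = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(2000)]
y = [a ^ b for a, b in X]

def risk_given(flag_index, value):
    """Marginal risk conditional on one flag -- the regression-style view."""
    rows = [t for x, t in zip(X, y) if x[flag_index] == value]
    return sum(rows) / len(rows)

# Each factor alone shows ~no association with the outcome...
marginal_gap = abs(risk_given(0, 1) - risk_given(0, 0))

# ...while a depth-2 rule conditioning on both flags recovers it exactly.
pred = [1 if (a == 1) != (b == 1) else 0 for a, b in X]
accuracy = sum(p == t for p, t in zip(pred, y)) / len(y)
```

With this generative process the marginal gap hovers near zero while the interaction rule classifies every case, which is the pattern motivating random forests and gradient boosting here.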
Background: Rapid urbanization in India is straining the government's capacity to provide basic amenities such as housing, sanitation, electricity, and water. One deeply affected group is menstruators, whose menstrual experiences are shaped by an interplay of deep-rooted cultural norms and emerging socio-political discourse, ranging from stigma to bodily autonomy. In urban slums, referred to as bastis in Telugu and Hindi, this reality is worsened by spatial congestion and limited water, sanitation, and hygiene (WASH) resources. This study is situated in the populous slum colonies of Film Nagar, Hyderabad, where residents navigate precarious living conditions and a scarcity of basic amenities. Despite the surrounding affluence of the media and technology sectors, approximately 80% of the local population resides in these 20 underserved settlements [1]. Objective: In this context, we argue that menstrual experiences are profoundly shaped by an interplay of biological, socio-cultural, political, economic, and environmental factors. Accordingly, our research seeks to understand how these intersecting determinants manifest in everyday life to influence the lived reality of menstruators. Methods: Guided by biopolitics and poststructuralist feminism, we take a critical-ethnographic approach to analyze contextual factors shaping the menstrual experiences of slum-dwelling menstruators. This study uses a multi-method data generation strategy including rapport building, participant observations, focus groups, in-depth interviews, and digital storytelling. Frame analysis will be used for data analysis and will occur concurrently with data generation. Results: This proposal describes the research being conducted as part of the primary author’s doctoral dissertation.
The doctoral program is funded by the Social Sciences and Humanities Research Council (SSHRC) Doctoral Fellowship (2023–2026), and the data collection component of the study is supported by the International Development Research Centre (IDRC) through the International Doctoral Research Award (2024–2025). The study has received ethics approval from the University of Toronto Research Ethics Board and the institutional ethics committee at the University of Hyderabad, India, in January 2025. As of September 2025, 18 households had been recruited. The final phase of data collection is scheduled for March 2026. Study findings are anticipated to be disseminated and published by September 2027. Conclusions: The novelty of this study is predicated on the use of a multi-method critical ethnographic study design. This study’s findings are expected to 1) highlight the interplay of socio-political, familial, and environmental factors affecting menstrual health and bodily autonomy, and 2) guide government bodies, research institutions, and NGOs in developing context-sensitive policies and programs for menstruators.
Background: Human–autonomy teaming (HAT) has the potential to reshape surgical practice by fostering true partnership between surgeons and intelligent systems. To achieve this, AI must move beyond static scoring to provide adaptive, real-time guidance based on reliable skill assessment. Objective: This scoping review aims to map the current landscape of machine learning (ML) methods for surgical skill assessment and evaluate their technical readiness for integration into functional HAT systems. Methods: Following PRISMA-ScR guidelines, we conducted a systematic search across three major scientific databases (PubMed, IEEE Xplore, and Scopus). We identified and analyzed 92 peer-reviewed studies published between 2019 and 2025. The review focused on data modalities (kinematics, video, biosignals), model architectures, and validation environments. Results: Our analysis of the 92 included studies reveals a dominant shift toward multimodal data integration and deep learning architectures. While high performance is frequently reported on benchmark datasets, significant barriers to HAT integration persist. We identified that a majority of current models lack interpretability and fail to demonstrate generalizability to real-world clinical settings. Furthermore, validation practices remain inconsistent, with limited evidence of adaptivity to individual user needs during live surgical workflows. Conclusions: Current AI techniques provide a robust foundation for objective skill assessment, but they are not yet ready for autonomous teaming. Future development must prioritize model robustness, interpretability, and seamless integration into clinical environments to transition from standalone assessment tools to effective surgical teammates. Clinical Trial: The protocol was registered at OSF Registries [https://doi.org/10.17605/OSF.IO/PQWS5]
Background: Delivering sustained lifestyle interventions for individuals with type 2 diabetes mellitus (T2DM) remains challenging. Digital health interventions may help overcome barriers related to access, scalability, and resource constraints. Objective: This study aimed to evaluate the effectiveness and feasibility of a 16-week national digital lifestyle intervention with health coaching for adults with type 2 diabetes mellitus in Brunei Darussalam, focusing on changes in glycemic control and health-related quality of life. Methods: Participants were recruited through both web-based and offline methods and enrolled in a 16-week digital intervention program that combined online education with offline health coaching. Participants continued their existing medications, without modification. Clinical outcomes (HbA1c, fasting blood glucose, BMI, waist circumference, lipid profile), lifestyle behaviors, and health-related quality of life (QoL) were assessed at baseline and postintervention. Results: A total of 102 of 122 participants (83.6%) completed the intervention. Mean HbA1c significantly decreased by 1.3%, fasting blood glucose by 1.7 mmol/L, BMI by 0.4 kg/m², and waist circumference by 2.0 cm (all P<0.001). Total cholesterol and triglycerides decreased by 0.4 mmol/L and 0.5 mmol/L, respectively (P<0.001). High completion rates and favorable participant feedback indicated strong feasibility and acceptability. Conclusions: This national digital intervention was associated with clinically meaningful improvements in glycemic control and QoL among individuals with T2DM in Brunei Darussalam. These findings support the potential role of scalable digital health interventions in strengthening diabetes care, particularly in resource-limited settings. Clinical Trial: MHREC/MOH/2022/4(1)
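The pre/post comparisons reported above are typically tested with a paired analysis. The sketch below illustrates the mechanics on invented HbA1c values; it is not the trial's data or code, and the abstract does not state which paired test was used.

```python
import math
import statistics

# Illustrative paired pre/post change on hypothetical HbA1c values (%).
def paired_change(pre, post):
    """Mean within-person change and the paired t statistic."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = statistics.mean(diffs)
    t = mean_d / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
    return mean_d, t

pre = [8.0, 9.0, 7.5, 8.5, 9.2]    # hypothetical baseline HbA1c
post = [7.0, 7.5, 6.8, 7.2, 7.9]   # hypothetical post-intervention HbA1c
mean_d, t = paired_change(pre, post)
```

Pairing each participant with themselves removes between-person variability, which is why within-person designs can detect changes of this size with modest samples.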
Background: Chinese-language discussions of complementary and alternative medicine (CAM) on social platforms provide an observable record of how commenters negotiate credibility, risk, and treatment integration in high-stakes cancer contexts. Objective: To identify the dominant information frames through which commenters validate and interpret cancer-related CAM information in Chinese-language YouTube comment discourse. Methods: We analyzed 2,416 publicly available comments from 12 Chinese-language YouTube videos about cancer and CAM (uploaded 2023-2025). After preprocessing, 2,403 comments were modeled using BERTopic with multilingual sentence embeddings (paraphrase-multilingual-MiniLM-L12-v2), UMAP dimensionality reduction, and HDBSCAN clustering. Topics were interpreted through a structured human-in-the-loop protocol, including iterative topic review and intra-coder consistency checks. Results: The initial model produced 152 topics; 30.4% (731/2,403) of comments were assigned to an outlier topic. After topic reduction and exclusion of non-substantive topics (eg, platform interaction, off-topic disputes), 30 topics (1,491 comments) were grouped into four frames: (1) cultural authority and access pathways, (2) experiential solidarity and community validation, (3) evidence negotiation through everyday regimens, and (4) negotiating biomedical risk and treatment integration. Conclusions: Credibility work in Chinese-language cancer CAM comment spaces is organized around culturally embedded validation logics beyond biomedical authority. Frame-aware information support (eg, epistemic metadata to distinguish experiential support from clinical guidance) may help commenters navigate mixed-evidence environments more safely without implying clinical endorsement.
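In BERTopic, HDBSCAN assigns low-density documents to a designated outlier topic (labeled -1), which is what produces the 30.4% outlier share reported above. The figures are internally consistent, as a quick arithmetic check (using only the counts stated in the abstract) confirms:

```python
# Counts reported in the abstract; HDBSCAN's outlier topic (-1) absorbs
# comments that fall outside any dense cluster.
total, outliers, retained = 2403, 731, 1491

outlier_pct = round(100 * outliers / total, 1)    # share sent to topic -1
retained_pct = round(100 * retained / total, 1)   # share kept in the four frames
```

About 62% of modeled comments survive topic reduction into the four frames, so the frame analysis rests on a majority, but not all, of the corpus.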
Background: The growing volume of secure messaging within the patient portal has imposed significant demands on clinicians and contributed to burnout. Little is known about the characteristics of patients who comprise high-volume message senders, and we lack a nuanced understanding of patient messaging intensity beyond measures accounting for sheer volume. Objective: Our objective was to characterize older adult patients (65+) with high secure messaging volume, examining both patient characteristics and other aspects of their messaging intensity such as messaging frequency, length, and messaging use relative to patient portal logins and healthcare encounters. Methods: We analyzed electronic medical record (EMR) and patient portal data from a large academic health system, encompassing 16,023 older adults who sent 199,952 messages over a 12-month period. We developed five measures to account for secure messaging intensity. Our primary measure of messaging intensity was based on message volume; high-volume message senders were identified using outlier analysis based on patients’ aggregate number of messages sent during the observation period. Additional measures of messaging intensity included identifying individuals with concentrated periods of messaging, message length (character count), a ratio of messages to portal logins and a ratio of messages to healthcare encounters. We compared sociodemographic characteristics, health status, and messaging intensity of high-volume secure messaging senders to other message senders. We also identified patients who were classified as high-intensity message senders based on all five measures of messaging intensity (‘super-senders’). Results: Of 16,023 older adult patients who sent at least one message during the observation period, 1,298 (8.1%) were classified as high-volume message senders; these patients accounted for 39.7% of total messages. 
High-volume message senders, compared to all other message senders, were more likely to be White (80.4% vs. 72.5%, p < 0.001), have higher comorbidity scores (2.6 vs. 1.8, p <0.001), and higher incidence of cancer (35.8% vs. 22.8%, p<0.001) and dementia (8.3% vs. 6.1%, p < 0.002). High-volume message senders were also more likely to be identified as having concentrated periods of messaging, to send longer messages, and to send more messages in relation to patient portal logins and healthcare encounters. A small subgroup of patients classified as high-volume senders were also classified as high-intensity across all four of the other measures of messaging intensity (59/1,298; 4.5%), the ‘super senders’. Conclusions: High-volume message senders represent a small but distinct group of older patients who send a disproportionate share of messages to clinicians. Triangulating multiple measures of messaging intensity can help provide additional context about patient messaging behavior and help to identify patients that may most benefit from targeted outreach while potentially easing clinicians' inbox workload.
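One common operationalization of outlier analysis on count data is the Tukey rule (values above Q3 + 1.5 × IQR); the paper's exact rule is not specified here, and the per-patient message counts below are hypothetical.

```python
import statistics

def high_volume_threshold(counts):
    """Upper-outlier cutoff via the Tukey rule: Q3 + 1.5 * IQR.
    (An illustrative rule; the study's actual outlier definition may differ.)"""
    q1, _, q3 = statistics.quantiles(counts, n=4)
    return q3 + 1.5 * (q3 - q1)

counts = [1, 2, 3, 4, 5, 6, 7, 8, 100]   # hypothetical per-patient annual totals
flagged = [c for c in counts if c > high_volume_threshold(counts)]
```

Because message counts are heavily right-skewed, a quartile-based cutoff like this flags the long tail without being dragged upward by it, consistent with a small group (8.1%) accounting for a large message share (39.7%).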
Background: While telehealth has become a transformative tool enhancing healthcare accessibility and efficiency, adoption rates in China remain low. Chinese healthcare professionals’ low telehealth adoption rates are poorly understood. Objective: Our study investigates the key factors influencing Chinese healthcare professionals’ intention to adopt and actual use of telehealth. Based on the results from estimating an integrated telehealth use framework, we also make recommendations for improving healthcare professionals’ telehealth adoption. Methods: Data on 10,372 healthcare professionals from the 2023 Xi’an Healthcare Worker Survey were analyzed, utilizing descriptive statistics (chi-square test, group differences), reliability testing (Cronbach’s α coefficients), discriminant validity analysis (square root of average variance extracted), and fit tests. Based on our integrated telehealth use framework, structural equation modeling was employed to test hypotheses and path relationships, including multi-group analysis to examine demographic moderating effects. Results: Confirming our hypotheses on telehealth intention to use and actual use, the structural equation model showed strong fit indices. Key predictors of behavioral intention to use telehealth included effort expectancy, price value, performance expectancy, and social influence. Behavioral intention and facilitating conditions positively influenced actual use behavior, while demographic characteristics moderated specific relationships. Conclusions: Our study identifies critical factors influencing healthcare professionals’ adoption of telehealth, including performance expectancy, social influence, and facilitating conditions. It offers an integrated framework to assess behavioral intentions and provides practical insights for advancing telehealth implementation in China. Tailored strategies for diverse demographics and institutions are essential for promoting sustainable adoption.
Clinical Trial: This study was reviewed and approved by the Biomedical Ethics Committee of Xi’an Jiaotong University (approval number: XJTUAE2646).
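The reliability testing step above (Cronbach's α per construct) reduces to a simple formula over item variances. The sketch uses hypothetical Likert responses for an unnamed construct, not the survey's data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    one inner list per item, respondents in the same order."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]          # per-respondent totals
    item_var = sum(statistics.variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Hypothetical 5-point Likert responses, three items, five respondents:
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 3, 4, 1]]
alpha = cronbach_alpha(items)
```

When items move together, total-score variance exceeds the sum of item variances and α approaches 1; values ≥0.7 are the conventional reliability benchmark.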
Background: Prostate cancer treatment increasingly emphasizes quality-of-life maintenance alongside oncologic control. Integrative Traditional East Asian medicine (TEAM), including Traditional Chinese Medicine, Traditional Korean Medicine, and Kampo medicine, has been used as an adjunctive approach for symptom management during cancer treatment. However, evidence regarding its effectiveness and safety across different disease stages remains heterogeneous and has not been comprehensively synthesized. Objective: This systematic review and meta-analysis aims to evaluate the clinical effectiveness, safety, and quality-of-life benefits of TEAM-based adjunctive therapies combined with standard conventional treatments in patients with prostate cancer. Methods: Randomized controlled trials will be identified through comprehensive multilingual searches of PubMed, EMBASE, CNKI, OASIS, RISS, and other databases from inception to January 2026. Two reviewers will independently screen studies, extract data, and assess risk of bias using the Cochrane Risk of Bias tool. Primary outcomes will include changes in prostate-specific antigen levels, symptom improvement, and survival outcomes. Subgroup analyses will be conducted according to disease stage (hormone-sensitive, castration-resistant, metastatic), treatment phase, and intervention type. Meta-analyses will be performed using RevMan 5.4, and the certainty of evidence will be assessed with the GRADE approach. Results: The study selection process will be illustrated using a PRISMA flow diagram. Where appropriate, pooled effect estimates will be presented using forest plots. Conclusions: This review will provide comprehensive evidence on the role of TEAM-based adjunctive therapies in prostate cancer care across disease stages and will inform the development of integrative oncology guidelines. Clinical Trial: PROSPERO CRD420251275137
Background: STEM fields drive socio-economic development; however, engineering disciplines continue to struggle with low enrollment and high dropout rates despite strong career prospects. An important predictor of academic performance and career persistence is self-efficacy, the belief in one’s ability to succeed. Enhancing self-efficacy through collaborative playful learning experiences may help mitigate these challenges. Objective: This study explores a collaborative game design intended to enhance self-efficacy regarding computational thinking skills among young students. Methods: Across five gameplay rehearsals, we qualitatively analysed session recordings and interviews. Results: We found evidence of how cooperative gameplay enables verbal persuasion, vicarious learning, and mutual problem-solving, which are known influential factors in self-efficacy development. From this evidence, we also contribute novel design lessons for the research community to explore collaborative environments in game design, such as 1) shared decision-making mechanics, 2) collaborative problem-solving through coordinated, cooperative and co-constructed tasks, 3) resource-sharing, and 4) a common goal with shared rewards. Conclusions: We expect this study to advance the exploration of cooperative gameplay as a way to improve self-efficacy and support a gender-balanced STEM environment.
Background: Malnutrition is a multifactorial and chronic condition, frequently developing gradually due to a combination of biological, functional, and psychosocial factors. Objective: This study investigates the impact of general function, oral health, and nutritional factors on the time to onset of malnutrition. Methods: This is a longitudinal study utilizing interRAI data from nursing home residents for the period 2020-2025. The interRAI instruments are standardized, internationally validated, and electronically supported assessment tools that facilitate real-time data capture, analysis, and clinical decision support. Cox proportional hazards models were employed to estimate the impact of several indicators on malnutrition. Results: Baseline assessments from 1,633 residents (mean age 85.68±7.82, 65.22% female) were split into assessments with malnutrition at baseline (154 residents, 9.43%) or not (1,479, 90.57%). The samples differed significantly, with higher proportions in the sub-sample with malnutrition at baseline: depressive symptoms (53% vs. 34.2%, p<0.001), modified mode of nutrition (26% vs. 11.6%, p<0.001), loss of appetite (22.5% vs. 7.5%, p<0.001), chewing difficulties (17.5% vs. 7.9%, p<0.001), and dry mouth (13.7% vs. 8.0%, p<0.001). Survival analysis revealed significant results for loss of appetite (HR 1.88, 95% CI 1.24-2.83), chewing difficulty (HR 1.80, 95% CI 1.12-2.90), and cognitive impairment (HR 1.67, 95% CI 1.13-2.46). Residents with an adapted mode of nutrition also had a shorter mean time to malnutrition, although this factor was not significant in the Cox model. Conclusions: Survival analysis has not been applied to the study of malnutrition in older persons, although malnutrition often appears in survival models for hospitalization or morbidity.
This study highlighted the critical role of modifiable risk factors, such as loss of appetite, chewing difficulties, and mode of nutritional intake, in accelerating the progression toward malnutrition among older adults in nursing homes. As these factors are modifiable, timely screening using the electronic interRAI tools may support early identification of residents at risk of malnutrition and enable prevention or treatment. Clinical Trial: not applicable
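A Cox model's hazard ratios can be sanity-checked directly from the reported 95% confidence intervals, since the CI is symmetric on the log scale: se(log HR) = (ln upper − ln lower) / (2 × 1.96). Applying this to the loss-of-appetite estimate quoted above:

```python
import math

def z_from_hr_ci(hr, lower, upper):
    """Back out the Wald z statistic from a hazard ratio and its 95% CI,
    using the log-scale symmetry of the interval."""
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
    return math.log(hr) / se

# Loss of appetite, as reported in the abstract: HR 1.88 (95% CI 1.24-2.83)
z = z_from_hr_ci(1.88, 1.24, 2.83)   # well above 1.96, consistent with significance
```

The recovered z of about 3 corresponds to p ≈ 0.003, confirming that the reported interval and the claim of significance are mutually consistent.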
Background: Nowadays, Artificial Intelligence (AI) tools, such as ChatGPT, are increasingly used to provide health-related information. However, the accuracy of this information in dermatology, particularly regarding sun protection and skin cancer prevention, has not been assessed. Objective: This study aimed to evaluate the quality of ChatGPT-generated responses to common questions related to sun protection and skin cancer prevention by comparing them with guidelines from the American Academy of Dermatology (AAD). Methods: A set of nine commonly asked questions on sun protection and skin cancer prevention was submitted to ChatGPT. Each response was evaluated across four key domains: accuracy, completeness, clarity, and relevance. Scoring was based on alignment with AAD recommendations and assessed independently. Results: ChatGPT responses were accurate, clear, and relevant. Most answers closely matched the AAD’s guidance, although a few responses showed slight omissions concerning specific contextual details. Conclusions: While not a replacement for professional healthcare, ChatGPT provides valid and accessible information on skin cancer prevention. With regular and strong evaluation, its role in AI-based dermatological tools may become significant in supporting public health education. Clinical Trial: not applicable
Background: Healthcare systems face rising demand and persistent staff shortages, intensifying pressure on the quality and sustainability of care. Artificial intelligence (AI) is increasingly introduced to improve efficiency and decision support across clinical domains. While these tools promise operational gains, they can also reconfigure how physicians work, make judgments, and interact with patients, all elements of physicians’ craftsmanship. However, most research emphasises technical performance rather than AI’s broader implications for physicians’ craftsmanship. Objective: To explore how physicians define craftsmanship in medicine and how they perceive AI to influence its professional and personal dimensions, with the goal of deriving practical principles for responsible AI design and implementation in hospital care. Methods: We conducted a qualitative, exploratory study in two phases (December 2024–September 2025). Phase 1 involved semi-structured interviews with 20 physicians from different hospital types and diverse specialties within the Netherlands. Phase 2 comprised two focus groups with physicians, medical specialists in training, policymakers, and AI developers during a national symposium, using an interactive, persona-based design to co-create practical design principles. Results: Physicians described craftsmanship as their commitment to deliver the best possible care through human judgment, empathy, and contextual understanding. Perceived AI effects clustered in two areas:
1. Professional dimensions: AI was seen to support workflow efficiency, documentation, data integration, and aspects of analytical reasoning, potentially freeing time for patient contact and reflection. Conditions for adoption included human-in-the-loop oversight, explainability, traceability, and AI literacy.
2. Personal dimensions: empathy, contextual interpretation, and ethical judgment were viewed as inherently human and resistant to substitution. Concerns centred on de-skilling, less room for independent judgment, and threats to professional autonomy. Specialties differed in how they framed tasks and AI’s role, reflecting their specific clinical contexts, but all shared the same core aim of delivering high-quality care. Focus groups yielded six main design principles: start from real clinical needs; design AI as supportive, not intrusive; safeguard autonomy and trust; design for contextual diversity; strengthen the role of professional groups; and use AI as a mirror for reflection and learning. Conclusions: AI seems to affect the conditions of professional craftsmanship, and thereby indirectly its personal dimensions. This should be considered in design and implementation, while recognising that continued interaction with AI may gradually reshape what craftsmanship itself comes to mean.
Background: Despite the many negative health outcomes associated with unhealthy screen media use, patient counseling by pediatricians in the primary care setting remains a challenge. The American Academy of Pediatrics (AAP) developed the Family Media Plan, but its online format may not lend itself to use in the time-constrained clinical setting. Objective: This exploratory study describes parent perceptions toward screen media use and family-perceived utility of the Healthy Media Use Contract (HMUC), a simplified, 1-page print version of the AAP HealthyChildren.org Family Media Plan for use in the clinical setting. Methods: A qualitative phenomenological approach was used to explore family experience with the HMUC. Families of children ages 6-13 years scheduled for primary care appointments were identified and consented in clinic prior to their appointment. During their appointment, their physician provided the family with the HMUC alongside standard screen media use counseling. Families were encouraged to post the HMUC in a common area within the home (e.g., on the refrigerator). Approximately one month after their clinic appointment, participating families completed a semi-structured interview via phone or Zoom to describe their experience using the HMUC. Families were provided a $25 gift card as a thank-you for participating. Interviews were transcribed and analyzed for themes. Results: Eight semi-structured, qualitative interviews were completed (English: 6, Spanish: 2). Saturation was assessed through ongoing, concurrent data analysis, and the point at which no new codes or themes emerged was confirmed through consensus among all team members. 
Qualitative coding and thematic analysis revealed the following screen time-related themes: 1) Parents identify both the benefits and drawbacks of screen media use, a sentiment we have labeled “nuanced neutrality”; 2) A lack of viable alternative activities is a major driver of screen media use; and 3) A lack of predictable family routines was a major factor for families who did not use the contract. Additionally, the following HMUC-related themes emerged: 1) The HMUC increased awareness of family screen time, 2) Delivery of screen media guidance was viewed favorably by families, 3) Rewards were valuable in prompting behavior change, and 4) The use of a contract promoted commitment. Conclusions: This study illustrates the nuanced perspectives contemporary parents hold regarding their children’s screen media use. Further, it delineates the specific attributes of the HMUC and its implementation that are perceived by families as effective.
Background: Many users have now switched to using short video platforms as the main channel in the search for skin health information. With high internet penetration and the large market for the skincare industry, short video platforms play an important role in the "beauty discovery" process and purchase decisions. However, the increasing consumption of skin health content is also accompanied by the risk of misinformation, uneven content quality, and the dominance of creators who are not health professionals. Therefore, it is important to know which factors affect the use of short video platforms in the search for skin health information. Objective: By adopting the health belief model and media richness theory, this study aims to analyze the factors that influence the use of short video platforms in the search for skin health information. Methods: This study used a mixed-methods design, distributing an online survey to 603 respondents and conducting interviews with 30 informants. Survey data were analyzed using covariance-based structural equation modeling, and qualitative data were analyzed using thematic analysis. Results: The results of this study found that perceived usefulness, attitude, perceived severity, and perceived susceptibility have a direct influence on the behavior of seeking skin health information on short video platforms. This study also found that health content expressiveness and personalized health insights have a direct influence on perceived usefulness. In addition, upward skin comparison also has a direct influence on skin stigmatization. However, this study found no effect of source credibility or skin stigmatization on skin health information seeking on short video platforms. Conclusions: This study can provide guidance for the development of more effective health communication strategies in the digital era by using short video platforms.
Background: Psychological stress is known to exacerbate dermatologic conditions such as acne, eczema, and compulsive skin behaviors. The COVID-19 pandemic introduced a global stressor with widely experienced psychosocial effects and potential impacts on skin health. This study analyzes U.S. public interest in stress-induced dermatologic conditions and psychodermatologic disorders with relevance to clinical dermatology practice during pre-pandemic, pandemic, and post-pandemic periods. Objective: To evaluate longitudinal trends in U.S. public interest in stress-induced dermatologic and psychodermatologic conditions before, during, and after the COVID-19 pandemic using Google Trends data. Methods: Relative Google search volume (RSV) was used as a proxy for public interest, given the search engine’s 5 trillion annual searches. RSV for the terms "skin picking", "trichotillomania", "rash", "eczema", "dermatillomania", and "anxiety skin picking" from 2018 to 2024 was obtained through the Google Trends database. Monthly search interest was normalized and averaged across pre-pandemic (2018-2019), pandemic (2020-2022), and post-pandemic (2023-2024) time periods. Results: Search interest increased for “skin picking” and “eczema” from 2018 to 2024. “Trichotillomania” RSV increased at least 33.33% from January 2020 to March 2021 relative to 2018-2019. “Dermatillomania” RSV increased 156.41% from April to May 2021. “Anxiety skin picking” RSV increased during the pandemic (2020-2022), with the largest average month-over-month change of 4.99% compared to other search terms. No consistent trend was observed for “rash.” Conclusions: Public interest in stress-influenced dermatologic conditions increased during the COVID-19 pandemic, suggesting heightened interest in or prevalence of psychodermatologic issues during prolonged stress.
These findings highlight opportunities for dermatologists to integrate mental health screening and psychodermatologic considerations into routine clinical care, particularly during periods of widespread stress.
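The RSV comparisons above (e.g., a 156.41% increase, or the average month-over-month change) reduce to simple percent-change arithmetic over the normalized monthly series. A minimal sketch, assuming the usual percent-change definition; the sample values are illustrative, not Google Trends data:

```python
def pct_change(old, new):
    """Percent change between two relative-search-volume (RSV) values."""
    return (new - old) / old * 100

def month_over_month(series):
    """Month-over-month percent changes across a list of monthly RSV values."""
    return [pct_change(a, b) for a, b in zip(series, series[1:])]

# Illustrative: an RSV jump from 39 to 100 is a ~156.41% increase
jump = pct_change(39, 100)
```

Averaging the `month_over_month` list over a period gives the kind of "average month-over-month change" figure the abstract reports.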
Background: Harnessing longitudinal data for time-to-event analysis can provide valuable insights into disease progression and help plan clinical interventions for individual patients, with the goal of improving clinical outcomes and quality of life. However, real-world clinical data are characterised by missingness, inconsistencies and heterogeneity, especially when datasets are aggregated from different sources. Objective: To address the challenges of missingness, inconsistency, and heterogeneity in multi-source data on degenerative disease, we propose a framework for explaining time-to-event predictions using multivariate longitudinal trajectories, applied to time-to-gastrostomy in patients with Amyotrophic Lateral Sclerosis (ALS). Methods: We analysed data from 8,586 ALS patients using a two-stage analytical approach. Joint latent class growth discrete-time survival analysis was used to identify data-driven reference trajectories of functional decline. New patients' markers were mapped to these clusters using Fréchet distances. Three survival models (Cox PH, Cox XGBoost, and XGBoost pseudo-observation regression) were fitted to predict time-to-gastrostomy from baseline demographics and functional-decline features. Results: Distinct classes of functional decline revealed that rapid deterioration in bulbar and swallowing functions is the most critical indicator for intervention, reaching a 50% probability of gastrostomy within 16 to 18 months. Bulbar and swallow onset slopes were the primary predictors of time-to-gastrostomy. Predictive models utilizing the early "onset slopes" of functional decline outperformed those using baseline demographics alone, yielding a 0.044-0.069 increase in concordance index and decreasing median absolute error by 60-157 days compared to relying on diagnostic delay.
The XGBoost pseudo-observation (MAEPO) regression model utilizing onset slope was the best-performing model overall, achieving a concordance index of 0.731 (IQR, 0.717-0.744) and a median absolute error of 218 days (IQR, 204-232). Additionally, all evaluated models comfortably outperformed a naïve predictor based on a 10% weight-loss threshold. Conclusions: Our framework addresses clinical data heterogeneity through principled feature extraction and unsupervised trajectory mapping, translating individual predictions into interpretable clinical narratives that support timely gastrostomy decisions and, more generally, time-to-intervention in degenerative diseases.
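Mapping a new patient's functional-decline markers to the nearest reference cluster via Fréchet distance, as described above, can be sketched with the discrete Fréchet distance (the Eiter-Mannila dynamic program). This is a minimal illustration of the distance itself, not the authors' implementation, and the `nearest_cluster` helper is a hypothetical name:

```python
from math import dist

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polygonal curves p and q
    (sequences of (time, score) points), via dynamic programming."""
    n, m = len(p), len(q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                # Best of advancing along p, along q, or both, then the leash must
                # still reach the current pair of points.
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]

def nearest_cluster(trajectory, references):
    """Index of the reference trajectory closest to a patient's markers."""
    return min(range(len(references)),
               key=lambda k: discrete_frechet(trajectory, references[k]))
```

A patient whose curve runs nearly flat would thus map to a flat reference trajectory rather than a rapidly declining one.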
Academic–public health partnerships are essential for strengthening outbreak preparedness and response, yet translating modeling tools into routine public health practice remains challenging. Structural, technical, and workforce constraints often limit the capacity to deploy modeling tools during emergencies. Here, we describe the development of a software ecosystem designed to support real-time infectious disease response through sustained collaboration between an academic research team and public health agencies, with particular focus on the recent measles outbreak.
The resulting software enabled real-time scenario modeling, visualization of transmission dynamics, and iterative updates as new data became available. Beyond immediate outbreak response, the initiative strengthened cross-sector collaboration, expanded modeling capacity, and highlighted ongoing gaps in technical infrastructure and workforce readiness at the state and local levels.
This case study demonstrates how sustained academic–government partnerships combined with streamlined development practices can accelerate the translation of modeling tools into operational public health settings. Establishing and maintaining analytic infrastructure and agile processes between emergencies may be critical for ensuring timely, data-informed decision-making during future outbreaks.
Background: Dementia is a progressive, life-limiting condition in which care needs evolve from diagnosis through end of life, yet advance care planning (ACP) is often approached as a narrow, document-focused end-of-life task. Digital ACP tools could help integrate ACP with earlier palliative care principles, but priorities for a comprehensive dementia-specific tool remain insufficiently defined. Objective: To identify and prioritize stakeholder-defined components and digital features for a comprehensive digital ACP (dACP) tool for dementia that integrates palliative care principles across the disease trajectory. Methods: We conducted an online survey with people with dementia, informal caregivers, and health care professionals recruited via Prolific in January–February 2025. Participants rated the importance of dementia-tailored ACP elements, caregiver-support features, preferred frequency of care-plan review prompts, and the importance of integration with existing health care systems; open-ended responses were analysed using thematic analysis supported by a large language model and refined by researchers. Principal component analysis (PCA) was used to derive core domains, and group differences and correlations among central variables were assessed. Results: The final sample included 232 participants. PCA identified two interrelated ACP domains (Comprehensive Care Planning and End-of-Life Preparation, and Active Disease Symptom Management) and a caregiver domain capturing Caregiver Stress Management. Across items, strategies for maintaining quality of life as dementia progresses received the highest ratings. Tips for managing challenging behaviours were the highest-rated caregiver-support feature. Participants also rated integration with existing health care systems as highly important and were in favour of context-aware regular plan review.
Qualitatively, decision-making support and guidance, communication/information sharing and documentation/record-keeping were the most frequently cited primary purposes of a dACP tool across all groups. Conclusions: Stakeholders prioritized a dACP tool that supports quality of life, ongoing symptom management, and actionable caregiver support, with strong interoperability to enable clinical use across care settings. These results provide practical design targets for developing an integrated, adaptive tool to support shared decision-making and continuity of person-centred early palliative care in dementia.
Background: User-centered design (UCD) processes often generate extensive lists of potential software features, necessitating effective prioritization methods to guide development. The MoSCoW method, categorizing features as Must Have, Should Have, Could Have, or Won’t Have, is widely used in software development for its simplicity and ease of adoption. Despite its popularity, limited evidence exists on its application within health informatics. The DEAN (Decision Aid Navigator) system, a clinical decision support tool, required prioritization of features for its administrative portal to ensure usability and alignment with health IT expert needs. Objective: To evaluate how health IT experts engaged with a MoSCoW-based prioritization process during the design of the DEAN Administrative Portal and to assess how discussion and interaction influenced feature prioritization outcomes. Methods: A 74-minute web-based group session included four health IT experts experienced in managing clinical decision support systems. Participants independently rated 30 proposed portal features using MoSCoW categories following brief discussions of each feature. Session transcripts, videos, and rating behaviors were analyzed using qualitative secondary analysis to identify themes describing how the MoSCoW process supported prioritization and expert engagement. Results: Participants rated 37% of features as Must Have, 13% as Should Have, 20% as Could Have, and none as Won’t Have. The remaining 30% showed distributed ratings without a clear majority. All participants adjusted their ratings, often prompted by group discussion or clarification from the technical lead. Qualitative analysis revealed four themes: (1) Interpretations of feature descriptions may vary; (2) Group discussion generated suggestions for alternative design solutions; (3) Group discussion identified implementation considerations; and (4) Participants considered differences in real-world use vs.
idealized use when ranking features. Conclusions: The MoSCoW method proved to be an efficient, intuitive, and engaging approach for prioritizing features in a UCD process for a clinical decision support system. Group discussion enriched the prioritization session by surfacing implementation insights and design refinements beyond numeric rankings alone. Findings support the use of MoSCoW as a practical tool for health informatics teams seeking structured yet flexible stakeholder engagement in software feature prioritization.
Background: Adolescence is a critical period in the development of mental health problems. Emotion regulation (ER) is a transdiagnostic mechanism implicated across diverse mental health problems, and represents a promising target for early, scalable intervention. Self-directed digital cognitive behavioural therapy (CBT) interventions have the potential to extend access to mental health promotion and support; however, evidence regarding their acceptability, engagement and use among adolescents in real-world, non-clinical contexts remains limited. Objective: This study aimed to explore the acceptability, uptake and engagement of a self-directed digital CBT app targeting emotion regulation (MoodMission) among UK adolescents, and to identify subsequent early signals of change in both emotion regulation and mental health outcomes. Secondary objectives included assessing the feasibility of evaluating this type of intervention within a school setting. Methods: A convergent mixed-methods pre-post cohort design was employed. Adolescents were recruited from one secondary school in England and offered access to the MoodMission app for six weeks. Quantitative data included app uptake, in-app engagement metrics, study retention, and changes in emotion regulation and mental health, measured using the Difficulties in Emotion Regulation Scale (DERS), the Emotion Regulation Questionnaire (ERQ), and the Depression, Anxiety and Stress Scale (DASS). Generalised eta squared (η²G) effect sizes were calculated to explore the magnitude of change. Qualitative data were collected via semi-structured focus groups with adolescents who did and did not engage with the app and were analysed using inductive thematic analysis to capture experiences, perceived acceptability, and barriers to engagement. Results: Of 43 adolescents completing baseline measures, 11 (25.6%) downloaded the app and 8 (18.6%) completed at least one in-app activity.
Participants spent a mean of 15.34 seconds (SD 10.99) per activity and reported moderate perceived usefulness (mean 6.99/10), with emotion- and behaviour-based activities rated as most helpful. Attrition was at expected levels for self-directed digital interventions, with 7 participants retained at the 6-week follow-up (overall attrition rate 83.7%). Qualitative findings highlighted four key themes: a preference for human and relational support over digital tools, difficulty engaging with the app during periods of high emotional intensity, the importance of personalisation and inclusivity, and the need for emotional clarity to use self-directed interventions effectively. Conclusions: Findings underscore the importance of co-design, personalisation, and integration of human support when developing digital mental health interventions for adolescents. Given increased messaging about the lack of safety in digital media and the growing bans on adolescent media use, future research should explore blended models of mental health promotion co-designed with adolescents, combining brief digital tools with face-to-face support from trusted adults or peers, which may be more appropriate and acceptable.
Background: Extensive documentation requirements, time constraints and the increasing burden of an aging population place growing demands on the healthcare sector. Ambient scribe technology offers a potential time-optimizing solution by transcribing spoken conversations into draft clinical notes in real time. Objective: To assess the clinical utility of an ambient scribe tool in an outpatient physiotherapy department, with a focus on time saving, documentation quality and employee satisfaction. Methods: Outpatient consultations were simulated to compare the ambient scribe tool against regular documentation practices. A head-to-head comparison was used for time difference. Quality was evaluated with the modified Physician Documentation Quality Index-9 (PDQI-9). Employee satisfaction was measured using the Technology Acceptance Model. Results: Across 13 cases, documentation time was reduced by an average of 35%. Documentation quality was rated 31.15 out of 35 points. Physiotherapists acknowledged the ambient scribe’s potential usefulness. Conclusions: The ambient scribe significantly reduced documentation time and attained good documentation quality. However, there were some inconsistencies in employee satisfaction.
Background: Type 2 Diabetes Mellitus (T2DM) affects approximately 590 million people worldwide, and its management relies heavily on patient education. With the emergence of online health information and Artificial Intelligence Large Language Models (AI LLMs), patients are increasingly sourcing medical information independently. Objective: This study compares the quality, readability and reliability of traditional online resources and AI-generated information related to T2DM. Methods: Four predefined search terms were entered into three major search engines (Google, Yahoo, and Bing), and the top 20 search results were retrieved. AI-generated patient information leaflets (AIGLs) were produced using a standardised prompt across four AI LLMs (ChatGPT, Gemini, DeepSeek, and Grok). Information quality was assessed using the DISCERN score, calculated independently by the author and by ChatGPT. The JAMA benchmark was used to measure reliability and transparency. The Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) were used to determine readability and comprehension. Results: Eighty websites and four AIGLs were analysed, with author-rated mean DISCERN scores of 43.6 (±10.9) and 43.8 (±2.99), mean JAMA Benchmark scores of 2.74 (±0.97) and 0, mean FRES of 50.6 (±14.4) and 48.9 (±9.16), and mean FKGL scores of 8.66 (±2.23) and 8.3 (±1.92), respectively. The ChatGPT-rated mean DISCERN scores for websites and AIGLs were 58.5 (±11.5) and 61.0 (±2.94), respectively. Conclusions: Given the high prevalence of T2DM, both traditional online and AI-generated T2DM resources demonstrate suboptimal quality, accessibility, and transparency. Increasing patient reliance on digital health information calls for improved readability standards and stronger safeguards for AI-generated content.
The landscape of medical consultations is evolving, with patients increasingly presenting with preconceived notions based on online health information; hence, healthcare professionals should adapt to this shift.
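The FRES and FKGL scores reported above follow the standard Flesch formulas over word, sentence, and syllable counts. A minimal sketch of both formulas; the counts in the usage note are illustrative, not taken from the analysed websites:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease Score: higher values mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate U.S. school grade needed."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For example, a 100-word sample with 10 sentences and 150 syllables scores roughly a 6th-grade FKGL, below the roughly 8th-grade level the study observed for T2DM resources.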
Background: Breast cancer-related lymphedema (BCRL) is a chronic condition requiring lifelong self-management. Patients often face barriers such as limited physical function, time constraints, low self-efficacy, and inconsistent information. Objective limb swelling assessment is critical for effective self-management; however, most patients lack reliable home-monitoring tools that integrate measurements with evidence-based feedback to drive behavioral change. Objective: We evaluated the feasibility, usability, and preliminary clinical efficacy of an integrated digital health intervention—a mobile app with evidence-based coaching features and a smart tape measure—designed to support objective self-monitoring and behavioral changes in patients with BCRL. Methods: A 3-month multicenter single-arm prospective study enrolled 58 female patients with BCRL. Participants used the "Second Doctor" app and smart tape measure for weekly arm volume monitoring. The application provides real-time visualization of the percentage of excess volume, self-management feedback, and coaching recommendations derived from clinical guidelines. Measurement validity was determined by correlating the self-measured volumes with Perometer (optoelectronic perometry) measurements. Feasibility was assessed based on retention and adherence rates. Usability was evaluated using a System Usability Scale (SUS) and a technology-acceptance model-based questionnaire. Clinical outcomes included limb volume changes, quality of life (LYMQOL), International Classification of Functioning, Disability, and Health (ICF) domains, and Transtheoretical Model (TTM) stages of self-management behavior. Results: The smart tape measurement demonstrated a strong correlation with the Perometer measurements (forearm r = 0.662, P < 0.001; whole arm r = 0.767, P < 0.001), validating its accuracy for home monitoring. 
Fifty-two participants completed the study (89.6% retention), with high adherence averaging 5.0 measurements monthly. Usability was good (SUS mean: 68.94, SD: 12.08), with high satisfaction scores (mean ≥ 4.0 on a 1–6 scale) across usefulness, ease of use, attitude, motivation, and recommendation willingness. Participants reported significant advancement in TTM stages (mean difference = −0.35, P = 0.02) and improvements in LYMQOL appearance (P = 0.04) and overall QoL (P = 0.03), alongside ICF improvements in body image (P = 0.03), physical endurance (P = 0.01), and muscle power (P = 0.02). The overall limb volume remained stable; however, subgroup analysis revealed that participants advancing in TTM stages (n = 13, 25%) maintained a stable volume, whereas those without behavioral progression (n = 39, 75%) showed significant increases (P = 0.03), demonstrating a direct link between behavioral engagement and clinical outcomes. Conclusions: An evidence-based coaching application integrated with validated self-monitoring is feasible, acceptable, and clinically meaningful for the self-management of BCRL. The intervention achieved consistent engagement, positive usability, and behavioral improvements, translating to volume stability among the engaged patients. This system enables patient-driven disease management and offers a scalable solution for lymphedema care by closing the loop between objective measurements, real-time feedback, and evidence-based guidance. Clinical Trial: This trial was registered at ClinicalTrials.gov (identifier: NCT06922513; initial release: March 19, 2025)
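The "percentage of excess volume" the app visualizes is conventionally the affected-limb volume expressed relative to the unaffected limb. A minimal sketch of that calculation, assuming the standard lymphedema-monitoring definition (the numeric values are illustrative):

```python
def percent_excess_volume(affected_ml, unaffected_ml):
    """Percentage of excess volume of the affected limb relative to the
    unaffected limb, as commonly used in lymphedema monitoring."""
    return (affected_ml - unaffected_ml) / unaffected_ml * 100
```

Weekly self-measurements fed through this calculation give the longitudinal excess-volume trend against which behavioral progression (e.g., TTM stage advancement) was compared.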
Background: Suboptimal transition of care after acute coronary syndrome (ACS) contributes to the persistently high risk of adverse cardiovascular events following hospitalization. Although mobile health interventions are growing rapidly, whether a mobile text messaging intervention can improve patient follow-up, other care processes, or outcomes after ACS has not been evaluated. Objective: To evaluate whether a hospital-initiated mobile health intervention for patients with acute coronary syndrome improves post-discharge care processes and clinical outcomes. Methods: We conducted the TExting after Acute coronary syndrome discHarge (TEACH) pilot study, a single-centre, double-blind, randomized controlled trial. Patients hospitalized for ACS were randomized to receive either 12 weeks of motivational text messages or usual care alone. The primary outcome was an outpatient family physician or cardiologist visit at 1 month, 3 months, and 1 year after discharge. Secondary outcomes included emergency department visits, all-cause rehospitalization, all-cause mortality, medication adherence, and cardiac testing. Results: The study cohort included 228 patients enrolled within 12 months, with 113 assigned to the texting group and 115 to the control group. The mean age was 61.5 years, 78.5% were men, and 53.9% were White patients. At 1 month, 75.2% of patients in the intervention group and 83.5% in the usual care group had a family physician follow-up (p=0.123). Cardiologist follow-up rates at 1 month were also similar between groups, with 74.3% in the intervention group and 73.0% in the usual care group (p=0.825). There were no significant differences in physician follow-up at 3 months and 1 year. Secondary outcomes, including hospitalization, emergency department use, diagnostic testing, and medication adherence, were also not significantly different between the two groups. 
Conclusions: In this pilot randomized controlled trial, we demonstrated the ability to enroll ACS patients in a 12-week mobile motivational text messaging intervention. The texting intervention did not significantly improve rates of physician follow-up, medication adherence, or clinical outcomes in ACS patients. Clinical Trial: ClinicalTrials.gov ID NCT05628337
Background: Depression poses a global health challenge, and its early detection is critical for effective interventions. Recent studies reveal associations between digital traces of social behavior (e.g., phone calls) and depression, but rely on cross-sectional analyses, limiting insight into how these relationships evolve over time and obscuring the directionality between social behavior and mental health. Objective: This study investigates the longitudinal and directional relationships between digitally mediated social capital and depressive symptoms, leveraging phone call data to develop a data-driven framework for understanding depression over time. Methods: Eight weeks of data from 216 participants were analyzed using a dual-structural equation modeling (SEM) approach, including Latent Growth Curve Modeling (LGCM) and Cross-Lagged Panel Modeling. Digital social capital was operationalized through behavioral proxies capturing accessed social capital (e.g., incoming or outgoing calls) and latent social capital (e.g., missed phone calls), reflecting distinct mechanisms of social capital that are available and accessed online. Meanwhile, depressive symptoms were assessed using the Patient Health Questionnaire-4 (PHQ-4). Results: Latent growth analyses revealed that depressive symptoms were significantly associated with divergent trajectories of digital social capital. Higher baseline levels of depression were linked to changes in accessed social capital and declines in latent social capital. Growth in accessed social capital and depression were directly linked, indicating that increased levels of depression could amplify how often latent social capital becomes accessed. 
Cross-lagged panel analyses further corroborated these findings by showing that depressive symptoms at the beginning of the study were related to subsequent reductions in latent social capital, whereas prior levels of latent social capital did not significantly predict later depression. Conclusions: These findings advance the clinical understanding of depression by revealing that psychological health could actively influence patterns of social engagement. They also suggest that changes in social behavior could reflect changes in depressive symptoms. This study highlights the importance of longitudinal, data-driven approaches for interpreting digital social traces and underscores their potential for informing mental health scholarship and intervention strategies.
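A cross-lagged panel model of this kind regresses each wave-2 variable on both wave-1 variables, so the cross-lagged path is a partial regression coefficient. The stdlib-only sketch below illustrates the idea with made-up two-wave data; variable names and values are hypothetical, and the study itself used full structural equation modeling rather than two-predictor OLS:

```python
def ols2(x1, x2, y):
    """OLS coefficients (b1, b2) for mean-centered y ~ b1*x1 + b2*x2."""
    n = len(y)
    def center(v):
        m = sum(v) / n
        return [vi - m for vi in v]
    x1, x2, y = center(x1), center(x2), center(y)
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12          # 2x2 normal-equation determinant
    return ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)

# Hypothetical wave-1 / wave-2 scores for 8 participants
phq_w1 = [2, 5, 1, 7, 3, 6, 2, 4]        # depressive symptoms, wave 1
missed_w1 = [3, 2, 4, 1, 3, 1, 4, 2]     # latent social capital proxy, wave 1
missed_w2 = [3, 1, 4, 0, 2, 1, 4, 2]     # latent social capital proxy, wave 2

# Cross-lagged path: does wave-1 depression predict wave-2 latent social
# capital after controlling for its own wave-1 level (the stability path)?
cross_lag, stability = ols2(phq_w1, missed_w1, missed_w2)
```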
Background: Approach Bias Modification (ApBM) aims to target maladaptive approach tendencies toward substance-related cues and has increasingly been examined as an adjunctive intervention for substance use disorders, including nicotine use. Participants’ subjective experiences of ApBM are likely to influence both its effectiveness and successful implementation, yet systematic investigations remain scarce. Objective: The present study explores subjective training experiences reported by participants from two previously conducted randomized controlled trials (RCTs) evaluating virtual reality–based or smartphone app–based ApBM for smoking cessation (versus sham training), delivered over a 14-day training interval. Methods: Participants were invited to provide open-ended feedback immediately following the intervention and up to seven weeks after study entry (proximate feedback), as well as again four years later (long-term feedback). Thematic analysis was used to identify core themes related to the training experience. Results: In total, 104 of the 178 participants included in the RCTs provided feedback at least once (ApBM: n = 54; sham: n = 50). Four overarching themes were identified, with the fourth emerging only in long-term reflections: (a) perceived treatment effects, spanning beneficial changes and a perceived lack of effects; (b) mechanisms of action, including presumed working mechanisms and impeding factors; (c) feedback on the training and study experience; and (d) attribution of effects to training-specific or external factors. Exploratory frequency analyses indicated more favorable experiences in the ApBM group, whereas participants in the sham condition more often reported impeding factors, particularly difficulties understanding the training rationale. Similarly, descriptive quantitative findings suggested more positive training evaluations and smoking-related outcomes in the ApBM group. 
Conclusions: Overall, this study provides important insights into how ApBM for smoking is experienced both shortly after training and years later, underscoring the importance of incorporating participants’ perspectives into the development and evaluation of ApBM interventions.
Background: Reports of psychotic symptoms emerging or worsening in the context of sustained interaction with artificial intelligence (AI) chatbots have prompted the rapid adoption of the term “AI psychosis” across clinical practice, media discourse, legislation, and litigation. Despite this activity, no published work has systematically examined how the concept is defined, whether competing definitions converge, or whether existing framings are sufficient to support clinical screening, research classification, or regulatory action. Objective: This study aimed to map the definitional landscape of AI psychosis across disciplines and time, identify core characteristics and areas of consensus and divergence, and evaluate whether existing conceptualizations provide operational criteria sufficient for clinical or research applications. Methods: Rodgers’ evolutionary concept analysis method was employed to examine how concepts develop and change across disciplinary and temporal contexts. A proportionate search across PubMed, PsycINFO, and Web of Science yielded 306 unique entries, supplemented by citation chaining. Purposive sampling across six disciplinary strata (clinical psychiatry, digital mental health, AI safety, media studies, phenomenological psychopathology, and public health) identified 55 to 65 relevant papers (the range reflects boundary cases where relevance to the definitional question was ambiguous), with 14 anchor texts analyzed in full. Data were extracted for all Rodgers framework elements: temporal evolution, disciplinary variation, core characteristics, antecedents, consequences, synonymous constructs, related concepts, and paradigmatic instances. Results: The concept evolved through four phases in less than three years: pre-concept substrate (before 2023), hypothesis formation (2023–mid 2025), clinical recognition and naming (mid–late 2025), and mechanistic modeling with institutional response (late 2025–2026). 
Six competing framings coexist: AI psychosis as a new diagnostic entity, as a contextual evolution of existing psychosis, as historical continuity, as engineering failure, as an emergent property of human-AI interaction, and as a spectrum of causal involvement. Five core characteristics were identified: delusional content incorporating AI, bidirectional reinforcement loops, anthropomorphic misattribution, epistemic destabilization, and social substitution with withdrawal. Nine synonymous terms are in concurrent use, each encoding different causal assumptions. Despite this definitional proliferation, no operational case definition, validated screening instrument, dose-response threshold, prevalence estimate, severity classification, or diagnostic criteria exist for any version of the concept. The concept is already deployed in legislation, litigation, corporate safety, and clinical screening—none of which rest on an agreed-upon operational definition. Conclusions: AI psychosis presents a measurement paradox: it is simultaneously over-defined (six competing framings in less than three years) and under-operationalized (no framework sufficient for screening, diagnosis, or research eligibility). This synthesis proposes a working definition that accommodates existing causal taxonomies, centers human–AI interaction as the unit of analysis, and identifies eight measurable dimensions suitable for instrument development. Until the field resolves this paradox, the concept will continue to be used in policy, practice, and litigation without the definitional foundation those applications require. Clinical Trial: NA
Background: Since the COVID-19 pandemic, health care and health information seeking have become increasingly digitally mediated. It remains unclear whether eHealth literacy is consistently associated with health behaviors across different behavioral functions and social contexts in the post-COVID-19 era. Objective: To synthesize post-COVID-19 evidence on the association between eHealth literacy and health behaviors and to examine whether this association varies by health behavior domain, country income context, and population age structure. Methods: We conducted a PRISMA 2020-compliant systematic review and meta-analysis registered in PROSPERO (CRD4201009048). PubMed, Embase, and the Cochrane Library were searched from inception to January 28, 2026. Observational studies were eligible if they assessed eHealth literacy using a validated instrument with an explicit score, measured health behavior outcomes that could be classified as health decision-making, health promotion, or health management, and collected data in 2020 or later or explicitly reported the timing of data collection. Odds ratios and correlation coefficients were synthesized separately using random-effects models with Hartung-Knapp adjustment. Funnel plots and trim-and-fill were used to assess small-study effects. Subgroup differences were tested using between-group heterogeneity statistics. Studies with noncomparable outcomes were summarized narratively. Results: Twenty-two studies met the inclusion criteria, including 15 studies contributing quantitative effect estimates and 7 studies summarized narratively. Overall associations were directionally positive, with substantial heterogeneity and sensitivity to small-study effects. Behavioral domain was the most consistent source of between-study variation across effect size frameworks. Income context moderated associations in the correlation-based synthesis, whereas age structure did not show significant moderation. 
Narrative evidence was most consistent for health decision-making outcomes, more mixed for health promotion outcomes, and more variable and generally weaker for health management outcomes. Conclusions: In post-COVID-19 studies, eHealth literacy is generally associated with health behaviors, but the strength and consistency of this relationship vary across behavioral domains and settings. Future longitudinal and intervention research using more comparable behavior measures is needed to clarify directionality and to inform context-tailored strategies for improving eHealth literacy and health behavior.
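The random-effects synthesis with Hartung-Knapp adjustment described above can be sketched directly from its defining formulas: a DerSimonian-Laird estimate of between-study variance, inverse-variance pooling, and the Hartung-Knapp variance estimator. The effect sizes below are made up for illustration; a real analysis would use a dedicated package (e.g., statsmodels or R's metafor):

```python
import math

def dl_tau2(y, v):
    """DerSimonian-Laird between-study variance estimate."""
    w = [1 / vi for vi in v]
    mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def random_effects_hk(y, v):
    """Random-effects pooled estimate with Hartung-Knapp standard error."""
    tau2 = dl_tau2(y, v)
    w = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    k = len(y)
    var_hk = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / ((k - 1) * sum(w))
    return mu, math.sqrt(var_hk)   # the CI then uses a t(k-1) quantile

# Hypothetical log odds ratios and within-study variances from 5 studies
log_or = [0.30, 0.10, 0.45, 0.22, 0.05]
var = [0.04, 0.02, 0.09, 0.03, 0.05]
mu, se_hk = random_effects_hk(log_or, var)
```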
Background: Patient-reported outcomes measures (PROMs) have become an important tool in measuring a patient’s health status from their own perspective; however, they are typically measured using standardized questionnaires which do not account for each patient's unique experience of health. Recent improvements in Natural Language Processing (NLP) provide new possibilities to extract PROM scores from unstructured or free-text patient narratives; however, the feasibility and minimal data requirements needed to accomplish this task remain uncertain. Objective: To assess the practicality of transformer-based models for predicting EuroQol EQ-5D-3L scores from patient narratives and to evaluate minimum data requirements, narrative length, and data augmentation effects. Methods: This proof-of-concept study used synthetically generated patient narratives to evaluate methodological feasibility. Three transformer models (BERT, BioBERT, DistilBERT) were fine-tuned for regression from patient narratives representing all 243 EQ-5D-3L health states. The performance of the models in various scenarios, including a range of sample sizes (n=100–850), narrative lengths (100–1000 words), and data augmentation conditions, was compared. The performance of the models was assessed through fivefold cross-validation and additional validation on datasets created by ChatGPT and DeepSeek. Results: Each model was able to predict EQ-5D-3L scores using each of the different configurations of data (n=100-850 patients; 100-1000-word narratives). However, optimal results were obtained when training the models with 100-word narratives derived from the largest number of people (n=850), where mean squared error=0.03 (95% CI: 0.02-0.04), mean absolute error=0.13 (95% CI: 0.13-0.15), explained variance=0.77 (95% CI: 0.64-0.77), and intraclass correlation coefficient=0.85 (95% CI: 0.81-0.87). 
Furthermore, shorter narratives (100 words) performed better than longer narratives (up to 1000 words). Additionally, the use of data augmentation improved the predictive performance. Conclusions: Transformer models show promise in predicting EQ-5D-3L PROM scores from synthetic patient-generated narratives, with a minimum of 250 patients providing around 100-word narratives required for reliable performance. The work provides both a methodological basis and empirical standards for AI-based PROM systems. However, clinical implementation will require validation using real patient-authored narratives prior to adoption. If validated, the use of this approach could provide evidence to support the inclusion of a patient's experience as a narrative into standardized outcome measures and support patient-centred healthcare evaluations.
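The fivefold cross-validation protocol behind these regression metrics can be sketched without any modeling machinery. In the stdlib-only sketch below, a mean predictor stands in for the fine-tuned transformer, and the synthetic scores are invented; only the evaluation loop (fold construction, per-fold MSE/MAE, averaging) mirrors the method described:

```python
import random

def kfold(n, k, seed=0):
    """Shuffled indices split into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def mse(t, p):
    return sum((a - b) ** 2 for a, b in zip(t, p)) / len(t)

def mae(t, p):
    return sum(abs(a - b) for a, b in zip(t, p)) / len(t)

# Hypothetical EQ-5D-3L index scores for 20 synthetic narratives
scores = [0.1 * (i % 10) for i in range(20)]

fold_mse, fold_mae = [], []
for test_idx in kfold(len(scores), 5):
    test_set = set(test_idx)
    train = [s for i, s in enumerate(scores) if i not in test_set]
    baseline = sum(train) / len(train)   # stand-in for a fine-tuned model
    truth = [scores[i] for i in test_idx]
    preds = [baseline] * len(truth)
    fold_mse.append(mse(truth, preds))
    fold_mae.append(mae(truth, preds))

cv_mse = sum(fold_mse) / len(fold_mse)   # averaged across the 5 folds
cv_mae = sum(fold_mae) / len(fold_mae)
```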
Global transformations, including demographic aging, climate-related health risks, and rapid technological acceleration, are reshaping health systems and the competencies required of future healthcare professionals. Yet current curricula often struggle to integrate these complex challenges in a coherent and future-oriented manner. This Eye Opener highlights the potential of the Inner Development Goals (IDGs) as an underutilized conceptual framework for enriching competency-based education in the health professions. The IDGs emphasize five dimensions (Being, Thinking, Relating, Collaborating, and Acting) that align with key professional capacities such as self-awareness, systems thinking, empathy, interprofessional teamwork, and ethical action. Drawing on examples from geriatric care, climate-adapted practice, and AI-supported clinical reasoning, we illustrate how IDG-aligned learning outcomes can complement existing competency frameworks by fostering inner capacities essential for clinical judgement and person-centered care. At the same time, we provide a critical reflection on potential risks, including over-individualization of responsibility, insufficient attention to structural determinants of health, and tensions with assessment-driven educational cultures. Rather than proposing IDGs as a complete solution, this article argues that they offer a valuable conceptual entry point for rethinking how health professions education can prepare learners for the uncertainties, ethical complexities, and interdependencies of contemporary healthcare. The IDGs can help open new pedagogical and conceptual spaces, encouraging educators to design learning environments that support both technical proficiency and the inner capacities needed for navigating an increasingly complex world.
Background: Parents of children with autism spectrum disorder (ASD) often experience elevated levels of stress and psychological distress. In Hong Kong, cultural norms regarding emotional suppression may exacerbate these challenges. Acceptance and commitment therapy (ACT) offers a promising approach by targeting psychological inflexibility. However, its efficacy and specific mechanisms of change within Chinese cultural contexts, particularly when delivered via online formats, remain under-researched compared with traditional cognitive therapy (CT). Objective: To evaluate the efficacy of a brief, 3-session online ACT workshop in reducing parental stress and improving general well-being among Chinese parents of children with ASD compared with an active online CT control and a passive waitlist control, and to determine if reductions in psychological inflexibility mediated these therapeutic outcomes. Methods: A 3-arm randomized clinical trial was conducted with 60 parents of children with ASD (mean age, 7.5 years) in Hong Kong. Participants were assigned to online ACT (n = 24), online CT (n = 23), or a waitlist control group (n = 13). The interventions consisted of 3 weekly 1.5-hour synchronous group sessions delivered via Zoom. Primary outcomes were general well-being (General Health Questionnaire-12) and parental stress (Parenting Stress Index–Short Form). Process variables included psychological flexibility and cognitive distortions. Data were analyzed using analysis of covariance and mediation analysis with bootstrapping (5000 resamples). Results: The online ACT group demonstrated significantly better general well-being at post-test compared with the waitlist control (P = .02) and the CT group (P = .03). Similarly, parental stress was significantly lower in the ACT group compared with the waitlist (P = .04) and CT (P = .01) groups. No significant differences were found between the active CT control and the waitlist control. 
Mediation analysis revealed that the reduction in psychological inflexibility significantly mediated the relationship between the ACT intervention and improvements in both parental stress (95% CI, -0.56 to -0.06) and general well-being (95% CI, -0.36 to -0.03). Cognitive distortions did not serve as a significant mediator for either outcome. Conclusions: A brief, online ACT intervention is effective in reducing stress and improving well-being among Chinese parents of children with ASD. The findings confirm that the intervention works through the theoretical mechanism of reducing psychological inflexibility, even when delivered remotely. This suggests that low-intensity, online ACT is a scalable, cost-effective, and culturally adaptable solution for supporting caregivers who may face barriers to traditional face-to-face therapy. Clinical Trial: N/A
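The bootstrapped indirect effect in such a mediation analysis is the product of the a path (group to inflexibility change) and the b path (inflexibility change to outcome), resampled to form a percentile confidence interval. The stdlib sketch below uses invented scores and, for brevity, estimates the b path as a simple slope; the study's analysis would additionally control for group in that regression:

```python
import random

def slope(x, y):
    """OLS slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
            sum((a - mx) ** 2 for a in x))

# Hypothetical data: X = group (1 = ACT, 0 = control),
# M = change in psychological inflexibility, Y = change in parental stress
X = [1, 1, 1, 1, 0, 0, 0, 0]
M = [-3, -2, -4, -2, 0, -1, 1, 0]
Y = [-5, -4, -6, -3, -1, 0, 1, -1]

rng = random.Random(0)
n = len(X)
boots = []
for _ in range(5000):                 # 5000 resamples, as in the study
    idx = [rng.randrange(n) for _ in range(n)]
    xs = [X[i] for i in idx]
    ms = [M[i] for i in idx]
    ys = [Y[i] for i in idx]
    try:
        boots.append(slope(xs, ms) * slope(ms, ys))   # indirect effect a*b
    except ZeroDivisionError:         # resample with no predictor variance
        continue
boots.sort()
lo = boots[int(0.025 * len(boots))]   # percentile 95% CI bounds
hi = boots[int(0.975 * len(boots))]
```

A CI that excludes zero, as reported above for both outcomes, indicates a significant indirect effect.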
Background: Rosacea is a chronic, visible inflammatory skin condition that often requires complex, long-term treatment regimens. As patients navigate these therapies, they increasingly turn to online forums to share experiences and seek clarification on treatment use and side effects outside of the clinical setting. Objective: To identify and categorize real-world concerns regarding FDA-approved rosacea therapies as discussed within a large online patient community. Methods: A thematic analysis was performed on one year of posts from the r/Rosacea subreddit (70,000+ members) mentioning nine FDA-approved medications. Posts were categorized into domains including medication use, adverse effects, and barriers to access. Results: Discussions centered on practical application (order and frequency) for topicals, gastrointestinal and photosensitivity concerns for oral doxycycline, and anxiety regarding rebound erythema for alpha-adrenergic agonists. Concerns over insurance coverage and medication costs were universal across most therapy classes. Conclusions: Digital health communities reveal specific educational gaps, particularly regarding the practical integration of topicals and the management of side effects, that offer clinicians clear targets for improving patient counseling and treatment adherence.
Background: Patient-reported outcome measures (PROMs) and shared decision-making (SDM) are increasingly valued in pediatric physiotherapy (PPT). Online PROM portals can facilitate PROM use and SDM, but require adaptation for use in PPT. Objective: This study aimed to adapt the online KLIK PROM portal for primary PPT, identify preferences for data visualization, and explore integration of SDM. Methods: A co-design approach was used. Two co-creation sessions including adolescents, parents, patient representatives, PPTs, and researchers were organized and results were discussed in an analysis session with the research team. Subsequently, a demo version of the adapted KLIK portal was tested for usability in twelve individual think-aloud sessions with parents, adolescents, and PPTs. After discussing results in a second analysis session, the final version of the KLIK PROM portal was developed. Thematic content analysis was applied to all qualitative data. Results: Key adaptations included automatically selecting predefined PROM sets based on the patient registration form depending on complaints and age, and the possibility to schedule a series of PROMs linked to evaluation moments. Literal responses on items without color coding were preferred by patients and parents, while PPTs favored line graphs with heatmaps indicating concerning scores. Both patients and PPTs emphasized the importance of discussing results in person using child-friendly visualizations. Aggregated data were valued for supporting reflective practice. SDM was integrated into the portal through information pages, subtle nudges to encourage PPTs and patients to engage in SDM, and by motivating patients to complete PROMs by personalizing the portal. Conclusions: The adapted KLIK portal is ready for pilot implementation in primary PPT. Updates should be applied based on user feedback from ongoing evaluations. 
While PROM use can facilitate SDM, its impact depends on effective patient-clinician dialogue and should be investigated further.
Background: Climate and weather factors of temperature and humidity are widely reported triggers of xerosis (dry skin), a common inflammatory skin condition and frequent driver of pruritus (itchy skin) and reduced quality of life. Growing evidence supports links between environmental conditions and skin barrier function, with extreme climates associated with increased atopic dermatitis–related clinical visits. Mechanistically, temperature and humidity affect the stratum corneum, the skin’s primary permeability barrier, with low humidity and high temperature increasing transepidermal water loss and promoting cutaneous inflammation. This study examines the relationship between climate, namely temperature and humidity, and the general public’s experience with dry skin and moisturizing products throughout the United States. Objective: This study sought to address gaps in traditional epidemiologic approaches by linking climate conditions with population-level online search behavior related to dry skin and moisturizer use across the United States.
Methods: Publicly available climate data were obtained from the National Oceanic and Atmospheric Administration (NOAA), including average temperature and dew point by state over a recent nine-year period (2016–2025). Dew point served as a proxy for ambient humidity. Google Trends was used to assess relative search interest for five dry skin– and moisturizer-related terms by state during the same period. Search interest was normalized per million residents, and associations between climate variables and search interest were evaluated using linear regression analyses. Results: Lower average temperatures and lower dew points were associated with higher dry skin–related search interest, while warmer, more humid states showed lower interest. Both temperature and dew point demonstrated significant negative associations with Google search interest. Conclusions: Population-level search behavior related to xerosis reflects national patterns of climate-related dermatologic burden.
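The state-level association can be reproduced in miniature with a simple least-squares fit. The temperature and search-interest values below are invented purely to illustrate the negative slope the study reports:

```python
def linreg(x, y):
    """Simple least-squares fit y ~ a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical state-level data: mean annual temperature (°F) vs.
# normalized "dry skin" search interest per million residents
temp = [26, 40, 48, 55, 62, 70, 75]
interest = [95, 80, 72, 60, 55, 42, 38]
a, b = linreg(temp, interest)   # colder states search more, so b < 0
```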
Background: Family caregivers of children with chronic health conditions experience significant physical and mental health burdens, including burnout, anxiety, depression, fatigue, and sleep disturbances. Despite this growing need, validated digital mental health tools tailored specifically to this population remain limited. Conversational agents powered by artificial intelligence (AI) offer a promising avenue for delivering on-demand, personalized mental health support, yet evidence-based development and evaluation of such tools for family caregivers is lacking. Objective: This study aimed to systematically describe the iterative development of COCO (Caring of Caregivers Online), a conversational agent designed to deliver Problem-Solving Therapy (PST) integrated with Motivational Interviewing (MI) principles, and to evaluate its usability and preliminary effect on the emotional well-being of family caregivers of children with chronic health conditions. Methods: COCO was developed and refined across four phases. Therapeutic dialogues were grounded in PST and MI principles and informed by evidence-based caregiver personas. The Wizard-of-Oz (WOZ) method was used across phases to iteratively collect naturalistic dialogues and refine COCO's conversational design. Usability was assessed using the System Usability Scale (SUS) and the Post-Study System Usability Questionnaire (PSSUQ). Caregiver emotions were measured pre- and post-session using six subscales of the Positive and Negative Affect Schedule-Expanded Form (PANAS-X). In the final phase, a large language model (LLM)-powered version of COCO was developed using GPT-4 with few-shot learning and evaluated using persona-based methods. Results: COCO achieved a mean SUS score of 75.6, reflecting acceptable usability. Participants demonstrated significant improvements in negative affect, sadness, guilt, fatigue, and serenity following PST sessions (p ≤ 0.03). 
Analysis of MI techniques across all phases revealed progressive refinement in conversational quality, with the LLM-powered COCO achieving the highest density of MI techniques per turn (2.56) and greater balance across MI strategy types, particularly in seeking collaboration and reflection. Conclusions: COCO is a feasible, usable, and preliminarily efficacious conversational agent for supporting the mental health of family caregivers of children with chronic conditions. The iterative, human-in-the-loop development approach was instrumental in producing empathetic, therapeutically grounded responses. The systematic development and evaluation process described here can serve as a guide for similar conversational agent intervention studies. Future work will explore multi-agent architectures and retrieval-augmented generation (RAG) to further enhance personalization, controllability, and scalability toward clinical deployment.
Neonatal jaundice remains a preventable cause of neurological damage in low- and middle-income countries, where limitations in infrastructure and staffing make timely screening difficult. Mobile technologies, such as the Picterus Jaundice Pro (JP) app, offer a promising alternative by enabling bilirubin estimation from digital images using algorithms and calibration cards. In this viewpoint, we explore the feasibility, clinical validation, and barriers to adopting this tool in resource-limited settings, particularly in Latin America. The Mexican experience is presented as a reference for the gradual integration of mHealth technologies into public systems, highlighting both opportunities and regulatory, operational, and cultural challenges. Available evidence supports its utility; however, scaling will depend on political will, sustained financing, and clear regulatory frameworks. Picterus JP may represent a strategic step toward equity in neonatal health.
Background: E-learning and online teaching have received widespread acceptance considering their potential to improve students' capacity to overcome time and space barriers. Objective: This study aims to assess the reliability and psychometric properties of a questionnaire measuring nursing students' perceptions of e-learning, achievement motivation, and adoption feasibility in Kuwait. Methods: A cross-sectional study was conducted from November 1, 2024, to January 30, 2025, involving a convenience sample of 208 student nurses. A structured questionnaire was administered to examine concepts including perceptions of e-learning, achievement motivation, and adoption feasibility. Results: Achievement motivation, ease of use, and perceived usefulness strongly influenced students' attitudes toward e-learning, but infrastructural challenges hindered adoption. Conclusions: Enhancing institutional support and digital resources is essential to realize the full potential of e-learning in nursing education. Clinical Trial: None
Background: Mental health conditions, including depression, anxiety, and psychological distress, are prevalent among the aging population and affect their health, functioning, and quality of life. Access to proper and high-quality mental health treatment is necessary; however, mental health treatment and care remain underused due to stigma, workforce shortages, cost, and mobility limitations. Digital mental health interventions (DMHIs) are emerging as a promising strategy to improve the accessibility and effectiveness of mental health services for older adults, but older adults have historically been underrepresented in DMHI development and evaluation. Additionally, how effective different types of DMHIs are and how age-centered design approaches influence outcomes remain underexplored. Objective: This scoping review mapped and synthesized evidence on DMHIs focused on adults aged 50 and older and identified gaps in the evidence base related to study design, age-related adaptations, and clinical outcomes. Specifically, we examined (1) the technologies and therapeutic approaches used, (2) the outcomes and effectiveness of DMHIs, and (3) age-centered adaptations and their outcomes. Methods: This scoping review searched for studies focusing on DMHIs for older adults across PubMed, PsycINFO, Scopus, Ageline, and Web of Science published from 2000 to February 2025. Eligible studies evaluated or described the design of DMHIs targeting mental health conditions among adults aged 50 years or older. Two rounds of independent screening and data extraction were conducted by multiple reviewers. Extracted data included study design, sample characteristics, intervention features, technologies used, age-related adaptations, and clinical outcomes. Results: In total, 72 studies met the inclusion criteria; 36 were randomized controlled trials, and 54 reported clinical outcomes. 
Web-based cognitive behavioral therapy (CBT) was the most commonly used approach, followed by games, virtual reality, mobile apps, chatbots, and robots. Fifty-four studies reported clinically effective outcomes, most commonly reductions in depression, anxiety, or psychological distress. However, only one-third of studies incorporated age-centered design adaptations or co-design approaches, such as simplified interfaces, larger fonts, age-relevant content, or participatory development with older adults. Conclusions: Among studies reporting effective outcomes, DMHIs can reduce depression, anxiety, and psychological distress. However, with only half of the included studies using randomized controlled trial designs, the overall evidence base remains moderate. In addition, age-adaptive design remains underdeveloped. Future research should strengthen trial designs and systematically examine how usability and age-centered adaptations influence DMHI effectiveness.
Background: Childhood unintentional injuries represent a major public health issue affecting the lives and health of children worldwide, with the risk of occurrence and injury types dynamically changing across different ages and developmental stages. To effectively prevent unintentional injuries, intervention measures must be adjusted based on actual circumstances and kept up to date in real time. Measures leveraging mobile health technologies can meet this dynamic requirement, but their specific implementation strategies and effectiveness in this field remain unclear. Objective: To conduct a scoping review of research on the application of mobile health technologies in the prevention and emergency management of unintentional injuries among children, aiming to comprehensively understand the types of mobile health technologies used, intervention contents, and outcome measures, thereby providing a reference for related studies. Methods: Following the methodological framework of a scoping review, we systematically searched 5 databases—PubMed, Web of Science, Embase, Cochrane Library, and CINAHL—from their inception to December 31, 2025. Data extraction and synthesis were performed on the included studies. Results: Of the 1,085 articles, 22 (2%) met the inclusion criteria. Additionally, 1 reference was included during full-text reading. Therefore, a total of 23 studies were included in this research. The included studies comprised 14 randomized controlled trials (RCTs) (60.9%), 4 quasi-experimental studies (17.4%), 3 mixed-method studies (13%), 1 qualitative study (4.3%), and 1 descriptive study (4.3%). Interventions targeted children's caregivers, school-aged children, and school teachers. Mobile health technologies included applications (apps), internet platforms, social media, virtual reality (VR), and serious games. 
Intervention contents included health education, interactive communication, monitoring and reminders, behavioral training, surveys and documentation, risk assessment, and emergency features. Outcome measures encompassed childhood unintentional injury rates; injury-related knowledge, attitudes, and behaviors (KAB); other psychological and social indicators; and mobile health technology feasibility metrics. Conclusions: Mobile health technology-driven interventions can enhance caregivers' and children's awareness about unintentional injury safety while improving related attitudes and behavioral competencies. Future efforts should optimize functional design to strengthen users' willingness to continue use and their behavioral adherence, expand the contents of health education about critical aspects such as post-incident emergency response, develop scientifically rigorous outcome assessment tools, and conduct high-quality, large-scale, long-term studies to evaluate the effectiveness in reducing childhood unintentional injury rates. Clinical Trial: OSF Registries osf.io/y4bhs; https://osf.io/8qpr6 (DOI 10.17605/OSF.IO/3MXSA)
Background: The 21st Century Cures Act information blocking regulations led to many health care providers (HCPs) altering policies to electronically release test results to patients immediately upon their availability. Objective: To understand how often patients view results in the patient portal before hearing from their HCP, and whether they are given the option to decide how results are communicated. Methods: Using data from the 2024 Health Information National Trends Survey on U.S. adults who received recent test results via patient portal (N=6,045), we examined whether patients viewed test results before hearing from their HCP, were given the option to decide how test results were communicated, and understood results viewed before hearing from an HCP. Results: Overall, 70% of patients who received results viewed them in their patient portal, most of whom viewed results before hearing from their HCP (58% overall). 28% of patients and 33% of portal users reported being given the option to decide whether they wanted to receive test results before hearing from their HCP. Two-thirds of patients understood results they viewed in their patient portal before hearing from their HCP (66%). Conclusions: While most patients viewed results before discussing them with their HCP, only one-third reported being given the option to decide how results would be communicated, and two-thirds of those who viewed immediately released results understood their implications. Clearly presenting the option to decide when test results are communicated and incorporating patient preferences in portal communications could help empower patients and mitigate potential worry.
Background: Generative artificial intelligence (GenAI) tools powered by large language models (LLMs) are increasingly used by the public to seek health information. Unlike traditional web search, GenAI systems generate conversational answers, which may influence how users assess credibility, manage uncertainty, and decide whether to verify information or consult clinicians. Evidence is needed to clarify facilitators, barriers, and user practices in GenAI-supported health information seeking. Objective: This systematic review synthesizes empirical research on consumer and patient health information seeking with GenAI/LLM tools, focusing on study contexts, adoption and use outcomes, and facilitators and barriers, with implications for clinician-patient interactions. Methods: We conducted a review following PRISMA-ScR. Records were identified through database searching and screened using predefined eligibility criteria. Included studies were extracted using a structured form capturing study characteristics, GenAI tool type, health context, outcomes, and facilitators and barriers. Findings were synthesized using structured grouping aligned to the research questions. Results: The review included 27 studies. GenAI was used for symptom appraisal, condition understanding, treatment options, and care navigation. Facilitators emphasized convenience and clarity, including efficiency and access (29.6%, n=8), comprehensibility and presentation quality (40.7%, n=11), personalization and specificity (18.5%, n=5), and affective or interpersonal comfort (18.5%, n=5). Barriers were dominated by credibility and trust concerns (48.1%, n=13), particularly when accuracy cues or citations were missing or difficult to interpret. 
Additional barriers included perceived unsuitability for complex, urgent, or emotionally charged situations (18.5%, n=5), privacy or data security concerns (14.8%, n=4), limited prompting skills (7.4%, n=2), and modality or interaction constraints that hindered credibility assessment and information comparison (18.5%, n=5). Literacy-related capability was reported in 22.2% of studies (n=6), and verification-supporting features (e.g., visible sourcing, transcripts, and save/revisit/share functions) were reported in 18.5% of studies (n=5). Conclusions: GenAI is used for diverse health information needs, but reliance is shaped by trust, perceived risk, and verification capacity. Future research should improve reporting of tools and prompting conditions, standardize measures of reliance and verification, and evaluate use in higher-stakes and underserved contexts to inform safer design and public guidance.
Background: Simulation-based medical education is essential for improving patient safety. In virtual reality (VR)–based simulation, immersion is primarily generated through visual and auditory cues, while other sensory modalities are typically absent. This sensory limitation may reduce the emergence of authentic safety-relevant behaviors.
Olfaction plays an important role in clinical reasoning, risk perception, and self-protective behavior and is closely linked to memory and emotion. Although olfactory cues have been shown to influence hand hygiene behavior in real or simulated-real environments, their targeted integration into fully immersive VR-based medical simulation has not been systematically examined. Objective: This study aimed to investigate whether adding a real olfactory cue (disinfectant scent) to a fully virtual clinical simulation increases patient safety–relevant behavior, specifically hand hygiene compliance (hand disinfection and glove usage). Methods: In a randomized controlled study at the University of Münster (winter term 2025/26), 89 medical students participated in a VR-based clinical simulation. Study rooms were pre-assigned to either an olfactory intervention or a control condition, and participants selected their room without knowledge of the assigned condition. Hand hygiene and glove use were automatically tracked as outcomes. Odds ratios were calculated to assess the effect of the intervention on these behaviors. Results: The olfactory intervention nearly tripled the odds of hand disinfection (OR = 2.81, 95% CI 1.09–7.75, P = 0.037), while no significant difference was observed for glove use (OR = 1.62, P = 0.278). Conclusions: The integration of a real olfactory cue into a fully immersive VR medical simulation significantly increased hand disinfection behavior, particularly after patient contact, but did not affect glove use. These findings suggest that olfactory augmentation can selectively reinforce safety-relevant behaviors in digital training environments. Incorporating real-world sensory cues into VR may represent a simple yet effective design strategy to enhance behavioral authenticity and patient safety outcomes in simulation-based medical education. Clinical Trial: German Clinical Trials Register: DRKS00039472
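For readers unfamiliar with the statistic reported above, an odds ratio with a Wald 95% confidence interval can be derived from a simple 2×2 table of outcomes. A minimal sketch in Python, using hypothetical counts (not the study's data), of the standard calculation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = intervention events, b = intervention non-events,
    # c = control events, d = control non-events.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30/45 disinfected hands with the scent cue
# vs 19/44 without it.
or_, lo, hi = odds_ratio_ci(30, 15, 19, 25)
```

A confidence interval whose lower bound exceeds 1, as in the disinfection result reported above, is what licenses the claim of a statistically significant effect.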
Background: Anxiety and depressive disorders remain highly prevalent and insufficiently treated, with many individuals experiencing persistent or untreated symptoms, limited access to evidence-based care, or insufficient support between clinical encounters. Adults with disabilities represent a particularly underserved sub-population, often facing compounded barriers to mental health care and higher rates of anxiety and depression. Digital therapeutics offer a scalable opportunity to address these gaps by extending structured, evidence-based interventions beyond traditional care settings. Objective: The current pilot study evaluated Rauha™, a novel digital therapeutic that integrates cognitive behavioral therapy (CBT)-based modules with live weekly sessions led by a National Board-Certified Health and Wellness Coach (NBC-HWC), delivering structured, smartphone-based psychoeducation and interactive therapeutic exercises combined with personalized mental health coaching supporting behavior change. Methods: Thirteen adults with mobility and/or hearing disabilities and clinically elevated anxiety and/or depression were enrolled in a single-arm, within-subjects design. Participants completed eight weeks of CBT modules delivered via smartphone, accompanied by synchronous virtual mental health coaching. Anxiety and depression were assessed using the Hamilton Anxiety (HAM-A) and Hamilton Depression (HAM-D) Rating Scales, respectively, at baseline, post-treatment, and at four-week follow-up. Results: Mean reductions were significant for both anxiety (-13.05 ± 2.51, P < .001) and depression (-12.83 ± 1.55, P < .001), exceeding thresholds for clinical significance and sustained through follow-up. At post-treatment, 84.6% of participants showed clinically significant improvement in both anxiety and depression. At follow-up, 76.9% and 92.3% of participants showed clinically significant improvement in anxiety and depression, respectively. 
Between baseline and follow-up timepoints, these reductions corresponded to mean shifts from moderate to mild anxiety on the HAM-A and moderate to mild/non-depressed on the HAM-D. Participants reported strongly favorable acceptability, experience, and usability ratings for the Rauha™ treatment program, demonstrating 100% treatment retention and an average 5.5 replay rate of personalized smartphone content. Conclusions: Findings demonstrate that a combined digital CBT and NBC-HWC approach can yield clinically meaningful and durable symptom reductions in depression and anxiety, coupled with high user acceptability and engagement, for adults with disabilities. These findings provide preliminary evidence supporting Rauha™ as a scalable, evidence-informed mental health intervention with strong potential to improve access and address key barriers to care.
Background: In Canada, Black students continue to be underrepresented in medical schools and face institutional barriers, including limited access to the information necessary for their admission and their academic path. The Black Medical Students Association of Canada (BMSAC) has developed a bilingual website for these students. Objective: The purpose of this research is to evaluate the quality, accessibility and usefulness of the site and make recommendations for its improvement. Methods: A cross-sectional survey was conducted through an online system using the System Usability Scale (SUS), a validated website user experience evaluation tool. Three open-ended questions were added to the survey to identify areas for improvement. The data from the SUS were analyzed using descriptive statistics and the answers to the questions underwent thematic analysis. Results: 50 participants responded to the survey (24 in English and 26 in French). The overall SUS score was 75.8. The SUS scores for the English and French versions were 77.0 and 74.7, respectively. More than three quarters of respondents lived in Quebec. Respondents learned more about the available resources and recommended including more images illustrating organized events on the site. Conclusions: The overall SUS score and that of English and French respondents were considered satisfactory. The lack of visual support, updated information and some technical problems seemingly explain these results. Strong Quebec representation also indicates the need to promote the site elsewhere in Canada.
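The SUS scores reported above follow a fixed scoring rule: ten 1–5 Likert items, where odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is scaled by 2.5 onto 0–100. A minimal sketch with a hypothetical response pattern (not the survey's data):

```python
def sus_score(responses):
    # Standard System Usability Scale scoring for ten 1-5 Likert items:
    # odd-numbered items contribute (r - 1), even-numbered items (5 - r);
    # the summed contributions are scaled by 2.5 onto a 0-100 range.
    assert len(responses) == 10
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

# Hypothetical respondent who answers 4 to every positive item and
# 2 to every negative item:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```

Scores around 70 or above are conventionally read as acceptable usability, which is consistent with the interpretation of the 75.8 overall score as satisfactory.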
Background: Enhancing telemedicine requires a clear understanding of how avatars influence medical collaboration. The ArtekMed study group developed a mixed-reality (MR) teleconsultation system that enables a remote expert (VR user) to interact in real-time with a local augmented reality (AR) user within a shared working space. The system was compared to a standard video call system in five randomized cross-over trials in a healthcare simulation center. Objective: This post-hoc study investigates users' perceptions of a virtual character representing a remote expert across four real-time MR teleconsultation scenarios. Methods: A total of 56 medical professionals participated as AR users collaborating with a remote expert represented by a virtual character. A post-hoc qualitative analysis of structured post-session interviews was performed to explore participants' perceptions of the avatar, focusing on perceived helpfulness, visual design, and user engagement. Results: Overall, most participants did not perceive the avatar as helpful for task execution in procedural scenarios and frequently described it as unnecessary or even distracting. In contrast, in more complex and demanding scenarios, such as emergency craniotomy planning or intensive care treatment of patients with acute respiratory distress syndrome, some participants perceived the avatar as providing mentorship, guidance, and psychological support. These findings suggest that while avatars may offer limited perceived value in task-focused medical collaboration, they may support user engagement in scenarios requiring sustained interaction and social presence. Conclusions: The results align with existing literature indicating that the impact of avatars is context dependent. In mixed-reality environments, where virtual characters coexist with real-world reconstructions, avoiding behavioral incongruence and uncanny effects may be more critical than achieving high visual fidelity. 
Future research should prospectively explore how different levels of avatar abstraction and fidelity influence collaboration in MR telemedicine.
Background: Chronic kidney disease (CKD) is a progressive, multisystem condition associated with substantial morbidity and mortality worldwide. Patients with established CKD are particularly susceptible to acute clinical deterioration and frequently present to emergency departments with high-acuity conditions. Despite the increasing burden of CKD, real-world data describing emergency presentations and early management practices remain limited, especially in low- and middle-income countries. Objective: The primary objective of this study is to describe the spectrum of clinical emergencies among adults with established CKD presenting to the emergency department. Secondary objectives include documenting initial emergency management strategies and short-term hospital outcomes. Methods: This prospective observational study will be conducted over a two-year period in the emergency department of a tertiary care teaching hospital in central India. Adult patients with a documented diagnosis of CKD presenting with acute renal-related complications will be enrolled consecutively. Data on demographics, clinical presentation, investigations, emergency interventions, and in-hospital outcomes will be collected using a structured case record form. Descriptive statistical analyses will be performed, with exploratory regression analyses conducted where appropriate. Results: This manuscript describes the study protocol. Data collection and analysis will be completed after the study period. Conclusions: Systematic documentation of emergency presentations and early management of CKD-related complications may generate context-specific evidence to inform improvements in emergency preparedness and early clinical decision-making for patients with CKD. Clinical Trial: CTRI/2025/09/094214
Background: Intensive care clinicians rely on timely access to large volumes of electronic data to make complex decisions. The Central Adelaide Local Health Network (CALHN) implemented an electronic medical record (EMR) across its hospitals in South Australia, but the generic user interface is not optimised for critical care workflows. The CALHN Critical Care Informatics System (CCCIS) was developed as a prototype user interface (UI) to present ICU-relevant information in a more intuitive, task-focused format. Objective: This study aimed to evaluate the usability of CCCIS from the perspective of senior intensivists, and to identify key design principles for effective critical care informatics systems. Methods: We undertook a usability study with eight intensivists from CALHN. Participants interacted with a prototype version of CCCIS during a structured video-based session incorporating a Cognitive Walkthrough and Think Aloud approach. Sessions were screen-recorded and transcribed. Qualitative data were coded as positive, negative or neutral feedback and grouped into three domains: content, layout and visibility. Emergent themes were mapped across CCCIS components. Following the usability test, participants completed a System Usability Scale, NASA Task Load Index and a bespoke questionnaire assessing perceived usability, cognitive demand and clinical relevance. Reporting is aligned with the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines for interview-based research. Results: Participants reported that CCCIS supported rapid comprehension of patient information and facilitated integration between physiological data, interventions and clinical trajectory. The ability to customise views and to navigate between ward-level and bed-level information was highlighted as a strength. 
Areas for improvement included refinement of the ward board, ribbon and vital signs displays, particularly where duplicated information or visual clutter reduced clarity. Across the content, layout and visibility domains, recurrent themes included the importance of structured tabular displays, consistent visual hierarchies and explicit highlighting of clinically salient values. Survey responses suggested that CCCIS was easy to learn and use, exerted low cognitive demand, and was perceived as clinically relevant to everyday critical care practice. Conclusions: In this qualitative usability evaluation, intensivists perceived CCCIS as a usable and clinically meaningful critical care informatics system. The study identified design principles—such as structured presentation of data, alignment with mental models of ICU workflow and support for rapid synthesis of information—that may inform further development of CCCIS and other electronic medical record-integrated ICU interfaces.
Background: Understanding how digital systems can support clinical decision-making is crucial, especially with the growing deployment of increasingly complex artificial intelligence (AI) models. This complexity raises concerns about trustworthiness, impacting the safe and effective adoption of such technologies. In intensive care units (ICUs), where clinicians make high-stakes, time-sensitive decisions, decision-support tools must be designed to align with clinical needs and cognitive workflows. Improved understanding of decision-making processes and requirements for decision support tools is vital for providing effective solutions. Objective: This study aimed to investigate ICU clinicians’ decision-making processes, the challenges posed by patient complexity, and the requirements for decision-support systems to ensure transparent and trustworthy recommendations. Methods: We conducted group interviews with seven ICU clinicians, representing diverse roles and experience levels, to explore perspectives on decision-support tools. Reflexive thematic analysis was used to identify key themes and thereafter design recommendations. Results: Three core themes emerged from the analysis: (T1) ICU decision-making relies on a wide range of factors; (T2) patient complexity challenges shared decision-making, and (T3) acceptability and usability of decision support systems. Design recommendations derived from clinical input provide insights to inform future decision support systems for intensive care. Conclusions: Decision-support tools have the potential to enhance ICU decision-making, but their adoption depends on alignment with clinicians' needs and workflows. To improve trust and usability, future systems must be transparent in their recommendations, adapt to varying patient complexities, and facilitate, rather than replace, human expertise. 
Our findings inform the development of digital systems that are both transparent and trustworthy, aiding clinical acceptance in ICU settings. Clinical Trial: Not applicable.
Background: Despite significant improvements in cancer survival, late complications from oncological treatments remain inadequately managed in routine clinical practice. Standard oncology follow-up prioritizes disease recurrence detection over systematic assessment of treatment-related sequelae, resulting in underdiagnosis of physical, psychological, metabolic, and social complications that substantially impair survivors' quality of life. The temporal dynamics of complication emergence remain poorly characterized, limiting development of evidence-based surveillance schedules.
Objective: This study aims to determine the time-specific incidence of 19 predefined treatment-related complications at 1, 6, 24, and 60 months following completion of first-line chemotherapy in adult cancer survivors who achieved complete response. Secondary objectives include characterizing temporal trajectories to identify critical surveillance windows, evaluating the feasibility and performance of a standardized guideline-based referral system integrated within a regional healthcare network, and identifying predictive factors for complication occurrence and timing. Methods: PASCA-c is a single-center, prospective, interventional cohort study conducted at Centre Léon Bérard (Lyon, France). Over 36 months, 500 adults aged 18–65 years with complete responses to first-line therapy for lymphomas (Hodgkin/non-Hodgkin), acute myeloid leukemia, testicular germ cell tumors, non-metastatic breast cancer, or sarcomas will undergo systematic screening at 1 month (T1), 6 months (T2), 24 months (T3), and 60 months (T4) post-treatment. Each assessment includes validated questionnaires, biomarker analyses, cardiovascular evaluations, spirometry, and functional performance tests covering 19 complication domains. Clinical decision trees based on French and international guidelines generate standardized referral recommendations. 
Patients are referred to a regional network of 120+ healthcare professionals for complication management while continuing standard oncological follow-up. The primary outcome is the time-specific incidence of each complication at T1, T2, T3, and T4, distinguishing new-onset cases from persistent complications detected at prior assessments. Results: Patient enrollment began in September 2020 and is ongoing. Final results are anticipated in 2027 upon completion of 60-month follow-up assessments for the last enrolled participant. Conclusions: PASCA-c represents the first large-scale European study systematically evaluating the temporal dynamics of 19 treatment-related complications across multiple cancer types. By characterizing time-specific incidence patterns, this study will identify critical surveillance windows for each complication, informing the development of evidence-based, temporally-optimized survivorship care protocols that can be adapted to diverse healthcare settings. Clinical Trial: The study protocol (version_3_03-15-2022) was approved by the French ethics committee (Comité de protection des personnes Ile de France IV, ID-RCB: 2020-A01130-39). The study is registered on ClinicalTrials.gov (NCT04052126). All participants will be asked to sign and date an informed consent form. The results will be published in peer-reviewed journals and academic conferences. The study data have been declared to the ‘Commission nationale de l'informatique et des libertés’ via the reference methodology MR-001 n° R201-001-006.
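The primary outcome above distinguishes new-onset complications from persistent ones detected at a prior assessment. That distinction reduces to simple boolean logic per patient and timepoint; the sketch below uses entirely hypothetical data and column names, not study records.

```python
import pandas as pd

# Hypothetical screening results: one row per patient, complication flag
# at the first two assessments (T1 = 1 month, T2 = 6 months).
df = pd.DataFrame({
    "patient": [1, 2, 3, 4],
    "T1": [True, False, False, True],
    "T2": [True, True, False, False],
})

# New-onset at T2 = present at T2 but absent at T1;
# persistent at T2 = present at both assessments.
new_onset_t2 = int((df["T2"] & ~df["T1"]).sum())
persistent_t2 = int((df["T2"] & df["T1"]).sum())
```

Dividing each count by the number of patients assessed at T2 would give the time-specific incidence the protocol describes.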
Background: Kidney transplant recipients present reduced physical function and a high prevalence of cardiometabolic complications, which increase cardiovascular risk and compromise long-term graft outcomes. Resistance training has demonstrated beneficial effects in this population; however, previous interventions have shown heterogeneity in load prescription and have not incorporated objective monitoring of movement velocity. Velocity-based resistance training allows precise regulation of exercise intensity and fatigue, potentially improving safety and individualization of exercise prescription in clinical populations. Objective: This study aims to evaluate the effects of a 12-week velocity-based resistance training program on renal function and metabolic health in kidney transplant recipients and to compare two different load control strategies based on movement velocity. Methods: This pilot randomized controlled trial includes adult kidney transplant recipients with stable graft function. Participants are randomly assigned (1:1) to either a maximal velocity group, in which sets are terminated at a 20% velocity loss threshold, or a constant submaximal velocity group performing repetitions at 50% of individual maximal velocity. Both groups complete three supervised sessions per week for 12 weeks with real-time velocity monitoring. Primary outcomes include renal and metabolic health domains assessed through venous blood analysis. Serum creatinine was predefined as the hierarchical primary renal endpoint, and high-density lipoprotein cholesterol (HDL) as the hierarchical primary metabolic endpoint. Estimated glomerular filtration rate (eGFR) will be calculated using the CKD-EPI equation. Secondary outcomes include blood pressure, body composition, muscular strength, metabolic syndrome criteria, and force–velocity profile. Data will be analyzed using analysis of covariance and linear mixed-effects models following a predefined hierarchical inferential strategy.
Results: The intervention began in June 2025. At the time of manuscript submission, nine participants have completed the intervention (five in the maximal velocity group and four in the submaximal velocity group), and the remaining participants are currently undergoing the training program. Data collection and monitoring are ongoing, and final analyses are planned after completion of the intervention phase. Conclusions: Conclusions will be reported once the intervention phase and the predefined hierarchical analyses are complete. Clinical Trial: NCT07370727
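The 20% velocity-loss stopping rule described in the methods above can be sketched as a simple per-set check; the velocity values below are hypothetical, and the function name is illustrative rather than anything from the trial's software.

```python
def stop_rep_index(rep_velocities, loss_threshold=0.20):
    """Index of the first repetition whose mean concentric velocity has
    dropped more than `loss_threshold` below the fastest rep so far,
    or None if the set never crosses the threshold."""
    best = rep_velocities[0]
    for i, v in enumerate(rep_velocities):
        best = max(best, v)
        if v < best * (1.0 - loss_threshold):
            return i
    return None

# Hypothetical bar velocities (m/s) for one set: the fifth rep (index 4)
# falls below 80% of the set's best velocity, so the set stops there.
cutoff_rep = stop_rep_index([0.80, 0.82, 0.75, 0.70, 0.64])
```

Tracking the loss against the fastest rep of the current set, rather than a fixed baseline, is what makes the rule adapt to day-to-day readiness.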
Background: Routine monitoring of patient-reported outcomes (PROs) during cancer treatment improves symptom control, quality of life, and in some cases even survival, yet real-world uptake and sustained engagement with PRO monitoring remain suboptimal. One barrier is patients’ motivation to complete repeated assessments over extended periods of time. Gamification has been shown to improve engagement with digital health interventions, but its application to PRO monitoring has been minimally explored. User-centered design approaches are needed to ensure that gamified PRO tools are acceptable, usable, and responsive to patient and clinician needs, especially for older adult users. Objective: The study aimed to develop a gamified symptom monitoring web app for older adult cancer patients using a multi-stage, iterative, and user-centered approach. Methods: Phase 1 involved a survey of 216 older adults with chronic health conditions on mHealth and gamification preferences. Phase 2 involved interviews with seven older adult cancer survivors on the app concept and wireframes. Phase 3 involved an iterative process of interviews with five clinicians and usability tests with nine older adult cancer survivors to refine the web-app prototype for deployment. Results: In Phase 1, older adults with chronic conditions reported high overall familiarity with mobile technology and generally favorable attitudes towards gamification, with learning-oriented elements rated mostly appealing (mean = 4.3/5). Concerns for a gamified mHealth app included data privacy and perceived trivialization of health. Phase 2 interviews demonstrated strong interest in longitudinal symptom visualization and clinician-sharable reports; gamified travel-based learning content was viewed as engaging by most participants but was preferred as optional rather than mandatory.
In Phase 3, iterative clinician interviews and patient usability testing led to substantial refinements, including simplified navigation, enhanced accessibility for older users, clearer score interpretation with embedded educational videos, and de-emphasis of the gamified travel learning component. Across usability testing rounds, the number of user experience problems decreased substantially, indicating improved usability of the final prototype. Conclusions: Using a multi-stage, mixed-methods, user-centered design process, we developed AthenaCompanion, a gamified web-based app for PRO monitoring tailored to older adults undergoing cancer treatment. Findings highlight the importance of emphasizing clinical utility, clarity of symptom feedback, and low-pressure, optional gamification elements. This work demonstrates the feasibility of integrating gamification into PRO monitoring and provides a foundation for future work evaluating long-term usability, engagement, and clinical effectiveness in real-world oncology care. Clinical Trial: N/A
Background: Given the rising incidence of skin cancer, efficient triaging of skin lesions is ever-more critical to ensure healthcare systems can cope. To this end, skin cancer mobile healthcare apps can be leveraged, but it is essential to understand how patients’ trust in and adherence to the apps’ advice can be optimised, so that the apps are neither underused nor overused and unnecessary appointments are minimised without compromising detection. Objective: To test whether human-teledermatologist supervision in an AI-driven skin cancer app increases patient trust and adherence intentions, compared with AI-only advice, and how this is influenced by patient motivational context (curiosity or concern) and risk assessment level. Methods: Randomised controlled crossover online experiment, conducted in May–June 2025 as a single-centre study among Dutch adults via an academic hospital-based patient panel. Of the 2,707 patient panel members, 879 participated (response rate: 32.5%). Participants were aged 18–93 (mean age: 62.5), 50.4% female, 21% with prior skin cancer. Participants were randomly allocated to motivation conditions (concern vs curiosity). Participants completed four simulated mobile app trials varying in advice source (AI-only vs hybrid teledermatologist+AI) and skin cancer risk assessment (high vs low). After each scenario, participants rated trust (0–100 scale) and intention to adhere to the advice (recoded ordinal variable). Models included advice source, risk assessment, motivation, interactions, and a participant random intercept. Results: Trust was significantly higher for hybrid risk assessments (mean = 76.9, SE = 0.73) compared to AI-only (mean = 65.68; p<.001; partial η² = .22). The odds of adherence were 1.7 times greater with hybrid versus AI-only (OR = 1.70, 95% CI [1.28, 2.22], p<.001).
High-risk assessments increased adherence (OR=2.9, 95% CI [2.2, 3.8], p<.001), with a moderated effect by motivation (p=.033), showing stronger adherence with concern versus curiosity. The effect of advice source on adherence was fully mediated by trust. Conclusions: Teledermatologist supervision in AI-driven skin cancer apps robustly increases patient trust and adherence intentions, especially for high-risk advice and concerned users. Integrating human supervision with AI supports maximising patient adherence to advice, and thus improving triaging efficiency. Clinical Trial: The study was preregistered on the Open Science Framework (https://osf.io/64pjn/) on 26th May 2025.
Background: Endometrial cancer incidence is rising globally, yet early detection remains hampered by the subjectivity and resource intensity of conventional diagnostics. There is a critical need for non-invasive, interpretable tools that integrate structured clinical data with unstructured medical knowledge. Objective: To develop and validate an AI-Agent–Based Endometrial Cancer Prediction (AIECP) model designed to enhance risk assessment accuracy and clinical interpretability. Methods: A total of 3,959 patient records were collected, and twelve machine learning algorithms were evaluated. The top five performing models were integrated using weighted soft voting to enhance predictive accuracy. Semantic embeddings of medical dialogues were generated using Sentence-BERT and stored in a vector database to enable context-aware retrieval. A locally fine-tuned large language model was then employed to synthesize classification results and retrieved knowledge, providing interpretable diagnostic explanations. The performance of these models was evaluated using several metrics: accuracy, precision, recall, F1-score, ROC-AUC, and PR-AUC, supplemented by a usability and feasibility evaluation involving 100 patients suspected of endometrial cancer. Results: The soft voting ensemble achieved a PR-AUC of 0.898 and a ROC-AUC of 0.724, outperforming all individual models. Given the pronounced class imbalance in the dataset, PR-AUC was emphasized as the primary performance metric, as it provides a more clinically meaningful assessment for early cancer risk stratification. Sentence-BERT embeddings demonstrated superior performance compared to conventional embedding methods in document classification tasks, achieving F1-scores of 0.95. In the case study, the predictions generated by the AIECP model exhibited a high level of concordance with clinical diagnoses. Additionally, user feedback indicated a high satisfaction rate, with an average rating of 4.8 out of 5. 
Conclusions: By integrating ensemble learning, knowledge retrieval, and contextual reasoning, the AIECP model effectively bridges data-driven inference and clinical decision support, facilitating real-world clinical translation and future deployment.
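The weighted soft-voting step described in the methods above averages per-model class probabilities with fixed weights. A minimal scikit-learn sketch follows; the base learners, weights, and synthetic imbalanced data are illustrative only, not the study's top five models, and PR-AUC is computed as average precision, mirroring the abstract's choice of primary metric.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced dataset (~85% negative class), stand-in for patient records.
X, y = make_classification(n_samples=600, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Weighted soft voting: predicted probabilities are averaged with per-model weights.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",
    weights=[2, 3, 1],  # illustrative weights, not those of the study
).fit(X_tr, y_tr)

# PR-AUC (average precision) as the primary metric under class imbalance.
pr_auc = average_precision_score(y_te, ensemble.predict_proba(X_te)[:, 1])
```

Under heavy imbalance, PR-AUC penalises false positives on the rare class far more sharply than ROC-AUC, which is why the abstract emphasises it.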
Background: Delays in completing cancer screening diminish the preventive benefits of early detection, particularly among women receiving care in Federally Qualified Health Centers (FQHCs). Objective: This study examined factors associated with time to cancer screening completion among women aged 50 years or older who participated in an SMS text message outreach campaign, in a large 56-clinic FQHC network. Methods: The study included female patients aged 50 years or older who received at least three reminder SMS messages encouraging completion of any of the following overdue cancer screening tests: (a) mammogram, (b) FIT/Cologuard, or (c) HPV/Pap test.
The outcome variable was the time (in days) to the completion of the cancer screening test following the initial SMS reminder. The primary independent variable was the type of cancer screening test: mammogram (breast cancer screening), FIT or Cologuard (colorectal cancer screening), or HPV/Pap testing (cervical cancer screening). Other independent variables included sociodemographic characteristics, health status and access to care variables, and health-related social needs variables, including food insecurity, social isolation, housing insecurity, and transportation challenges.
A Cox proportional hazards model was applied to quantify the associations between time to screening completion and the independent variables and covariates. All analyses were performed using R version 4.4.2 in RStudio. Hazard ratios (HRs) and their 95% confidence intervals were obtained by exponentiating the model coefficients. Results: The median survival times (in days) for the overall cohort, HPV/Pap, mammogram, and FIT/Cologuard groups were 59.0 (95% CI: 53–65), 72.5 (95% CI: 64–86), 52.0 (95% CI: 43–64), and 52.0 (95% CI: 52–53), respectively. Compared with patients who were overdue for HPV/Pap screening, patients in the FIT group (HR = 1.65, 95% CI: 1.34–2.05, p < 0.001) and the mammogram group (HR = 1.41, 95% CI: 1.11–1.78, p < 0.001) had a significantly higher likelihood of completing screening sooner, reflecting shorter times to screening completion. Screening positive for transportation as a social need was associated with delayed screening completion (HR = 0.74, 95% CI: 0.55–0.99, p = 0.045). Conclusions: These findings indicate that transportation barriers are associated with longer time to cancer screening completion among women aged 50 years or older. In addition, the slower completion of HPV/Pap screening compared with FIT and mammogram suggests that cervical cancer screening may require more intensive follow-up, tailored outreach messages, or enhanced counseling to reduce delays in completion time.
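The conversion from Cox model coefficients to hazard ratios mentioned above is plain arithmetic: HR = exp(β), with a Wald interval from the coefficient's standard error. The coefficient and SE below are illustrative values chosen only to land near the reported HR of 1.65, not the study's fitted estimates.

```python
import math

def hazard_ratio(coef, se, z=1.96):
    """Hazard ratio and Wald 95% CI from a Cox model coefficient:
    HR = exp(beta), CI = exp(beta -/+ z * SE)."""
    return (math.exp(coef),
            math.exp(coef - z * se),
            math.exp(coef + z * se))

# Illustrative numbers only: beta = 0.50 corresponds to HR ~1.65,
# the magnitude reported for the FIT/Cologuard group.
hr, lo, hi = hazard_ratio(0.50, 0.11)
```

An HR above 1 here means faster screening completion, since the "event" in this model is completing the test.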
Background: Prediabetes is highly prevalent and increasing globally, yet lifestyle interventions remain underutilized. Artificial intelligence (AI)-driven mobile health tools can help scale diabetes prevention efforts, but the key factors driving their success are not well understood. Objective: This prospective study aims to characterize the most valued features and the role of user engagement on outcomes in a fully automated mHealth intervention for diabetes prevention. Methods: Data from 151 participants with prediabetes and overweight or obesity assigned an AI-based Diabetes Prevention Program (Sweetch, Sweetch Ltd.) in a parent RCT (NCT05056376) were analyzed. Engagement (defined as the total number of days on which the app was used) was categorized into tertiles (low, medium, high). Baseline characteristics were compared across engagement groups using ANOVA, Kruskal-Wallis, and chi-square tests, and regression models assessed the association between engagement and achievement of diabetes risk reduction outcomes (≥5% weight loss, ≥4% weight loss with ≥150 min/week of activity, or ≥0.2-point A1C reduction at 12 months). Perceived usefulness of intervention features was surveyed at 12 months. Results: At 12 months, median engagement was 98 days (IQR: 34–232), with most participants (75.5%) demonstrating a decreasing engagement trajectory over time. Older age (p < 0.001) and lower baseline BMI (p < 0.05) were significantly associated with higher engagement. High engagement was significantly associated with achieving the composite diabetes risk reduction outcome (OR: 2.59; 95% CI: 1.11–6.01), ≥5% weight loss (OR: 3.31; 95% CI: 1.16–9.42), and ≥0.2% A1C reduction (OR: 3.57; 95% CI: 1.19–10.75) compared to low engagement. The app features perceived most useful in achieving participant health goals were weight tracking, activity tracking, and the digital scale.
Conclusions: Higher engagement with an AI-driven intervention requiring no human intervention was associated with improved diabetes risk reduction. Contrary to concerns about lower digital literacy, older adults engaged with the intervention the most. Features related to weight and physical activity tracking were most valued by patients in the program. Clinical Trial: ClinicalTrials.gov Identifier: NCT05056376
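The tertile categorization of engagement described in the methods above is a quantile split. A one-line pandas sketch, using entirely hypothetical engagement counts rather than study data:

```python
import pandas as pd

# Hypothetical engagement values (total days of app use over 12 months).
engagement_days = pd.Series([12, 34, 45, 60, 80, 98, 150, 232, 300])

# Tertile split into low/medium/high engagement groups, as in the analysis.
engagement_group = pd.qcut(engagement_days, q=3, labels=["low", "medium", "high"])
```

`qcut` cuts at the empirical 33rd and 67th percentiles, so each group holds roughly a third of participants regardless of how skewed the engagement distribution is.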
Background: The proliferation of artificial intelligence (AI) models in health has been accompanied by a fundamental but largely unexamined assumption: that the availability of health data and computational capacity is, in itself, sufficient justification for training new models. This assumption has driven an exponential increase in publications reporting AI applications in clinical and public health domains, yet the proportion of these models that demonstrably improve patient outcomes, clinical decision-making, or health system efficiency remains remarkably low. The field lacks a systematic pre-training evaluation framework that distinguishes between what is technically trainable and what is scientifically and socially necessary. Objective: This paper proposes a methodological framework for determining when training an AI model in health is scientifically justified and when it is not. We argue that the decision to train should be treated as a consequential resource allocation decision, subject to the same standards of justification applied to any health intervention, rather than as a neutral technical exercise. Methods: We conducted a conceptual and integrative review of the current AI-in-health literature, examining prevailing training practices, outcome reporting standards, and the gap between predictive performance and clinical impact. Drawing on frameworks from health technology assessment, implementation science, and philosophy of science, we synthesized a set of pre-training criteria organized around four dimensions: scientific necessity, clinical relevance, economic justification, and social accountability. Results: We present the TRAIN-H (Training Rationale Assessment for AI in Health) framework, a structured decision tool comprising seven core questions that must be affirmatively answered before model development is justified. 
The framework formalizes the principle that prediction without consequence is not impact, and that accuracy without altered conduct, workflow, or cost does not constitute a valid reason to build a model. We identify six explicit conditions under which training should not proceed and discuss implications for research ethics, peer review, funding allocation, and graduate education. Conclusions: Training AI in health is an investment of computational, human, institutional, and public resources. Like any health intervention, it requires a clear hypothesis, a defined population benefit, and a measurable endpoint. The absence of these elements does not merely weaken a study—it eliminates the justification for the model’s existence. We call for the adoption of pre-training justification standards across research institutions, funding bodies, and editorial boards.
Background: Large language models are increasingly deployed in mental health applications, yet growing evidence suggests they encode algorithmic biases that influence clinical outputs. Because these models now mediate patient-facing decisions, such biases carry the potential for direct harm. Whether they systematically affect psychiatric diagnosis across demographic groups remains underexplored. Objective: To examine whether large language models (LLMs) exhibit implicit demographic biases when generating psychiatric diagnoses. Methods: We developed 1,152 synthetic clinical vignettes using a matched-pair design that manipulated gender, race/ethnicity, age, socioeconomic status, English proficiency, and urbanicity while holding clinical content constant. Vignettes were divided into control (unambiguous anorexia nervosa) and ambiguous conditions designed to permit differential diagnosis. Ten LLM configurations across five model families were tested. Results: Control vignettes produced near-unanimous anorexia nervosa diagnoses (M = 100.0%), while ambiguous vignettes elicited greater variability (M = 23.6%). Inter-model agreement was moderate for ambiguous vignettes (Fleiss' κ = 0.410, 95% CI: 0.397–0.422). Mixed-effects logistic regression with LLM as a random intercept revealed significant demographic biases: Black patients were over six times more likely to receive a major depressive disorder diagnosis than White patients with identical presentations (OR = 6.09, 95% CI: 5.13–7.24), Latine patients were over nine times more likely (OR = 9.57, 95% CI: 8.00–11.45), and Asian patients were nearly three times more likely to receive an anorexia nervosa diagnosis (OR = 2.88, 95% CI: 2.44–3.42). Female patients were less likely than males to be diagnosed with anorexia nervosa (OR = 0.43, 95% CI: 0.37–0.49). 
Conclusions: These findings demonstrate that LLMs exhibit systematic demographic biases in psychiatric diagnosis even when clinical content is held constant, revealing measurable patterns that can inform improvements to training data, model architecture, and clinical deployment frameworks.
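The study above reports odds ratios from a mixed-effects logistic regression with a model-level random intercept; that full model is beyond a short sketch, but the basic odds-ratio arithmetic underlying such estimates is simple. The counts below are entirely hypothetical, chosen only to land near an OR of 6.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table
    (a, b = diagnosis given / not given in group 1; c, d = same in group 2)."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return (or_hat,
            math.exp(math.log(or_hat) - z * se),
            math.exp(math.log(or_hat) + z * se))

# Hypothetical counts: 60/100 vignettes diagnosed with MDD in one group
# versus 20/100 in the matched group gives OR = 6.0.
or_hat, lo, hi = odds_ratio_ci(60, 40, 20, 80)
```

Because the vignettes are matched pairs with identical clinical content, any OR far from 1 isolates the demographic manipulation as the driver.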
Background: Informed consent is a cornerstone of medical ethics, ensuring patients understand the risks, benefits, and alternatives of procedures before making healthcare decisions. However, challenges such as complex medical language, time constraints, and variable patient literacy often hinder comprehension. Recent advancements in artificial intelligence offer new opportunities to improve the informed consent process. Objective: This systematic review assesses AI’s effectiveness in enhancing patient understanding and decision-making. Methods: Following PRISMA guidelines, a comprehensive literature search was conducted in PubMed, Embase, and the Cochrane Library to identify studies published in the last five years on AI’s role in informed consent. Additionally, the reference lists of selected articles were manually reviewed to include any additional relevant studies. Descriptive and statistical analyses were conducted to evaluate AI’s effectiveness, along with tests for homogeneity to assess the feasibility of a meta-analysis. Results: A total of 33 studies met the inclusion criteria and were categorized by AI application: patient education, consent documentation, and direct AI-assisted consent acquisition. Overall, AI platforms provided accurate information, significantly enhancing patient comprehension across various specialties while also reducing anxiety and consultation times. However, concerns remained regarding AI’s lack of human empathy, potential inaccuracies, and ethical issues such as data privacy. Conclusions: AI has the potential to improve the informed consent process, but further research is needed to address ethical concerns and ensure its effective, patient-centered integration into clinical practice. Clinical Trial: The study protocol was registered on the International Prospective Register for Systematic Reviews (PROSPERO; #420250652460).
Background: Incidental detection of abdominal aortic aneurysms (AAAs) has increased with widespread imaging, while traditional surveillance workflows remain fragmented and clinician-dependent. We describe the implementation and system-wide performance of the System to Track Abnormalities of Importance Reliably (STAIR™), a centralized, artificial intelligence–assisted program designed to identify AAAs, assign guideline-based surveillance, and ensure longitudinal tracking within an integrated healthcare system. Objective: To evaluate the implementation and performance of a centralized, artificial intelligence–enabled surveillance program designed to identify, risk stratify and longitudinally track patients with abdominal aortic aneurysms across an integrated healthcare system. Methods: This descriptive cohort study included all patients enrolled in the STAIR™ AAA surveillance program following its implementation in December 2022. Case identification was performed using rule-based natural language processing of radiology reports, structured electronic health record queries, clinician referral, and automated lost-to-follow-up searches. All cases underwent centralized clinical review, with surveillance intervals assigned according to Society for Vascular Surgery guidelines. Patients were followed until a predefined administrative or clinical endpoint was reached. Outcomes were descriptive and included identification pathways, surveillance assignments, endpoint resolution, imaging utilization, and operative activity. Results: A total of 8,464 patients were enrolled. Identification occurred via problem list queries (59%), radiology natural language processing (29%), clinician referral (7%), and automated lost-to-follow-up searches (5%). Following centralized review, 3.7% required immediate imaging, 45.3% of patients were assigned biennial duplex surveillance, 9.5% were assigned five-year surveillance, and 20.6% were referred for vascular surgery evaluation. 
Prior AAA repair at enrollment was identified in 20.6% of patients. Among 4,718 patients who reached a definitive endpoint, all had documented final disposition, including transfer of care outside the health system (57.4%), no further follow-up required (13.2%), prior repair, death, patient refusal, or inability to establish contact. Duplex ultrasonography accounted for approximately 80% of surveillance imaging. Elective AAA repair volume averaged approximately 135 cases annually during the study period. Conclusions: In a large integrated healthcare system, a centralized, artificial intelligence–assisted surveillance infrastructure was operationally feasible and supported comprehensive identification, guideline-based surveillance assignment, and complete endpoint adjudication for patients with AAAs. These findings describe a scalable, workflow-focused approach to population-level AAA surveillance that is independent of care setting and emphasizes clinical oversight rather than autonomous decision-making. Clinical Trial: NA
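The rule-based natural language processing and guideline-based interval assignment described above can be sketched in a few lines. Everything here is an assumption for illustration: the regex, the function name, and the diameter thresholds (loosely following published SVS surveillance guidance) are not the STAIR™ program's actual rules, which are not public.

```python
import re

def surveillance_interval_months(report_text):
    """Extract the largest aortic diameter (in cm) from a radiology report
    and map it to an illustrative surveillance interval in months.
    Returns None when no aneurysmal measurement is found."""
    sizes = [float(m) for m in re.findall(r"(\d+\.\d+)\s*cm", report_text)]
    if not sizes:
        return None
    diameter = max(sizes)
    if diameter < 3.0:
        return None   # below aneurysmal threshold
    if diameter < 4.0:
        return 36     # small AAA: multi-year imaging
    if diameter < 5.0:
        return 12     # annual imaging
    if diameter < 5.5:
        return 6      # short-interval imaging
    return 0          # refer for vascular surgery evaluation
```

In the real program, centralized clinical review sits on top of any automated rule, which is why the authors stress oversight rather than autonomous decision-making.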
Background: Machine learning methods succeed in stress detection under controlled laboratory conditions. However, transferring these models to real-world environments remains challenging. This performance gap is often attributed to signal noise, overlooking fundamental issues in evaluation methodology and context-aware modeling. Objective: This work discusses the obstacles preventing the transition to real-world deployment and provides recommendations towards robust real-world stress detection methods. Methods: We synthesize current literature to map six critical challenges: high inter-subject physiological variability, motion/environmental artifacts, temporal signal misalignment, lack of contextual differentiation, biased ground truth labels, and inherent class imbalance in ambulatory data. Results: This perspective provides methodological recommendations for designing, evaluating, and reporting wearable stress detection studies, and strategies to avoid common experimental pitfalls, to ensure robust, trustworthy stress monitoring in real-world settings. Conclusions: Reliable mHealth stress monitoring requires a shift from laboratory-based models to context-aware, subject-independent frameworks. By adopting the recommended evaluation and preprocessing standards, researchers can ensure that reported performance metrics reflect actual deployment reliability, improving the utility of wearable-based mental health interventions.
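The subject-independent evaluation this perspective calls for is typically implemented as leave-one-subject-out cross-validation. A minimal sketch with scikit-learn follows; the features, labels, and classifier are toy stand-ins, not a recommended stress-detection pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Toy data: 4 subjects, 20 windows each, 5 physiological features.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = rng.integers(0, 2, size=80)
subjects = np.repeat(np.arange(4), 20)

# Subject-independent evaluation: each fold tests on a subject the model
# has never seen, avoiding the within-subject leakage the paper warns against.
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
```

Randomly shuffled window-level splits, by contrast, let windows from the same subject land in both train and test sets, inflating laboratory-style accuracy figures.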
Background: The rapidly growing elderly population in Japan has increased demand for home care services. As a result, visiting nurses spend approximately 40% of their working time on documentation. Automated documentation using large language models (LLMs) shows potential but faces hallucination risks and lack of patient-specific context. Although retrieval-augmented generation (RAG) has emerged to address these limitations through knowledge embedding, existing healthcare RAG systems focus on single-patient contexts and remain unexplored for Japanese clinical documentation. Objective: This study aims to develop and evaluate an Adaptive Cascaded RAG (AC-RAG) system that safely integrates cross-patient knowledge through four-stage hierarchical filtering and adaptive strategy selection for automating Japanese nursing documentation. Methods: We developed a four-stage cascaded retrieval pipeline with disease-gated filtering, demographic similarity scoring, adaptive semantic thresholds, and context volume control. The system selects optimal knowledge integration strategies (Hybrid, History-Only, Cross-Patient-Only, No-RAG) based on data availability. We evaluated 89 home nursing consultations across two Automatic Speech Recognition (ASR) systems, comparing AC-RAG against Few-Shot Generated Knowledge Prompting (FS-GKP). Results: The conservative extraction achieved 70.8% higher precision than FS-GKP. For RAG-based summary generation, semantic similarity improved 28% (P<.001, Cohen's d=1.69–1.84), TF-IDF cosine similarity increased 24% (P<.001), and character-level BLEU improved 47% (P<.001). Processing speed increased 89–91% with a 59–61% cost reduction. Ablation analysis demonstrated the hybrid strategy achieved the highest performance (cosine similarity: 0.266±0.038). Cross-patient-only showed lower performance than the no-RAG baseline (cosine similarity: 0.175 vs. 
0.192, P=.40, d=0.27), suggesting that cross-patient knowledge provides benefit only when combined with patient history. Conclusions: AC-RAG demonstrates superior accuracy, semantic quality, and computational efficiency. The incremental benefit of cross-patient retrieval requires validation in larger samples. At $0.043–0.054 per consultation, the system is economically feasible for deployment in Japanese home care settings. However, moderate entity recall (0.493–0.519) indicates the system is best suited for generating draft documentation that requires nurse review rather than fully autonomous operation.
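The abstract names four knowledge-integration strategies (Hybrid, History-Only, Cross-Patient-Only, No-RAG) selected by data availability. A minimal illustrative sketch of that selection logic, assuming a simple availability check (the function name and inputs are hypothetical, not the authors' implementation):

```python
def select_strategy(has_patient_history: bool, n_cross_patient_matches: int) -> str:
    """Pick a knowledge-integration strategy from available context.

    Illustrative only: AC-RAG's actual selection criteria (e.g. semantic
    thresholds, context volume limits) are not specified in the abstract.
    """
    if has_patient_history and n_cross_patient_matches > 0:
        return "Hybrid"
    if has_patient_history:
        return "History-Only"
    if n_cross_patient_matches > 0:
        return "Cross-Patient-Only"
    return "No-RAG"
```

Under this sketch, a patient with documented history but no cross-patient matches would fall back to History-Only rather than retrieving unrelated records.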
Background: Clinicians exhibit considerable variability in diagnosing and managing thyroid nodules. While large language models (LLMs) show promise in processing medical data, their effectiveness and reliability in standardizing the interpretation of thyroid nodule ultrasound text reports have yet to be thoroughly validated. Objective: To assess two LLMs, DeepSeek-R1 and ChatGPT-4o, in interpreting thyroid nodule ultrasound text reports, emphasizing accuracy in benign-malignant differentiation, agreement on Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) classification and management recommendations, and the stability of each task. Methods: We analyzed 1,063 ultrasound text reports from three medical centers, with 306 nodules confirmed by histopathology. Each nodule's report was processed through both LLMs using standardized prompts, repeated five times, with the final result determined by mode voting. Results: DeepSeek-R1 outperformed ChatGPT-4o in differentiating benign from malignant nodules, with superior sensitivity (0.879 vs. 0.692), accuracy (0.729 vs. 0.644), and area under the curve (AUC) (0.694 vs. 0.632). However, senior radiologists achieved notably better results, with higher accuracy (0.804) and AUC (0.865) than both LLMs. In C-TIRADS classification, DeepSeek-R1 also outperformed ChatGPT-4o (κ=0.770 vs. κ=0.688, Δκ=0.083 [95% CI: 0.048, 0.122]). Both models showed substantial agreement with clinicians on management recommendations (κ=0.606 vs. κ=0.608, Δκ=-0.002 [95% CI: -0.044, 0.041]). In terms of stability, the LLMs exhibited almost perfect agreement in C-TIRADS classification (α=0.864 vs. α=0.866, Δα=-0.003 [95% CI: -0.023, 0.017]) and management recommendations (κ=0.853 vs. κ=0.849, Δκ=0.004 [95% CI: -0.026, 0.033]). However, in benign-malignant discrimination, DeepSeek-R1 demonstrated significantly greater stability than ChatGPT-4o (κ=0.849 vs. κ=0.550, Δκ=0.260 [95% CI: 0.191, 0.321]).
Conclusions: Our study highlights the potential of LLMs for interpreting thyroid nodule ultrasound text reports. DeepSeek-R1 outperformed ChatGPT-4o in benign-malignant differentiation accuracy and classification consistency, whereas the two models performed similarly in management recommendations.
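The Methods determine each final result by mode voting over five repeated runs per report. That aggregation step can be sketched as follows (illustrative only; the example labels are hypothetical):

```python
from collections import Counter

def mode_vote(repeated_outputs):
    """Return the most frequent answer across repeated LLM runs.

    The study repeated each standardized prompt five times and took the
    mode; with an odd number of runs and binary labels, ties cannot occur.
    """
    counts = Counter(repeated_outputs)
    answer, _ = counts.most_common(1)[0]
    return answer
```

For example, `mode_vote(["malignant", "malignant", "benign", "malignant", "benign"])` yields `"malignant"`; the same function applies to C-TIRADS categories or management recommendations.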
Systematic literature reviews (SLRs) are critical for evidence synthesis and play a central role in supporting research, policy development, and evidence‑based decision‑making across healthcare and related disciplines. However, traditional SLRs are resource-intensive and time-consuming, often requiring months of manual effort to screen thousands of records, extract data, and maintain methodological rigor. As global scientific output continues to grow exponentially, these operational challenges have intensified, contributing to longer completion timelines, greater workforce burden, and a heightened risk that reviews become outdated shortly after publication. In response, there is growing interest in artificial intelligence (AI) approaches to enhance the efficiency and scalability of SLRs.
AI tools now offer support for a broad range of review tasks, including assisting with search strategy development, identifying relevant concepts, prioritizing records for screening, and supporting data extraction and risk‑of‑bias assessments. AI can significantly accelerate labor-intensive stages, reduce human error during repetitive tasks, and enable the synthesis of evidence bases that might otherwise be impractical to review manually. However, these efficiencies must be balanced against the risks associated with AI, including bias, lack of transparency, variable outputs, and hallucinations (outputs that appear plausible but are factually incorrect).
A human-in-the-loop approach is therefore essential to validate AI outputs and maintain scientific integrity. Human expertise remains critical for defining research questions, validating search strategies, confirming study eligibility, interpreting nuanced data, and making final judgments on quality and risk of bias. Clear methodological guidance is required to support teams in integrating AI tools responsibly, transparently, and reproducibly into SLR workflows. Methodological considerations include selecting appropriate tools, defining oversight strategies, and applying performance metrics such as precision and recall.
This paper aims to provide methodological guidance on the effective integration of AI into each stage of the SLR process, drawing on both published literature and the authors’ real-world experience. We outline key considerations for selecting and implementing AI tools while maintaining human oversight. We also discuss how to maintain transparency, auditability, and alignment with established standards, including PRISMA‑P, PRISMA‑trAIce, and emerging guidance from regulators and health technology assessment bodies. Finally, we present future directions for responsible AI use in SLRs.
AI should complement, not replace, human judgment. When implemented within a human-in-the-loop framework, AI has the potential to accelerate evidence synthesis, enabling faster, scalable, and rigorous reviews while preserving transparency and reproducibility.
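The performance metrics named above, precision and recall for AI-assisted screening, follow their standard definitions. A generic sketch (not tied to any specific SLR tool; the counts are hypothetical):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision and recall for record screening, where a 'positive' is a
    record the AI tool flags as relevant.

    tp: relevant records correctly flagged
    fp: irrelevant records incorrectly flagged
    fn: relevant records the tool missed (the costly error in SLRs)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

In SLR screening, recall is usually prioritized over precision, since a missed eligible study (a false negative) undermines the completeness of the evidence synthesis.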
Background: Pregnant women with ICD-10 affective or stress-related disorders face elevated risk for perinatal depression and anxiety, yet evidence on digital non-pharmacologic interventions for this population remains limited. Objective: To evaluate the effectiveness of an 8-week digital mindfulness-based intervention (eMBI) compared with treatment as usual (TAU) among pregnant women with ICD-10 affective or stress-related disorders participating in a randomized clinical trial. Methods: This prespecified secondary analysis was conducted within a multicenter randomized controlled trial in Baden-Württemberg, Germany. Pregnant women aged ≥18 years with elevated depressive symptoms (Edinburgh Postnatal Depression Scale [EPDS] >9) and ICD-10–diagnosed affective or stress-related disorders were randomized 1:1 to eMBI or TAU. The intervention consisted of eight weekly app-based mindfulness sessions (45 minutes each) delivered during gestational weeks 29–36, with no direct therapist contact. Primary outcome was depressive symptom severity (EPDS) at 4–6 weeks postpartum. Secondary outcomes included EPDS at 6 months postpartum, generalized anxiety (STAI-S, STAI-T), and pregnancy-related anxiety (PRAQ-R). Analyses followed the intention-to-treat principle using mixed models for repeated measures and multiple imputation. Results: Of 5299 screened women, 147 met inclusion criteria for this subgroup analysis (intervention: n=73; control: n=74). Groups were comparable at baseline. The intervention group showed significantly greater reductions in EPDS scores at gestational week 34 (Δ=–2.21; P=.013), week 36 (Δ=–3.25; P=.013), and 4–6 weeks postpartum (Δ=–4.81; P=.007). Treatment effects remained robust under conservative missing-data assumptions. At 4–6 weeks postpartum, a higher proportion of participants in the intervention group achieved clinically meaningful improvement (42.5% vs 28.4%; adjusted odds ratio 1.56, 95% CI 1.19–2.05; P=.001). 
Anxiety outcomes followed a similar pattern, whereas pregnancy-related anxiety did not differ between groups. Conclusions: In this prespecified subgroup of pregnant women with ICD-10 affective or stress-related disorders, the use of a digital mindfulness intervention during pregnancy was associated with clinically meaningful reductions in depressive symptoms until 6 weeks postpartum. Even though effects at 6 months postpartum (T7) were smaller and statistically unstable across missing-data approaches, the digital mindfulness intervention effectively improved perinatal mental health in women with preexisting affective disorders, supporting its use as a safe, low-threshold alternative to pharmacological treatment during pregnancy and breastfeeding. Clinical Trial: DRKS00025697
Background: The medical black bag is synonymous with physicians, especially general practitioners who are expected to be ready to provide care across settings. The bag’s contents are likely to expand as digital tools proliferate. As portable diagnostics diversify, guidance is increasingly needed on which tools clinicians should choose and what this shift may mean for the physical examination and point-of-care assessment. Objective: The aim of this study is to map the current, the possible, and the future content of the medical black bag using anticipatory methods, and to provide a general, practice-oriented outline of how portable diagnostic technologies may evolve in primary care. Methods: National equipment lists and the World Health Organization’s MeDevIS database were compiled and filtered to define a contemporary reference set of reusable portable diagnostic instruments relevant to generalist practice. A one-year trend analysis using major professional and medical technology news sources was conducted to identify possible additions, screening for devices with diagnostic relevance, portability, digital capability, market presence, and evidence visibility. To extend the outlook to 2035, we performed a horizon scanning exercise using the same review period. The identified devices were grouped into thematic categories. Results: National equipment recommendations and World Health Organization lists yielded a stable core set of diagnostic tools used in routine primary care practice. Trend analysis and horizon scanning expanded this set by identifying possible and future additions of portable medical devices that can be used at the point of care. Overall, the identified technologies were increasingly digital, diverse, connected, and in some cases, AI supported, reflecting a trajectory toward more integrated and data-enabled diagnostics.
Conclusions: The medical black bag is likely to evolve from a stable set of familiar instruments toward a broader toolbox of portable and connected diagnostic devices. While these tools may expand the scope of bedside assessment and enable more reproducible and shareable clinical signs, their value depends on appropriate validation, usability, workflow integration, training, and supportive financial and organizational conditions. Regular evidence-informed updates of equipment recommendations, alongside practical implementation support, may help primary care systems adopt useful innovations while preserving the human dimensions of clinical care.
Background: Large language models (LLMs) have expanded the use of generative AI in exercise prescription, but the quality and safety of these recommendations in real-world practice remain uncertain. Objective: To compare resistance training prescriptions generated by ChatGPT (GPT-5.1) and Gemini (Flash 2.5) as evaluated by licensed physical education professionals. Methods: We conducted an analytical, quantitative, cross-sectional survey with 25 licensed professionals affiliated with the CREF20 council. Two 12-week resistance training programs for a 35-year-old woman with overweight were generated using a standardized prompt—one by each model—and then blindly labeled as Prescription A and Prescription B. Participants rated each prescription on a 5-point Likert scale across five dimensions (quality, clarity, relevance, safety, and usefulness). Data were analyzed in R using Shapiro–Wilk tests for normality and nonparametric comparisons (Wilcoxon and Mann–Whitney U tests). Prespecified subgroup analyses examined differences by age, professional experience, AI usage, and online coaching practices. Results: Across the 25 evaluators, no statistically significant differences were observed between ChatGPT and Gemini for the five rated dimensions (all P>.05). Gemini showed a non-significant trend toward higher perceived safety (P=.064; r≈0.37). Subgroup analyses by age, professional experience, AI usage, and online coaching practices likewise showed no significant differences between model outputs (all P>.05). Conclusions: ChatGPT and Gemini generated resistance training prescriptions perceived as moderately good and largely equivalent by licensed professionals. These findings suggest that LLMs may be useful as auxiliary tools for drafting training programs, but they do not yet demonstrate sufficient technical refinement to replace professional expertise, particularly regarding individualization, load progression, and systematic risk management.
Background: Individuals with infertility often experience substantial psychosocial distress. eHealth technologies have emerged as tools for delivering patient-centered care by addressing patients’ psychosocial needs. However, no systematic review has evaluated the overall impact of eHealth interventions on patient-reported outcomes and experiences among infertility patients. Objective: This study aimed to (1) examine the patient-centered care components incorporated in existing eHealth interventions for infertility care and (2) synthesize the most recent evidence regarding the effects of eHealth interventions on patient-reported outcomes and patient-reported experiences among infertility patients. Methods: A systematic review and meta-analysis were conducted by searching seven electronic databases: MEDLINE, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science, PsycInfo, and KoreaMed. The final search was performed in August 2025. Eligible studies were randomized controlled trials that assessed the impact of eHealth interventions on infertility patients, measured patient-reported outcomes and/or patient-reported experiences, and were published in English or Korean. Patient-centered care components were identified using a conceptual framework. Risk of bias was evaluated using the Cochrane risk-of-bias tool for randomized trials (RoB 2). Using a random-effects model, meta-analyses were conducted and reported as Hedges’ g with 95% confidence intervals (CIs). Subgroup analyses were performed to explore potential sources of heterogeneity. Publication bias was assessed, sensitivity analyses were performed, and the certainty of evidence was rated. Results: Twenty-five studies published between 2001 and 2025 were included. eHealth technologies included internet-based websites, mobile applications, real-time interactive platforms, self-monitoring devices, virtual reality devices, videos, and telephones.
The most common patient-centered care components were emotional support and quality improvement to enhance access to care. eHealth interventions had statistically significant effects on infertility patients’ patient-reported outcomes at post-intervention (g = 0.428, 95% confidence interval [CI]: 0.179–0.676, k = 18, n = 2,202) and at follow-up (g = 0.824, 95% CI: 0.024–1.623, k = 5, n = 448), with low certainty of evidence. However, no statistically significant effect was observed for patient-reported experiences (g = 0.094, 95% CI: −0.097 to 0.284, k = 9, n = 890). Subgroup analyses indicated that post-intervention patient-reported outcomes differed by outcome type, gender, and intervention type. Conclusions: eHealth interventions may support patient-centered infertility care by improving patient-reported outcomes. To date, eHealth interventions have addressed patients’ psychological needs and reduced the need for frequent clinic visits. To advance patient-centered infertility care, future studies should develop gender-specific eHealth interventions tailored to men with infertility or couples. Clinical Trial: PROSPERO (CRD42021290277)
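The pooled effects above are reported as Hedges’ g with 95% CIs. For reference, the per-study effect size underlying such a pooling can be sketched with the standard small-sample correction (a generic formula-level sketch, not the review’s own analysis code; the example inputs are hypothetical):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction J.

    m1/sd1/n1: intervention mean, SD, and sample size
    m2/sd2/n2: control mean, SD, and sample size
    """
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df)
    d = (m1 - m2) / s_pooled                 # Cohen's d
    j = 1 - 3 / (4 * df - 1)                 # Hedges' correction factor
    return j * d
```

For two groups of 50 with means 10 vs. 9 and SD 2, d = 0.5 and g ≈ 0.496; the correction matters most for small trials, which are common in this literature.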
Background: Machine-learning-enabled prognostic models are increasingly proposed to support surgical decision-making for degenerative lumbar disorders, yet their clinical adoption remains limited. Understanding how surgeons perceive these tools is critical for effective implementation, particularly given the high-stakes nature of spine surgery where decisions carry long-term functional consequences and medico-legal implications. Objective: This study aimed to explore how consultant-level spine surgeons perceive machine-learning-based prognostic tools, including their trustworthiness, clinical utility, usability, workflow integration, and implications for patient counselling and shared decision-making. Methods: A qualitative study using semi-structured one-to-one interviews was conducted with 11 consultant-level orthopaedic surgeons and neurosurgeons practising in Singapore (response rate: 73% of 15 invited). Participants had a mean of 10.9 years (range: 8-25 years) of spine surgery experience; 82% (n=9) were male. Interviews (range: 15-57 minutes) were transcribed verbatim and analysed using Braun and Clarke's six-step reflexive thematic analysis. Data collection continued until information power was achieved, with three additional interviews completed after thematic sufficiency was reached, as those participants had already consented. The study was designed and reported in accordance with COREQ criteria. Results: Three overarching themes were developed. Trust contingent on data integrity revealed that surgeons' confidence depended fundamentally on data quality, local representativeness, labelling credibility, and rigorous validation, with participants consistently emphasising population representativeness as a trust prerequisite. Surgeons operationalised trust through accessible performance metrics, with most identifying AUROC as their preferred credibility heuristic.
Pragmatic orientation as important for implementation demonstrated that usability and seamless electronic health record integration were non-negotiable prerequisites, with surgeons explicitly stating that manual data entry would preclude adoption. Medico-legal concerns were prominent, with participants emphasising that decision responsibility remains with the clinician. Surgical decision-making as a delicate dance between art and science reflected how prognostic outputs were positioned as adjunctive inputs to be reconciled with experiential judgement and patient heterogeneity. Surgeons emphasised that identical prognostic information would be interpreted differently based on practice philosophy, and most highlighted the inherent divergence between technical success and patient-perceived success, underscoring prognostic tools' value for expectation management rather than deterministic prediction. Conclusions: This first qualitative study of spine surgeons' perceptions reveals that adoption of machine-learning-based prognostic tools is contingent on data integrity, pragmatic workflow integration, and alignment with professional judgement, rather than on predictive performance alone. Surgeons expressed cautious openness, viewing these tools as valuable in complex cases for clarifying outcome expectations without displacing clinical responsibility. Meaningful implementation requires robust data governance, contextually grounded validation, seamless electronic integration, and explicit positioning of machine learning as supportive rather than substitutive of surgical judgement. These findings provide empirically grounded guidance for developing clinically acceptable and implementation-ready prognostic decision support systems.
Background: Veterans face stigma, privacy concerns, and access barriers to HIV screening. For studies that use at-home HIV self-testing (HIVST) kits distributed through vending machines (VMs), recruitment and educational materials must communicate study purpose and participation options clearly, minimize confusion and stigma, and provide actionable next steps for participants who test outside of clinical settings. Objective: To elicit Veteran Advocate feedback on recruitment flyers, education handouts, a web-based questionnaire, and a qualitative interview guide for a Veterans Health Administration study evaluating impacts of VM-dispensed HIVST kits to Veterans and to document how feedback informed revisions to these study materials. Methods: Using participatory action research, we recruited Veteran Advocates with lived/living expertise of HIV (August 2025). Veteran Advocates completed structured written reviews of study materials and returned written feedback forms; feedback was also discussed during a 1-hour virtual focus group in September 2025. We analyzed written feedback and the focus group transcript using a rapid, team-based consensus thematic approach. Two study team members independently reviewed each feedback source and documented key recommendations and candidate themes using analytic notes; the team then met to cluster feedback into themes and reach consensus on final themes and definitions. To ensure findings directly informed materials improvement, we created a revision matrix mapping each theme to the relevant study material(s), a summary of feedback, and the resulting changes made. This matrix served as an audit trail linking feedback to the “feedback and revisions” tables presented in the Results. Results: Four Veteran Advocates provided structured written feedback on study materials, and three participated in the 1-hour focus group. 
Across study materials, Veteran Advocates desired (1) clearer, plain-language descriptions of study purpose, eligibility, and participation pathways; (2) reduced potential for confusion between research recruitment, VM access, and HIVST kit promotion; (3) reduced text density and participant burden; and (4) more actionable “next steps,” including human support and linkage-to-care resources appropriate for at-home self-testing. Revisions included a streamlined recruitment flyer with simplified calls-to-action and clearer survey versus interview pathways; a more cohesive and condensed education packet oriented around self-testing steps, results interpretation, and support resources; questionnaire updates to reduce redundancy and improve usability; and an interview guide with improved flow, more participant-centered framing, and optional questions on emotional reactions and support needs. Conclusions: Veteran Advocate feedback was systematically translated into concrete revisions across multiple study materials prior to study launch. Transparently mapping stakeholder input to specific adaptations may strengthen acceptability and usability of Veteran-facing HIV screening and self-testing materials in VA and similar settings.
Background: Preschool-aged children (2-5 years) living in households experiencing food insecurity (FI) are at a higher risk of facing health and behavioral issues as well as consuming lower quality diets with fewer fruits and vegetables (FV). Repeated exposures are necessary for children to accept certain commonly rejected/disliked foods, but parents in households experiencing FI may not purchase foods their child does not readily accept. Project V.E.G.G.I.E. (Vegetable Eating Gets Going by Increasing Exposure) aims to address this issue by providing families with FVs at no cost alongside education on evidence-based parent-feeding practices tailored to the needs of preschool-aged children. Objective: This study describes the protocol for the Project V.E.G.G.I.E. pilot/feasibility study. Methods: The Project V.E.G.G.I.E. study includes 20 dyads: parents and their preschool-age children. Families received six boxes of fresh FV biweekly for 10 weeks alongside educational materials on parent feeding practices. Parent surveys and daily diaries were completed at three time points: baseline, post-test (weeks 10-11), and follow-up (4 weeks after intervention). Participants in the intervention group also completed an exit survey to assess the acceptability and utility of the FV boxes across several domains, including quantity, quality, and variety. Standard effect size estimates (Cohen’s d) will be calculated for baseline-to-post-test and baseline-to-follow-up comparisons. Lastly, both control and intervention participants were invited to complete a qualitative interview to discuss satisfaction with the program. Results: The Project V.E.G.G.I.E. program was funded internally by the Department of BLINDED FOR PEER REVIEW at BLINDED University. Recruitment began in February 2025, and data collection took place from March to July 2025. Data cleaning is underway at the time of submission (February 2026); we expect to submit the outcome/feasibility manuscript in Spring 2026.
Conclusions: The results of the Project V.E.G.G.I.E. study will be used to evaluate the feasibility and acceptability of this pilot intervention. Feedback will inform refinement of the Project V.E.G.G.I.E. intervention and study protocol prior to scaling up to a well-powered cluster randomized controlled trial. Implementing programs like Project V.E.G.G.I.E. that provide families with increased access to FV at childcare settings may overcome barriers typically associated with FV initiatives. Providing families with both increased access to FV and education on parent-feeding practices may be more effective at increasing FV consumption among preschoolers than either approach implemented independently.
Background: Frontline workers are frequently exposed to traumatic and high-stress experiences through their occupational roles. While the psychological impacts for frontline workers themselves are well documented, far less attention has been given to the indirect effects of occupational trauma on their family members. Emerging evidence suggests that trauma exposure may extend beyond the individual worker, influencing the emotional wellbeing, relationships, and functioning of partners, children, and wider family systems. However, this literature remains fragmented across disciplines, occupational groups, and sociocultural contexts. Objective: The aim of this scoping review is to identify, map, and synthesise existing research on the impact of frontline worker trauma on family members. The review will examine how reported impacts vary across frontline occupations and sociocultural contexts, and will identify key gaps to inform future research, policy, and practice. Methods: This scoping review will be conducted in accordance with the PRISMA Extension for Scoping Reviews (PRISMA-ScR) and Joanna Briggs Institute (JBI) methodological guidance. A comprehensive search will be undertaken in PubMed (MEDLINE), PsycINFO, CINAHL, Web of Science, and Scopus. Eligible studies will include qualitative, quantitative, and mixed-methods primary research published in English, with no time constraints. Title and abstract screening, followed by full-text screening, will be conducted independently by at least two reviewers, with discrepancies resolved through discussion or consultation with a third reviewer. Data will be charted using a structured extraction framework and synthesised narratively, with findings presented using tables and visual evidence mapping.
Results: This review will produce an evidence map describing the range of psychological, relational, and social impacts of frontline worker trauma on family members, the populations and occupational groups studied, and the sociocultural contexts represented. Gaps in the literature will be identified to guide future research priorities. Conclusions: By consolidating and mapping the existing evidence, this scoping review will contribute to an emerging but under-researched field and support the development of family-inclusive policies, services, and interventions for frontline worker populations.
Background: Electronic health record (EHR) data are increasingly used for retrospective observational research through large, robust databases and advanced data extraction tools. Objective: We sought to assess the accuracy of vital sign, ventilator, and continuous medication data captured in the EHR in a pediatric intensive care unit (PICU). Methods: We conducted a retrospective observational study of children receiving invasive mechanical ventilation in June 2025. Data sources included 1) a bedside clinical researcher, 2) automated EHR extraction, and 3) a continuous vital sign monitoring system. Vital sign comparisons used the continuous vital sign monitoring system as the gold standard. Ventilator and medication data comparisons used bedside observations as the gold standard. Differences were measured as means with standard deviations (SD) or median differences (MD) with interquartile ranges (IQR). Results: We obtained 110 bedside observations from 27 unique patients. All measured vital signs in the EHR were accurate relative to the continuous vital sign monitoring system, with mean differences ranging from a low of 0.1% for oxygen saturation to a high of 1.6 breaths per minute for respiratory rate. Most vital signs did have rare outliers, such as a diastolic blood pressure difference of 46 mmHg, a heart rate difference of 35 beats per minute, and a respiratory rate difference of 18 breaths per minute. Ventilator settings were highly accurate in the EHR, with MD of 0.0 and IQR of 0-0. Outliers were less common but included a PEEP difference of 10 mmHg, a respiratory rate difference of 4 breaths per minute, and an FiO2 difference of 15%. Continuous medication dosing accuracy was variable, with an overall low accuracy between 28.0-35.2%. Conclusions: EHR data capture in the PICU is accurate for vital signs and ventilator settings, but less accurate for continuous medications.
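The median-difference-with-IQR summaries used for the ventilator and medication comparisons can be sketched as follows (illustrative; the "inclusive" quantile method is an assumption, since the abstract does not state which quartile definition was used):

```python
import statistics

def median_iqr(diffs):
    """Median difference (MD) with interquartile range (Q1, Q3).

    diffs: paired differences between EHR values and bedside observations.
    The quartile method here is an assumption for illustration.
    """
    md = statistics.median(diffs)
    q1, _, q3 = statistics.quantiles(diffs, n=4, method="inclusive")
    return md, (q1, q3)
```

An MD of 0.0 with IQR 0-0, as reported for ventilator settings, means at least the middle half of paired differences were exactly zero, with only occasional outliers.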
Background: Inflammatory bowel disease (IBD) is a chronic nonspecific intestinal inflammatory condition; accurate severity assessment is critical for clinical treatment decisions and prognosis. Current IBD evaluation relies primarily on endoscopic examinations and physician expertise, which are subjective and inconsistent. Objective: This study aimed to develop an automated deep learning-based scoring algorithm for objective quantitative assessment of IBD lesion severity to address the clinical challenge of subjective evaluation. Methods: A multi-stage deep learning architecture was employed for automatic IBD scoring: (1) an improved YOLO-V11 segmentation network (enhanced by attention mechanisms and multi-scale feature fusion) precisely identified edema and ulcer regions in intestinal endoscopic images; (2) a classification module based on YOLO-V11-derived lesion features recognized stenosis; (3) an LSTM lightweight normalization network integrated spatial and temporal lesion features to generate comprehensive IBD scores. Validation used endoscopic video data from 814 patients at a large medical center: 725 cases (4,400 annotated images) for edema/ulcer/stenosis recognition, and 89 cases for comprehensive scoring. Results: The model’s automatic scoring results showed a mean squared error (MSE) of 14.693 and a coefficient of determination (R²) of 0.82 compared with expert scoring. Key innovations included: (1) the first combination of lesion recognition algorithms with image distribution frequency features in a multi-dimensional evaluation system; (2) development of a lightweight lesion recognition network suitable for clinical settings; (3) establishment of a large-scale annotated dataset encompassing various IBD subtypes. Conclusions: This automated scoring system improves the objectivity and repeatability of IBD severity assessment, providing a reliable tool for telemedicine and clinical trials. 
Future research will optimize the model’s performance in pediatric IBD and small bowel lesions.
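The reported MSE and R² against expert scoring follow standard formulas; a short sketch with hypothetical scores (not the study's data):

```python
def mse_r2(predicted, observed):
    """Mean squared error and coefficient of determination (R^2)
    between automatic and expert scores -- a minimal sketch."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((p - o) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return ss_res / n, 1 - ss_res / ss_tot

# Hypothetical expert and automatic scores (not the study's data)
expert = [10.0, 20.0, 30.0, 40.0]
auto   = [12.0, 18.0, 31.0, 39.0]
mse, r2 = mse_r2(auto, expert)
```

An R² near 1 means the automatic scores track the spread of expert scores; the study's 0.82 indicates most expert-score variance is captured.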
Background: Gender disparities in disease burden remain a critical public health concern, particularly in low- and middle-income countries like Pakistan. Existing studies that have explored such inequities in Pakistan have categorized health outcomes only at the broad Level 1 classification, including communicable diseases, NCDs, and injuries, without gender-specific data. Objective: This study aimed to compare gender-based differences in mortality and disability-adjusted life years (DALYs) for the leading causes and risk factors in Pakistan in 2023, using data from the Global Burden of Disease study. Methods: We conducted an ecological study using the Global Burden of Disease dataset for Pakistan, restricted to adults aged ≥20 years. We ranked the gender-aggregated and gender-disaggregated top causes based on mortality and disability-adjusted life years in Pakistan in the year 2023. Additionally, we calculated the absolute difference in cause-specific mortality and DALY rates between females and males. We ranked the risk factors for gender-aggregated and gender-disaggregated data in Pakistan in the year 2023. Results: In 2023, ischemic heart disease (IHD) (136; 95% UI: 105.1–170.4) and stroke (80.8; 95% UI: 57.9–113.7) were the leading causes of mortality among adults aged 20 years and above, as well as among males and females in Pakistan. The leading causes of DALYs were also IHD (3727.6; 95% UI: 2877.3–4687.7) and stroke (2175.1; 95% UI: 1598.2–3004.7), among males and females. Males experienced higher DALY losses from tuberculosis (2090.8; 95% UI: 1326–2971.5), road injuries (1706.7; 95% UI: 977.6–2388.1), and self-harm (864.1; 95% UI: 527–1273.6), while females were more affected by low back pain (1554.7; 95% UI: 1079.8–2126.1), depressive disorders (1538.5; 95% UI: 1042.4–2197.4), and dietary iron deficiency (1043.7; 95% UI: 461.9–1863.5). 
The greatest absolute difference in mortality and DALYs among males was reported for tuberculosis, while among females, rheumatic heart disease showed the greatest difference for mortality and low back pain for DALYs. The leading risk factor for both gender-aggregated and gender-disaggregated mortality was a diet low in nuts and seeds, while particulate matter pollution was the leading risk factor for DALYs. Conclusions: Our findings show that IHD and stroke were the leading causes of mortality and DALYs among adults aged 20 years and above in Pakistan in 2023, reflecting the continued dominance of non-communicable diseases. This highlights the importance of gender-disaggregated analysis in national health reporting. Tailored interventions addressing these disparities are crucial for equitable healthcare planning in Pakistan.
Background: Expenditures for physiotherapy (PT) and extended outpatient physiotherapy (EAP) are increasing within Germany’s statutory accident insurance system (Berufsgenossenschaften, BGs), placing growing pressure on rehabilitation capacity and timely access to care. Digital health applications (DiGAs) are reimbursable nationwide and represent a novel component of routine rehabilitation pathways. However, their real-world system-level and economic effects in occupational rehabilitation remain insufficiently understood. Objective: This study aimed to evaluate how integration of DiGAs into occupational rehabilitation pathways may influence costs, service capacity, and waiting times within routine care. Methods: Aggregated administrative data from five BGs covering 25.9 million insured individuals (2023–2024) were analyzed using a multi-level simulation framework. The framework combined (1) probabilistic cost–consequence modeling with Monte Carlo simulation (10,000 iterations), (2) an adherence-based adoption funnel distinguishing long-term and short-term DiGA engagement, and (3) a calibrated M/M/1 queuing model validated through discrete-event simulation to estimate effects on waiting times and system capacity. Primary outcomes included net financial impact, break-even thresholds, and changes in access-related performance metrics. Results: Combined PT/EAP expenditures reached €404 million in 2024, increasing by 10.1% year over year. Simulation results indicated mean annual net savings of €18.4 million with a 90.7% probability of cost savings. After incorporating adherence dynamics, projected mean net savings were €16.2 million (95% CI €5.0–29.8 million), corresponding to a 100% probability of positive financial impact. Cost neutrality was maintained for DiGA prices up to €617.80 per prescription. 
Queuing analyses demonstrated that modest reductions in therapeutic demand could decrease mean waiting times from 17.3 to 12.8 days (−26%), equivalent to approximately 120,000 cumulative patient waiting days saved annually. Conclusions: Under conservative assumptions, integrating digital therapeutics into occupational rehabilitation pathways is likely to generate both economic benefits and substantial system-level capacity gains. Beyond cost effects, DiGAs may function as scalable implementation tools that alleviate bottlenecks and improve timely access to rehabilitation services in capacity-constrained health systems.
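The queuing result above follows from standard M/M/1 formulas, where the mean time waiting in queue is Wq = ρ/(μ − λ) with utilization ρ = λ/μ. The sketch below uses hypothetical, uncalibrated rates purely to illustrate how a modest demand reduction shrinks waits disproportionately near saturation:

```python
def mm1_wait_in_queue(arrival_rate, service_rate):
    """Mean time waiting in queue (Wq) for an M/M/1 system:
    Wq = rho / (mu - lambda), with rho = lambda / mu.
    Requires lambda < mu for a stable queue."""
    assert arrival_rate < service_rate, "queue must be stable"
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)

# Hypothetical rates (requests per day); NOT calibrated to the study
mu = 1.0
wq_baseline = mm1_wait_in_queue(0.95, mu)   # heavily loaded system
wq_reduced  = mm1_wait_in_queue(0.90, mu)   # ~5% demand reduction
```

With these illustrative numbers, cutting arrivals by about 5% more than halves the mean wait (19.0 to 9.0 time units), mirroring the nonlinear capacity gains the abstract describes.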
Background: Diabetes mellitus affects approximately 537 million adults globally, with projections indicating an increase to 643 million by 2030. Mobile health applications (mHealth apps) offer promising support for diabetes self-management, yet adoption rates remain low. Understanding the factors influencing patients' intentions to use mHealth apps is essential for designing effective interventions. Objective: To develop and empirically validate an extended Unified Theory of Acceptance and Use of Technology (UTAUT) model incorporating personal innovativeness and attitude to explain behavioral intention to use mHealth apps for diabetes management. Methods: A cross-sectional survey was conducted with 485 Chinese adults. The measurement and structural models were assessed using partial least squares structural equation modeling (PLS-SEM). Results: Performance expectancy (β = .110, t = 3.401, P < .001), effort expectancy (β = .226, t = 5.942, P < .001), social influence (β = .112, t = 2.953, P = .002), facilitating conditions (β = .095, t = 2.476, P = .007), and personal innovativeness (β = .365, t = 9.280, P < .001) significantly influenced attitudes toward mHealth apps. Performance expectancy (β = .069, t = 2.239, P = .01), effort expectancy (β = .377, t = 8.939, P < .001), social influence (β = .123, t = 3.279, P < .001), and personal innovativeness (β = .116, t = 3.459, P < .001) significantly affected behavioral intention, while facilitating conditions did not (β = .041, t = 1.418, P = .07). Attitude significantly influenced behavioral intention (β = .337, t = 8.010, P < .001). 
Additionally, attitude significantly and positively mediated the relationships between performance expectancy (β = .037, t = 3.128, P < .001), effort expectancy (β = .076, t = 4.568, P < .001), social influence (β = .038, t = 2.775, P = .003), facilitating conditions (β = .032, t = 2.433, P = .007), and personal innovativeness (β = .123, t = 5.787, P < .001) and behavioral intention to use mHealth apps for diabetes management. The model explained 31.7% of the variance in attitude and 51.5% in behavioral intention. Conclusions: The extended UTAUT model effectively explains mHealth app adoption for diabetes management by integrating personal innovativeness and attitude. Emphasizing app utility, usability, and social influence, and fostering positive attitudes, can enhance adoption. These insights inform healthcare providers and developers aiming to increase mHealth engagement among patients with diabetes.
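The mediated effects reported above are consistent with the product-of-coefficients rule for simple mediation: multiplying the effort expectancy → attitude path (.226) by the attitude → behavioral intention path (.337) reproduces the reported indirect effect of about .076:

```python
def indirect_effect(path_a, path_b):
    """Product-of-coefficients estimate of a mediated effect:
    indirect = a * b, where a is predictor -> mediator and
    b is mediator -> outcome."""
    return path_a * path_b

# Path coefficients reported in the abstract for effort expectancy
a = 0.226  # effort expectancy -> attitude
b = 0.337  # attitude -> behavioral intention
ie = indirect_effect(a, b)  # matches the reported mediated beta of .076
```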
Background: Machine learning has been demonstrated to enhance healthcare cost prediction by handling high-dimensional data and identifying complex patterns. However, current risk adjustment models rarely incorporate structured nursing information derived from the nursing process. This information captures care needs and human responses to health problems. Objective: The current study aimed to evaluate the impact of integrating nursing process data into machine learning-based predictive models of individual healthcare costs, including cost component analyses, compared with models based solely on sociodemographic, clinical, and morbidity-related variables. Methods: A retrospective observational study was conducted using a population-based cohort of 1,691,075 individuals aged ≥15 years who were registered with the Canary Islands Health Service. Predictors were derived from data available up to 2017 and included sociodemographic and clinical variables, morbidity adjustment (AMG), healthcare utilization, and structured nursing records (functional health patterns, NANDA, NOC, and NIC). Predictive models were developed using feedforward neural networks and XGBoost; predictions were combined using an ensemble approach. An autoencoder was applied as a dimensionality-reduction technique for the nursing variables. Model performance with and without nursing variables was compared on total cost and individual cost components using the coefficient of determination (R²) on the test set. Results: Including the nursing methodology yielded modest but consistent improvements in predictive performance. For total cost, the best-performing model improved from R²=0.5023 to R²=0.5058 when the nursing variables were added. In the component-level analyses, performance gains were observed in hospital care (R²=0.3829) and pharmaceutical costs (R²=0.6631). 
Reducing 789 nursing variables to 16 latent dimensions using an autoencoder substantially simplified the feature space while retaining comparable predictive performance. Conclusions: Integrating structured information from the nursing process adds incremental value to machine learning-based predictive models and complements commonly used sociodemographic, clinical, and morbidity variables. The systematic incorporation of nursing data into predictive tools may contribute to more accurate healthcare cost prediction and support more holistic, person-centered approaches. Clinical Trial: Not Applicable
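A linear autoencoder with k latent units recovers the same subspace as the top-k principal components, so the 789 → 16 reduction can be sketched with an SVD; the matrix below is a small hypothetical stand-in, not the study's nursing data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the nursing-variable matrix:
# 200 patients x 40 binary indicators (the study used 789 variables -> 16)
X = (rng.random((200, 40)) < 0.3).astype(float)
Xc = X - X.mean(axis=0)  # center before extracting the subspace

# A linear autoencoder with k latent units recovers the top-k principal
# subspace, so this sketch takes that subspace directly via SVD.
k = 16
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T          # "encode": 40 variables -> 16 latent dimensions
X_hat = Z @ Vt[:k]         # "decode" back to the original variable space
reconstruction_error = np.linalg.norm(Xc - X_hat)
```

The latent matrix Z can then feed the downstream cost models in place of the full nursing-variable block; the study's nonlinear autoencoder plays the same role with a learned, nonlinear encoding.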
Background: Mindfulness has potential to improve lives after stroke, but survivors experience barriers (e.g. transportation) to attending face-to-face programs. Only two virtual mindfulness programs have been explored for stroke survivors, but they included the diagnosis of traumatic brain injury and only persons with high levels of chronic fatigue, not representative of the general population of persons with stroke. Objective: The aims of this study were to: 1) investigate the effect of a virtual mindfulness program for stroke survivors on the primary outcomes of acceptance, stress, and self-compassion and secondary outcomes including fatigue, depression, anxiety, and sleep; 2) explore the stroke survivor experience to better understand the effectiveness of the mindfulness program. Methods: This was a mixed methods study involving eight stroke survivors with a mean age of 55.3 years (range 41-66) and mean time post stroke of 48.6 months (range 7 to 94). Primary outcomes measured before (PRE), after the program (POST), and two months later (POST2) included the Illness Cognition Questionnaire (ICQ), the Acceptance of Illness Questionnaire (AIQ), the Perceived Stress Scale (PSS), and the Self-compassion Scale (SCS). Secondary outcomes included the Freiburg Mindfulness Inventory (FMI), Mental Fatigue Scale (MFS), and Patient-Reported Outcomes Measurement Information System (PROMIS®)-Short Form (depression, anxiety, fatigue, sleep disturbance). A paired t-test was conducted to compare PRE, POST, and POST2 outcomes. Qualitative data were collected via a semi-structured interview with each participant after the program. Results: Significant improvements were observed from PRE to POST for the PSS (P=.03) and the SCS (P=.003), with continued improvements demonstrated at POST2. Although acceptance showed an improving trend from PRE to POST to POST2, only the ICQ helplessness scale approached significance (P=.05). 
Several secondary outcomes improved significantly from PRE to POST2, including the FMI (P=.003) and the PROMIS subscales of fatigue (P=.04) and sleep (P=.03). The qualitative findings supported the quantitative results and provided a deeper understanding of the impact on participants. Conclusions: These results demonstrate how a virtual mindfulness program adapted for stroke may benefit survivors, including by decreasing stress and increasing self-compassion. Although changes in acceptance were not significant, a trend of improvement from PRE to POST to POST2 was observed and is worthy of further investigation. Significant improvements were also observed for the secondary outcomes of fatigue and sleep. Virtual mindfulness programs offer a feasible and promising approach to help survivors move forward with life after stroke. Due to the small sample size, results should be interpreted with caution, and further research is recommended. Clinical Trial: No registration
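The paired t-test used for the PRE/POST comparisons reduces to t = d̄/(s_d/√n) on the within-subject differences; a minimal sketch with hypothetical stress scores for eight participants (not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are within-subject POST - PRE differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical perceived-stress scores for 8 participants
pre  = [24, 20, 27, 22, 25, 19, 23, 26]
post = [20, 18, 22, 21, 20, 18, 19, 21]
t = paired_t(pre, post)
```

A negative t here reflects lower POST scores; the p-value then comes from the t distribution with n − 1 degrees of freedom.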
Background: The trusted paperless electronic medical record (TPEMR) management system serves as critical infrastructure for hospital governance and compliance under the National Health Commission’s Electronic Medical Record System Application Level Grading Evaluation (CN-EMR Grading) framework in China. Understanding user experience (UX) with system usability, trust, and sustained engagement may improve the accessibility, acceptability, and sustained adoption of this administrative health information system. Objective: The aim of this study was to use the Technology Acceptance Model (TAM) as a framework for qualitatively describing the UX, use behaviors, intent to use, and intent to continue using among medical record administrators. Methods: We conducted a descriptive qualitative study guided by the extended Technology Acceptance Model (TAM) in a tertiary hospital that achieved CN-EMR Grade 5. Through semi-structured interviews and focus groups with medical record administrators (n=23) and information system–related personnel (n=5), we examined perceived usefulness (PU), perceived ease of use (PEOU), intent to use, and intent to continue using; data were coded by a trained coder. To enhance analytic rigor and credibility, the coder met weekly with the principal investigator to review coding decisions, resolve ambiguities, and ensure that interpretations remained grounded in the data. Results: Participants acknowledged TPEMR's routine efficiency in retrieval, batch printing, and compliance reporting. However, exception-handling workflows triggered significant usability breakdowns. "Surface usability, deep complexity" emerged as a core pattern: standardized tasks felt intuitive, while deviations required opaque, multi-step workarounds across departments. Recurrent system instabilities and non-actionable error messages eroded psychological trust, fostering defensive behaviors like manual tracking despite mandatory use policies. 
Conclusions: TPEMR optimization must move beyond happy-path efficiency to address exception-path resilience. Embedding visible compliance mechanisms, streamlining cross-departmental coordination, and providing transparent error recovery pathways are critical to converting mandated use into genuine, sustainable engagement among medical record administrators.
Background: Traditional lecture-based education has shown limitations in engagement, knowledge retention, and skill transfer in healthcare training. Serious games and virtual simulations offer accessible and scalable solutions to enhance emergency medicine (EM) education. The GEMAS project (Gamificación en Enfermería y Medicina para el Aprendizaje por Simulación; Gamification in Nursing and Medicine for Simulation-Based Learning) was developed as a narrative-driven serious game integrating clinical reasoning, diagnostic decision-making, and evidence-based emergency management. Objective: This study aimed to describe its development and evaluate its usability, satisfaction, and educational impact. Methods: A pre–post single-center pilot study was conducted among physicians and nurses from a university hospital with no prior experience in serious games or high-fidelity simulation. Participants completed a 2–3-hour GEMAS gameplay session. Educational outcomes were assessed using Levels I and II of the Kirkpatrick model: (1) satisfaction and usability through a 10-item Likert questionnaire and the System Usability Scale (SUS); and (2) knowledge acquisition via an expert-validated pre- and post-intervention test covering key emergency scenarios. Statistical analyses included paired t-tests and Pearson correlations between knowledge improvement and age or professional experience. Statistical significance was set at 5%. Results: Twenty-two healthcare professionals participated (31.8% physicians, 68.2% nurses; mean age 31 ± 7 years; 59% female). Satisfaction was high across all items (means >9/10), with no differences between professional categories. The median SUS score was 87.25 overall (90 for physicians, 84.5 for nurses), with 77.3% giving grade A (>78.9, excellent usability). Knowledge scores improved significantly from pre- to post-intervention: physicians improved from 48.5 ± 13.3 to 80.3 ± 15.2, and nurses from 23.5 ± 8.3 to 43.5 ± 15 (p < 0.001). 
No significant correlation was found between improvement and age (r = –0.08) or years of experience (r = –0.41). Conclusions: GEMAS demonstrated excellent usability, very high user satisfaction, and significant knowledge improvement among active healthcare professionals. Its design effectively enhances clinical reasoning and evidence-based decision-making, providing a scalable, low-cost complement to traditional simulation. Future multicenter studies will explore long-term learning transfer. Clinical Trial: NCT06516250
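SUS scores like the medians reported above come from a fixed scoring rule: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the sum is multiplied by 2.5 to give a 0-100 score. A sketch with one hypothetical response pattern:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses:
    odd items contribute (x - 1), even items (5 - x); sum * 2.5."""
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = 0
    for i, x in enumerate(responses, start=1):
        total += (x - 1) if i % 2 == 1 else (5 - x)
    return total * 2.5

# Hypothetical response pattern from one participant
score = sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2])
```

Scores above the cited 78.9 cutoff fall in the grade-A ("excellent usability") band used in the abstract.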
Background: Patient medical records remain fragmented across hospitals, laboratories, and clinics, preventing clinicians from accessing complete longitudinal health information. Emergency blood and organ allocation further suffers from time delays that significantly increase mortality risk.
Objective: This study proposes and evaluates a national-scale centralized health record platform integrating standardized FHIR-based data aggregation, longitudinal artificial intelligence analytics, and emergency blood and organ donor discovery networks. Methods: All diagnostic laboratories mandatorily upload test results using HL7 FHIR standards to create unified patient records. Machine learning models including Random Forest classifiers, ARIMA time-series forecasting, geospatial matching, LSTM networks, and explainable AI techniques were applied for donor eligibility, blood shortage prediction, and longitudinal disease tracking. Large-scale synthetic datasets were generated to simulate national deployment scenarios. Results: The Random Forest model achieved 100% recall for donor eligibility detection. ARIMA forecasting predicted blood shortages with 89% accuracy, and geospatial matching identified compatible donors within a 5 km radius. Simulation of 2,000 emergency blood requests demonstrated a 76% reduction in delivery time (58 to 14 minutes) and an improvement in fulfillment from 82% to 95%. Availability of rare blood types increased by 27–33%. Conclusions: Centralized FHIR-based health data combined with longitudinal AI analytics and real-time donor discovery networks can substantially improve emergency response, disease management, and healthcare equity at national scale.
Keywords: FHIR; electronic health records; medical informatics; machine learning; longitudinal disease analysis; emergency blood donation; interoperability
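The 5 km geospatial donor matching described in the Methods can be sketched with the haversine great-circle distance; the coordinates and donor names below are hypothetical:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius, km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def donors_within(patient, donors, radius_km=5.0):
    """Return donors whose location lies within radius_km of the patient."""
    return [d for d in donors
            if haversine_km(patient[0], patient[1], d[1], d[2]) <= radius_km]

# Hypothetical coordinates (roughly central Delhi) and donor names
patient = (28.61, 77.21)
donors = [("A", 28.62, 77.22), ("B", 28.70, 77.30), ("C", 28.61, 77.25)]
nearby = donors_within(patient, donors)
```

A real deployment would combine this distance filter with blood-type compatibility and eligibility checks before ranking candidates.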
Background: Substance use disorder (SUD) is a chronic, relapsing condition characterized by compulsive substance use and dysregulation in reward and control systems. Although effective pharmacological and psychosocial treatments are available, their impact is often limited by barriers such as stigma, poor adherence, and restricted access to care. Virtual reality (VR) has emerged as a digital health intervention offering an adjunctive approach by providing immersive, interactive environments that may enhance engagement, simulate real-world triggers, and support therapeutic learning. Objective: This focus review aimed to map and synthesize the existing evidence for VR-based interventions in SUD treatment. We examine both therapeutic applications across established treatment frameworks and experimental approaches, and identify key opportunities for future research and clinical innovation. Methods: We searched electronic databases including PubMed/MEDLINE, ScienceDirect, and MDPI, covering 2004 to 2025. Two reviewers independently screened for relevant studies and extracted study characteristics. Studies addressing VR applications for substance use disorders, including peer-reviewed articles, randomized controlled trials, protocols, and pilot studies published in English, were selected. Any discrepancies were resolved through discussion. Results: A total of 26 studies or protocols were included in this review. Overall, the studies reviewed are broadly categorized into 6 subgroups based on the type of VR intervention and treatment class delivered. The reviewed literature indicates that VR-based cue exposure therapy is associated with reductions in craving and physiological reactivity for nicotine, alcohol, and cannabis use, with more limited and preliminary findings for opioid use disorder. VR relaxation and stress-management environments were linked to decreases in craving, stress, and pain among individuals with opioid and alcohol use disorders. 
VR-enhanced cognitive-behavioral interventions showed improvements in attention, cognitive flexibility, and emotion regulation. Motivational, social skills, and gamified VR interventions were associated with increased engagement, reduced stigma, enhanced self-efficacy, and improved treatment retention. Conclusions: This focus review contributes to the growing digital health literature by synthesizing current evidence on VR-based interventions for SUDs. The findings suggest that VR may serve as a flexible adjunct to existing treatments, with the potential to address persistent barriers to engagement and access. Further rigorously designed studies are needed to evaluate long-term effectiveness, optimize VR design, and support their integration into routine clinical practice.
Background: Chronic disease risk factors including smoking/vaping, poor nutrition, alcohol misuse and physical inactivity, as well as falls (SNAPF), have a significant impact on population health. Delivering preventive care using evidence-based models (eg, the Ask, Advise, Help (AAH) model) during clinical consultations is recommended and can reduce SNAPF risks. Rates of preventive care delivery within clinical consultations are variable, with barriers including limited time and competing priorities. One solution to increase preventive care delivery is using hybrid approaches that combine digital and clinician-delivered care. Objective: We aimed to test the use and acceptability of an online preventive care tool based on the AAH model and delivered through a hybrid care approach, from the perspective of Community Health clients and clinicians. Methods: A convenience sample of adult clients with an upcoming appointment at two Australian Community Health services was sent an SMS containing a link to the online tool. The tool ‘Asked’ about SNAPF risk factors, and provided ‘Advice’ and ‘Help' via a summary message and information sheets. Data on use and acceptability were collected via analytics, semi-structured telephone interviews with clients, and semi-structured online interviews and focus groups with clinicians. Data analysis was conducted using descriptive statistics for quantitative data and thematic analysis for qualitative data. Results: Forty-three participants (56% female; mean age 55.0 years) completed the tool, out of 76 who received it (57%). Fifty-two participants who received the tool completed a semi-structured telephone interview (68%). Most participants found it acceptable to receive the tool via SMS (87%) and for the tool to provide ‘Advice’ and ‘Help’ (91%), although a smaller proportion of participants who completed the tool recalled the summary message (66%) or engaged with the information sheets (20%-53%). 
The main reasons reported for not completing the tool included receiving it at an inconvenient time, not being good with online forms, and being wary of opening links. Clinician feedback (n=7) highlighted client use barriers (eg, concerns about scams) and enablers (eg, assistance from family), as well as positive feedback on the tool itself (eg, clients receiving enhanced advice). Conclusions: The online preventive care tool was used by over half of the clients to whom it was sent, and was acceptable to Community Health clients and clinicians. There is an opportunity to use digital tools to help enhance preventive care within clinical care.
Background: Young adults experience declining physical activity during the transition to adulthood, highlighting the need for engaging and tolerable exercise modalities. Immersive virtual reality (VR) exergaming has emerged as a promising strategy, yet comparative evidence regarding how different VR boxing platforms perform relative to conventional video-based exercise remains limited in terms of physical activity intensity, enjoyment, and cybersickness. Objective: The objective of this study was to compare physical activity outcomes, enjoyment, and cybersickness across two immersive VR boxing exergames (Supernatural, FitXR) and conventional video shadowboxing in young adults. Methods: Thirty-one undergraduate students (mean age = 19.97 ± 1.02 years; 19 females) completed three 20-minute exercise conditions in a randomized, counterbalanced, within-subject design. Physical activity was measured using wrist-worn accelerometry, with primary outcomes including metabolic equivalents (METs) representing relative energy cost of activity, moderate-to-vigorous physical activity (MVPA), light physical activity, and step count. Enjoyment was assessed using the Physical Activity Enjoyment Scale, and cybersickness symptoms were measured using the Simulator Sickness Questionnaire. Repeated-measures ANOVAs were conducted to examine differences across conditions, with Bonferroni-adjusted pairwise comparisons for significant main effects. Results: No significant differences were observed across exercise conditions for METs, step count, or light physical activity (P > .05). MVPA differed significantly by condition (P = .015), with FitXR eliciting greater MVPA than both Supernatural and conventional video shadowboxing. Despite lower MVPA, Supernatural was rated as significantly more enjoyable (P = .002). Cybersickness symptoms did not differ significantly across conditions (P = .261). 
Conclusions: Acute exercise intensity and enjoyment differed across boxing-based exercise platforms in young adults. FitXR elicited greater time spent in MVPA, whereas Supernatural elicited greater enjoyment despite lower MVPA. Cybersickness symptoms did not differ across conditions, indicating that immersive VR boxing was well tolerated during short-duration exercise sessions. These findings suggest that platform-specific game design features influence whether VR boxing interventions preferentially support higher exercise intensity or greater enjoyment, and that platform selection should align with intervention goals.
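The Bonferroni adjustment applied to the pairwise comparisons above multiplies each p-value by the number of comparisons (three, for three conditions) and caps it at 1; a minimal sketch with hypothetical p-values:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni-adjusted p-values and significance flags for
    m comparisons: p_adj = min(1, p * m), significant if p_adj < alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, [p_adj < alpha for p_adj in adjusted]

# Three hypothetical pairwise MVPA comparisons (not the study's p-values)
p_adj, sig = bonferroni([0.004, 0.020, 0.400])
```

The correction guards the family-wise error rate: a raw p of .020 survives an unadjusted .05 threshold but not the Bonferroni-adjusted one here.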
Background: Mobile health (mHealth) app effectiveness may be limited by low engagement. A better understanding of the factors influencing engagement may help. Paid mHealth app subscription and renewal are two metrics of particular interest to commercial app developers. Objective: To identify homogeneous user subgroups (ie, behavioral phenotypes) within a paid mHealth app context and examine associations with app subscription and renewal. Methods: In this 6-month prospective cohort study, latent class analysis (LCA) was conducted with users of a paid mHealth app. Users completing a 7-day free trial between November 2023 and January 2024 were included. LCA produced phenotypes using survey responses (eg, chronic disease status), device-assessed health data (eg, daily step count), and 7-day free trial period engagement data (eg, number of app opens). Odds ratios (ORs; P < .05) assessed associations between phenotypes and subscription/renewal. Results: The sample included 934 users (mean age, 41.53 [SD, 9.65] years). Based on LCA fit indices, five distinct phenotypes were formed: (1) highly engaged subscribers, (2) subscribers with multimorbidity, (3) healthy subscribers, (4) non-subscribers with multimorbidity, and (5) healthy non-subscribers. Compared with phenotype 5, the reference, phenotypes 1–3 had greater odds of subscribing (OR = 21.31 [8.56, 53.06]; OR = 7.11 [4.04, 12.50]; OR = 8.28 [4.26, 16.08], respectively), whereas phenotype 4 did not (OR = 0.82 [0.48, 1.41]). Additionally, renewal odds for phenotypes 1–4 were 1.06 [0.62, 1.81], 0.90 [0.54, 1.49], 0.99 [0.58, 1.69], and 0.93 [0.48, 1.80], respectively (vs the reference). Conclusions: Behavioral phenotypes associated with subscription likelihood were identified using data collected during the 7-day trial period. These phenotypes may be strategically targeted with future interventions to boost early engagement and long-term behavior change potential.
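Odds ratios with 95% CIs of the kind reported above are typically computed from a 2x2 table with the standard Wald method. The counts below are hypothetical, purely to illustrate the calculation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
                 outcome+   outcome-
    exposed         a          b
    unexposed       c          d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: phenotype members vs reference group, subscribed vs not
or_, lo, hi = odds_ratio_ci(40, 60, 10, 90)
```

A CI whose lower bound exceeds 1 corresponds to a statistically significant positive association at the 5% level, as with phenotypes 1–3 above.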
Background: Men who have sex with men (MSM) who also inject drugs face compounded risks through both sexual transmission networks and parenteral exposure via contaminated injection equipment, increasing their vulnerability to sexually transmitted infections, including human immunodeficiency virus (HIV). Objective: This study aims to estimate HIV prevalence and associated factors among injecting drug MSM (ID-MSM) and non-injecting drug MSM (NID-MSM) in India. Methods: This is a secondary analysis of MSM data from the National Integrated Biological and Behavioral Surveillance (IBBS) survey. MSM-specific data were collected in 2014-15 from 24 of the 36 States and Union Territories (UTs). Respondents who reported injecting drugs for non-medical reasons in the last 12 months were classified as ID-MSM; the others were classified as NID-MSM. Results: A total of 23,081 MSM were included in the analysis, of whom 3.9% reported injecting drug use. Increasing age (aOR = 1.70, 95% CI: 1.26–2.29 for 25–34 years; aOR = 2.75, 95% CI: 2.00–3.78 for ≥35 years), being widowed/divorced/separated (aOR = 0.52, 95% CI: 0.29–0.93), involvement in sex work (aOR = 2.73, 95% CI: 1.68–4.42), first sex before the age of 18 years (OR = 1.42, 95% CI: 1.11–1.82), and selling sex to men (aOR = 1.38, 95% CI: 1.10–1.72) were associated with ID-MSM.
Among NID-MSM, being currently married (aOR = 2.16, 95% CI: 1.10–4.27), involvement in sex work (aOR = 3.41, 95% CI: 1.33–8.78), and experiencing physical violence (aOR = 1.57, 95% CI: 0.82–3.00) were the associated factors. Conclusions: The findings of this study show that ID-MSM experienced a modest but non-significant elevation in HIV prevalence compared with NID-MSM. More importantly, the determinants of HIV differed between these groups: sex work and marital status were key predictors among ID-MSM, whereas increasing age, early sexual debut, transactional sex, and inconsistent condom use were major drivers among NID-MSM. These findings highlight that targeted harm-reduction services for ID-MSM and strengthened behavior-focused interventions for NID-MSM, including condom promotion, PrEP access, and early sexual health education, are essential. Addressing structural barriers such as stigma and economic vulnerability remains critical for reducing HIV transmission within these diverse MSM populations.
Background: Despite Iran’s competitive advantages in medical costs and surgical expertise, its medical tourism industry suffers from fragmented service delivery and a lack of standardized competencies among stakeholders. Objective: This study aims to develop and validate a localized “Skill Enhancement Framework” for Iran’s medical tourism workforce. Methods: A multiphase mixed-methods design is employed. Phase I (scoping review) has mapped global competencies. Phase II involves qualitative semi-structured interviews to identify localized needs and socio-economic barriers. Phase III utilizes the Delphi technique to reach expert consensus and validate the final framework. Results: Not applicable (study protocol). Conclusions: By integrating evidence-based findings with expert insights, this protocol provides a methodological roadmap to professionalize the value chain, ensuring the sustainability and global competitiveness of Iran’s medical tourism brand.
Background: Mental health problems are a significant global health challenge, with the majority manifesting during the crucial developmental phase of adolescence. Factors like childhood abuse, socioeconomic conditions, and hostile school environments worsen mental health problems among adolescents, resulting in severe consequences, including violence, substance abuse, and reduced academic performance. Schools play a crucial role in implementing mental health interventions, offering unique access to a diverse group of adolescents within their familiar learning environment. Objective: This review aims to synthesize the existing literature on interventions designed to support adolescents facing mental health challenges in secondary schools, including the role of school-based support teams (SBST). Methods: This scoping review will follow the framework established by Arksey and O'Malley (2005) and will adhere to a five-step process: (1) identifying the research topic; (2) locating relevant studies; (3) selecting studies; (4) charting the data; (5) compiling, summarizing, and reporting the findings. Selection of articles will be reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines. Studies published in English from 2020 to 2025 will be included. A comprehensive search will be conducted across several databases, including PubMed, Scopus, MEDLINE ULTIMATE, ScienceDirect, Google Scholar, and OpenGrey. Using a standardized tool, two reviewers will independently screen the titles, abstracts, and full texts, and extract data to enhance reliability. Disagreements will be resolved by a third reviewer. The selection process for the studies included in the review is scheduled to be completed within a 10-week timeframe and will strictly follow the PRISMA-ScR checklist.
Results: A preliminary database search has been conducted across four databases (PubMed, Scopus, ScienceDirect, and MEDLINE Ultimate), and a total of 2160 records were identified. Duplicate removal and abstract screening are currently in progress. The study is anticipated to be published in June 2025. Conclusions: This scoping review aims to identify interventions that support adolescents with mental health issues, specifically focusing on Africa, and more particularly on the South African context. This focus will help uncover the cultural and contextual factors that influence mental health interventions in this region. In South Africa, accessing mental health services can be difficult, especially in under-resourced communities. School-based mental health interventions have been recognized as an effective solution, as they can reach many adolescents in a cost-efficient manner. These interventions take place in a familiar environment for adolescents and provide a setting that reduces the stigma associated with seeking mental health services.
The findings will provide insights into the range of interventions aimed at supporting adolescent learners in secondary schools. By analyzing current literature, the researchers aim to highlight the different types of school-based interventions available, evaluate their effectiveness, and identify any barriers that prevent adolescents from accessing mental health support. Additionally, the review will explore closely the role of school-based support teams and assess their effectiveness in assisting adolescent learners with mental health problems within the school context. The expected outcome of this scoping review is to deliver a comprehensive overview of mental health interventions intended to support adolescents in secondary schools. Clinical Trial: The complete protocol and supplementary details of the review are publicly accessible via the following URL: https://osf.io/eb3ua
Background: The cancer care pathway can cause or accentuate inequalities. It is therefore necessary to identify patients with social vulnerabilities as early as possible and take them into account throughout the treatment process. The DEFCO (Detection of Social Frailty and Cancer Patient Care Pathway Coordination) tool was created by a public health research team and an industrial engineering research team and has previously shown its validity. The transferability of this tool, developed in a specialized institution, and the possibilities for implementing it in other structures must now be demonstrated. It is also necessary to evaluate its impact on the fluidity of care pathways and on the social impacts of the disease. Objective: The objective is to assess the implementation of the DEFCO tool for identifying social vulnerability in new centers. Methods: This is a multicenter prospective cohort implementation study using a mixed-methods effectiveness-implementation design to evaluate a complex intervention. The study was conducted in five sequential stages. First, the organizational contexts of each participating center and their capacity to implement the DEFCO tool were assessed. Second, key stakeholders were trained to integrate the DEFCO tool into routine clinical practice. Third, a pre-implementation analysis was conducted using the Consolidated Framework for Implementation Research (CFIR) to identify contextual determinants influencing implementation. Fourth, the DEFCO tool was deployed in each center. Finally, the implementation process and outcomes were evaluated using the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework for quantitative measures, complemented by qualitative assessments guided by the CFIR. Results: This study enrolled 437 patients. In addition, 41 professionals were interviewed prior to the study and 21 following its completion.
The results are currently under analysis and are expected to be available in the second half of 2026.
The results of this implementation study will provide information on: 1) the real-life effectiveness of the DEFCO tool, the objective of which is to identify social vulnerability in new cancer patients; 2) the impact of the modifications made to the initial tool to adapt it to local contexts and to differences in practice according to the populations being cared for and therapeutic practice; 3) the key success factors and the pitfalls to be avoided, which interact with the effectiveness of the tool; and 4) the changes in representations of social vulnerability and its consequences brought about by the implementation of the tool. Conclusions: Through this implementation study, the generalization of the tool will be accompanied by instructions for use and the contextual elements necessary for optimal operation of systematic detection of social vulnerability in the care pathway of patients treated for cancer in health care institutions of varying status and activity. Clinical Trial: NCT04015895
Background: Digital health interventions, including mobile applications and wearable devices, have emerged as promising tools to promote physical activity (PA) and support chronic disease management in primary care. However, evidence remains limited regarding the real-world feasibility, patient engagement, and integration of these tools into routine family medicine practice, particularly in individuals with metabolic syndrome. In addition, physicians’ attitudes and intentions toward telemedicine may influence the successful adoption of such interventions. Objective: This study aims to evaluate the preliminary effectiveness of a mobile health intervention for PA monitoring in individuals with metabolic syndrome managed in primary care. Secondary objectives include assessing user engagement and adherence to the intervention, changes in PA-related and clinical outcomes, and exploring family physicians’ attitudes and behavioral intentions toward telemedicine. Methods: This is a two-arm, parallel, pilot randomized controlled trial conducted in primary care units in the Coimbra region, Portugal. Eligible adults with metabolic syndrome will be randomized (1:1) to an intervention group receiving access to a mobile application integrated with an activity-tracking wristband or to a control group receiving usual care. Data will be collected at baseline and 6 months. Primary outcomes include changes in PA levels and physical literacy, assessed through validated questionnaires and objective analytical results. Secondary outcomes comprise health-related quality of life, cardiometabolic parameters, adherence and engagement metrics derived from app usage, and retention and dropout rates. Physicians’ attitudes and intentions regarding telemedicine will be assessed using the Physician Attitudes and Intention to use Telemedicine (PAIT) questionnaire. Analyses will primarily be descriptive and exploratory, aiming to inform the design of a future full-scale trial.
Results: Participant recruitment is planned to begin in July 2025. This pilot trial will generate data on feasibility, adherence, engagement, and preliminary behavioral and clinical outcomes associated with the use of a mobile PA monitoring intervention in primary care. Conclusions: This study will provide important insights into integrating mobile health applications for PA promotion among individuals with metabolic syndrome in routine primary care. The findings will inform future larger randomized controlled trials and contribute to implementation strategies for digital health interventions in family medicine. Clinical Trial: EECC-2024_4-220 e 335/25 CE
During the COVID-19 pandemic, large-scale pathogen sequencing generated millions of SARS-CoV-2 genomes deposited in repositories like GenBank and GISAID. However, most of these records lack detailed patient metadata, such as demographics and clinical outcomes, which limits their utility for large-scale pathogen genomics analyses. While records that are linked to a journal publication might contain such metadata, systematic extraction and linkage to sequence records requires substantial manual effort. In this work, we assess the completeness of metadata in GenBank and demonstrate the value of enriched clinical and demographic annotations for genomic epidemiology. We found that on average GenBank records contained only 21.6% of host metadata, and during our study period ~0.02% of published articles provided accessible sequence-specific patient metadata. Additionally, using published SARS-CoV-2 genomes and their corresponding journal articles, we constructed an analytical use case in pathogen genomics in which host stratification by clinical and demographic factors enables examination of evolutionary dynamics and clinical outcomes. Our results demonstrate how metadata-enrichment enhances pathogen genomic studies and provide a framework applicable to other pathogens.
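The host-metadata completeness figure reported above is, in essence, an average over records of the fraction of expected fields that are populated. A minimal sketch of that calculation, using made-up field names and toy records rather than real GenBank data:

```python
# Hypothetical host-metadata fields a genomic-epidemiology study might expect
FIELDS = ["host_age", "host_sex", "collection_date", "geo_location", "outcome"]

def completeness(records, fields):
    """Mean fraction of expected host-metadata fields populated per record.
    Empty strings and None count as missing."""
    def filled_fraction(rec):
        return sum(1 for f in fields if rec.get(f) not in (None, "")) / len(fields)
    return sum(filled_fraction(r) for r in records) / len(records)

# Toy GenBank-style records (2/5 and 1/5 fields populated)
records = [
    {"host_age": "34", "host_sex": "F", "collection_date": "", "geo_location": None},
    {"collection_date": "2021-03-01"},
]
pct = completeness(records, FIELDS) * 100  # ≈30% for this toy sample
```

Averaging per-record fractions (rather than pooling all cells) weights each sequence record equally, which matters when some submitters batch-deposit thousands of sparsely annotated genomes.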
Background: Community health workers (CHWs) play a vital role in delivering pediatric care in resource-limited settings, yet evidence on acceptable approaches for recurrent training remains limited. Mobile health (mHealth) training tools have demonstrated promise in enhancing skill acquisition and retention among CHWs; however, little is known about which specific design features optimize learning and sustained use over time. Objective: This study evaluates learning outcomes, engagement patterns, and user experiences associated with three mHealth training modalities for CHWs in Northern Uganda. Methods: We conducted a convergent mixed methods study within an established community-led CHW training program. Over eight months, CHWs in Northern Uganda were assigned to one of three mHealth training approaches: 1) a standard self-guided tablet application (‘standard’ group), 2) a gamified application with assessment-gated progression (‘gamified’ group), and 3) the standard application supplemented with simulation-based training (‘standard + simulation’ group). Quantitative outcomes included 1) written multiple-choice exams at baseline (T1), two months (T2), and eight months (T3), with competency defined as scores >80%, 2) a clinical skills assessment at eight months, and 3) tablet engagement analytics, including video views, in-quiz attempts, and quiz scores. Qualitative data were collected through semi-structured interviews and analyzed thematically. Quantitative and qualitative findings were integrated using joint displays. Results: Out of 30 eligible CHWs approached, all agreed to participate. Over the study period, six CHWs left the training program and were excluded from all analyses; the remaining 24 CHWs completed qualitative interviews and were included in tablet engagement analyses (standard: N=8; gamified: N=10; standard + simulation: N=6). Twenty-one CHWs completed written exams at all three timepoints and were included in exam score analyses.
Median written exam scores improved in the overall sample, increasing from 73% (IQR 26.67) at baseline (T1) to 100% (IQR 6.67) at eight months (T3) (p < 0.001), with no differences in the median magnitude of score improvement observed across training modalities (16.67 vs. 26.67 vs 26.67, p=0.64). All CHWs demonstrated competency in advanced pediatric clinical skills at study completion. The gamified application was associated with higher rates of video viewing and in-app quiz attempts per active day but did not result in higher in-app quiz pass rates or final exam scores compared with the standard application. Those who received the simulation reported greater confidence and perceived preparedness despite similar quantitative performance. Engagement declined modestly over time (from 77% to 58% of CHWs engaged weekly), consistent with qualitative reports of time constraints and technical barriers, including limited access to electricity for tablet charging. Conclusions: Findings suggest that mHealth-supported training can facilitate sustained acquisition of advanced pediatric clinical skills among experienced CHWs in a rural, resource-limited setting. These findings can inform the user-centered design of future training interventions.
Background: The growing burden of HIV/AIDS, particularly in sub-Saharan Africa, presents a significant public health challenge, characterized by increasing morbidity and mortality rates. This region is disproportionately affected, bearing two-thirds of the global HIV/AIDS burden, which highlights an urgent need for effective solutions. Accurate forecasting of new HIV infections is crucial for developing targeted interventions to combat the HIV/AIDS pandemic. Objective: This study aims to forecast trends in new HIV infections for the next five years and identify the contributing factors in the East Gojjam Zone. Methods: A DHIS2 data set (2018-2025) from the East Gojjam Zone was analyzed using a hybrid machine learning and deep learning framework. Machine learning models (Decision Tree, Random Forest, XGBoost, LightGBM, CatBoost, AdaBoost, and Gradient Boosting) were used for feature selection, and deep learning architectures (RNN, LSTM, GRU, and their bidirectional variants) were used for time-series forecasting. Model performance was assessed using MAE, MSE, RMSE, and MAPE. Results: Of the seven machine learning algorithms used for feature selection, Random Forest performed best, and the selected features were carried forward for forecasting with the deep learning algorithms. The bidirectional LSTM performed best among the six sequential deep learning algorithms used for forecasting HIV infection in the East Gojjam Zone. Forecasts reveal an upward trend of HIV infection in the study area. Conclusions: The combined machine learning and deep learning approach showed high predictive accuracy in forecasting HIV infection. The forecasted upward trend calls for urgent intervention and attention to combat the problem.
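Time-series forecasting of the kind described above trains sequence models on sliding windows of past counts. The windowing itself — the supervised framing that RNN/LSTM/GRU forecasters consume — can be sketched as follows; the monthly counts are invented for illustration:

```python
def make_windows(series, lookback):
    """Frame a univariate series as (input window, next value) pairs,
    the supervised form that sequence models are trained on."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])  # last `lookback` observations
        y.append(series[i + lookback])    # value to predict
    return X, y

# Hypothetical monthly new-infection counts
counts = [12, 15, 14, 18, 21, 19, 24, 26]
X, y = make_windows(counts, lookback=3)
# X[0] == [12, 15, 14] is used to predict y[0] == 18, and so on
```

A bidirectional LSTM would consume each window in both temporal directions; multi-year forecasts are then produced by feeding each prediction back in as the newest observation.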
Background: It is increasingly common for patients to be given the option to receive health information using digital technology. Augmented reality (AR) is an emerging technology that may enable patients to better appreciate the anatomy pertinent to their disease process. Objective: Our objective was to understand patients’ perspectives on augmented reality in the context of shared decision making (SDM) for oncoplastic breast surgery. Methods: Seventeen participants without breast cancer were recruited from general surgery outpatient clinics in a university teaching hospital and took part in three focus groups. Participants interacted with a Microsoft HoloLens 2™ head-mounted display presenting an anonymised three-dimensional holographic model of a breast cancer, which was used as a stimulus to prompt discussion. Focus group audio recordings were transcribed verbatim and analysed using thematic analysis. Results: Analysis revealed four themes: 1) seeing as believing – AR enhanced participants’ ability to visualise abstract anatomy and aided understanding; 2) being in the surgeon’s shoes – the technology offered insight into the clinician’s perspective, with concerns raised about the emotional impact; 3) how technology influences trust – AR reinforced confidence in the shared decision-making process when introduced by trusted clinicians; 4) involving people in my life – shared viewing with family or friends offered support in the decision-making process. Conclusions: Participants viewed AR as a promising tool to enhance knowledge and add value to the process of SDM beyond the reach of current information giving. They also expressed caution, emphasising the need for careful introduction and ongoing clinician support to ensure meaningful use. Their varied responses highlighted a challenge inherent to the introduction of digital technologies such as AR: the question of how much, and what type of, technology best supports patients without causing information overload.
Trust in clinicians remained central to the perceived value of the technology. These findings highlight AR’s potential to enhance SDM when thoughtfully integrated into clinical practice. Clinical Trial: N/a
Background: NYU Langone Health (NYULH) operates one of the largest remote patient monitoring (RPM) programs in the United States. Its hypertension management initiative (NYULH RPM HTN) supports approximately 4,500 patients monthly and captured over 100,000 remote blood pressure (BP) readings in 2024. Despite its benefits, the program faces real-world challenges, including patient disengagement, device usability issues, and clinician burden from high data volume. Generative AI (GenAI), particularly large language models (LLMs), offers opportunities to enhance patient engagement and streamline clinical workflows through personalized conversational interfaces such as chatbots and through its data summarization capabilities. Objective: To explore the feasibility of using GenAI to support RPM, we developed the AI Brain, an electronic health record (EHR)–integrated GenAI layer for hypertension management. The AI Brain includes a patient-facing chatbot agent designed to support engagement and blood pressure (BP) adherence, as well as a clinician-facing agent that generates smart content for EHR documentation and drafts patient messages. This study was conducted at NYULH, an academic medical center, providing a unique setting to evaluate the tool within a large-scale hypertension RPM program and to assess its impact on patient engagement, data interpretation, and clinical workflow efficiency. Methods: Our multidisciplinary team—comprising researchers, software engineers/architects, UX designers, and physicians—developed the AI Brain using a user-centered design approach and agile software development methods. We established patient and clinician advisory committees and conducted workshops during the formative phase to understand workflows and co-design solutions in collaboration with stakeholders. This was followed by a software development cycle that engaged advisory committee members at each stage to ensure the tool met user needs.
Implementation considerations included usability, data privacy, clinical integration, and alignment with existing RPM processes. Results: The evaluation of the AI Brain demonstrated feasibility for integration into an established RPM infrastructure. Early observations suggest that the patient-facing agent showed potential to address common engagement barriers, including missed blood pressure submissions and device-related challenges. The clinician-facing agent supported care teams by summarizing key patient trends and reducing manual data review burden. Moreover, structured survey results indicated positive acceptability and perceived usefulness of GenAI-generated content. Security evaluations further demonstrated robust safeguards and reliable system performance. Conclusions: GenAI represents a promising approach to enhancing RPM, as demonstrated by its evaluation and adaptation within the NYULH hypertension management program. We described our development process and showed that, based on our evaluation, thoughtfully designed and integrated GenAI tools may help bridge gaps in patient engagement and adherence, as well as support clinical workflows by reducing the burden of data analysis and summarization. Further evaluation is needed to assess long-term clinical outcomes, patient trust, and scalability in real-world healthcare settings.
Background: As oncology workflows integrate increasingly autonomous artificial intelligence (AI) agents, health systems face uncertainty regarding operational impacts. Traditional linear forecasting methods fail to capture second-order effects such as governance saturation, induced demand, and bottleneck migration. To navigate this complexity, the emerging field of Medical Futures Studies requires methodologies that bridge qualitative strategic foresight with quantitative operational modeling. These system-level dynamics have direct consequences for patient access, treatment delays, and health system resilience. Objective: To develop a proof-of-concept framework for stress-testing AI adoption strategies in oncology by coupling qualitative scenario planning with computational discrete-event simulation (DES). Methods: We defined a strategic state space using two orthogonal axes, AI automation intensity and data interoperability, resulting in four distinct futures scenarios. We translated these qualitative narratives into a quantitative DES model to simulate a 3-year operational horizon. The model quantified system performance (Referral-to-Treatment Interval [RTTI], throughput), volatility, and resource constraints across different adoption trajectories. Results: The scenario planning phase yielded four operational archetypes (analog oncology, automation islands, interconnected clinicians, and AI-orchestrated care) with distinct constraints, risks, and failure modes. In the simulation, the fully integrated scenario maximized capacity (1,244 patients/year) and halved the mean RTTI to 14.9 days, a magnitude comparable to major pathway redesign interventions. Isolated automation without data infrastructure led to reduced system performance, increasing RTTI by 26% (37.1 days) and reducing throughput to 647 patients/year due to administrative governance saturation.
The model demonstrated a structural bottleneck migration: successful upstream AI adoption shifted binding constraints from diagnostic scanners to downstream chemotherapy infusion units, while missing data interoperability resulted in governance constraints. Pathway optimization analysis indicated that a coordinated strategy prioritizing early improvements in data interoperability reduced transition volatility compared to an automation-first approach. Conclusions: Integrating qualitative scenario planning with quantitative simulations enabled a systematic evaluation of oncology AI adoption strategies. As a proof of concept, it offers a replicable framework for health leaders to model future scenarios of digital transformation in times of high uncertainty. Subsequent work should expand this methodology to incorporate financial and health equity dimensions, establishing simulation-based scenario planning as an important tool in Medical Futures Studies.
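A discrete-event simulation of this kind can be illustrated with a minimal queueing sketch: patients arrive at random, wait for a fixed-capacity resource (e.g. infusion chairs), and the referral-to-treatment interval is recorded. All parameters below are invented, and this single-stage model is far simpler than the paper's multi-stage pathway:

```python
import heapq
import random

def simulate(n_patients, mean_interarrival, service_days, n_servers, seed=0):
    """Toy single-stage DES: exponential arrivals, deterministic service,
    n_servers parallel resources. Returns the mean referral-to-treatment
    interval (waiting time + service time) in days."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers        # time each resource next becomes free
    heapq.heapify(free_at)
    t, rttis = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival)  # next referral arrives
        start = max(t, heapq.heappop(free_at))         # wait if all resources busy
        heapq.heappush(free_at, start + service_days)
        rttis.append(start + service_days - t)         # RTTI for this patient
    return sum(rttis) / len(rttis)

# With the same arrival stream, expanding capacity can only shorten the mean RTTI
mean_rtti_3 = simulate(200, mean_interarrival=1.0, service_days=2.0, n_servers=3)
mean_rtti_6 = simulate(200, mean_interarrival=1.0, service_days=2.0, n_servers=6)
```

Bottleneck migration emerges naturally from chaining several such stages: relieving an upstream stage pushes the binding constraint to whichever downstream stage now has the highest utilization.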
Background: Plasmodium vivax (P. vivax) has emerged as the primary cause of malaria in Cambodia. Achieving malaria elimination and securing malaria-free certification requires a focused effort on addressing P. vivax malaria. This is essential because the elimination of P. vivax often lags behind that of Plasmodium falciparum, making it a critical component in the overall strategy. Objective: This study will assess the feasibility of Mass Drug Administration (MDA) and P. vivax Serological Testing and Treatment (PvSeroTAT) integrated with Reactive Case Detection (RACD) in two of the highest malaria burden operational districts of Cambodia and examine the potential for integrating these two approaches with existing malaria elimination efforts. Methods: This study employs an observational, prospective cohort design. MDA with chloroquine (CQ) will be conducted in Stung Treng through four monthly rounds, while RACD with PvSeroTAT will be implemented in Sen Monorom, targeting households near confirmed P. vivax cases. Data on coverage, compliance, cost, and stakeholder perceptions will be collected through surveys, interviews, and malaria case monitoring. A Composite Feasibility Index will integrate quantitative and qualitative indicators. Cost and budget impact analyses will assess scalability for malaria-endemic districts. Results: This study was funded by Medicines for Malaria Venture and approved by the National Ethics Committee for Health Research (NECHR) in Cambodia on 26 February 2025 (No. 085 NECHR). The study implementation began in March 2025. Training of study staff and healthcare workers was conducted between March and May 2025. Participant enrolment for MDA and RACD began in April and ended in October 2025. Altogether, 4443 and 3371 participants were recruited in MDA and RACD, respectively. Data analysis will be completed after the end of regular follow-ups by April 2026.
Conclusions: Innovative and targeted public health approaches and tools are necessary to ensure the elimination of the malaria parasite reservoir, including the hidden hypnozoites. While MDA with CQ clears active blood-stage infections leading to immediate reductions in malaria prevalence, PvSeroTAT can detect past exposure to P. vivax by using serological markers allowing for targeted treatment of individuals at risk of developing relapsing infections with an 8-aminoquinoline. This helps reduce the parasite reservoir more efficiently. This study will provide insight into operational feasibility, implementation costs, community acceptance, and long-term sustainability. The findings will guide Cambodia’s malaria elimination efforts through improved surveillance and targeted interventions. Clinical Trial: OSF Preregistration: https://doi.org/10.17605/OSF.IO/5KZH7, retrospectively registered 15 October 2025.
Background: Spain introduced annual regional tuberculosis (TB) surveillance quality feedback reports in 2021; their effect on the completeness of reported data across the autonomous communities (CCAA) had not been evaluated. Objective: We aimed to quantify changes in internal data completeness following introduction of annual regional surveillance quality reports, and to assess acceptability and perceived usefulness among the CCAA. Methods: This mixed-methods evaluation consisted of two stages. First, we quantitatively assessed the internal completeness of 40 key TB variables for all reported cases from 2018 to 2023, comparing two periods: before implementation of the feedback reports (2018-2020) and after (2021-2023). Mean completeness for each variable was calculated for each period, and the differences were compared using either the paired t-test or the Wilcoxon signed rank test, depending on whether the differences were normally distributed. Analyses were conducted nationally, by CCAA, and for different variable groups (patient-, illness-, and laboratory-related). For the second stage, we circulated the results to regional TB surveillance focal points, along with a survey to assess acceptability, perceived usefulness of the reports, and barriers to data completeness. Results: There were 25,299 reported cases across the study period, 13,328 in period 1 (2018-2020) and 11,901 in period 2 (2021-2023). Nationally, mean completeness increased from 66.0% in period 1 to 76.2% in period 2 (+10.2%, p<.001). Improvements were greatest for laboratory-related (+13.9%, p=.001) and illness-related variables (+12.8%, p<.001), while patient-related variables showed a minimal change that was not statistically significant (+2.4%, p=.48). Most CCAA (84.2%, 16/19) demonstrated improved completeness in period 2, though substantial regional variation persisted. Of the 19 CCAA, 18 responded to the survey. Respondents reported high acceptability of the surveillance system and considered the feedback reports useful.
Challenges highlighted included resource constraints, interoperability between laboratory and surveillance systems, and patient follow-up. Conclusions: Following the introduction of annual feedback reports, which were well accepted by regional stakeholders, Spain saw significant improvements in TB data completeness. Sustained feedback mechanisms, streamlined reporting requirements, and continued system integration are key to strengthening TB surveillance.
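The first-stage comparison can be sketched with a paired t statistic computed over per-variable completeness in the two periods. The values below are invented for illustration; a real analysis would use scipy.stats.ttest_rel or scipy.stats.wilcoxon after the normality check the authors describe.

```python
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic on per-variable completeness (%);
    positive values indicate improvement in the later period."""
    diffs = [a - b for b, a in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

# Hypothetical completeness (%) of three variables in each period.
period1 = [60.0, 70.0, 80.0]
period2 = [70.0, 82.0, 90.0]
t = paired_t(period1, period2)  # compare against t distribution with n-1 df
```

The statistic alone does not give a p-value; in practice it is referred to a t distribution with n-1 degrees of freedom.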
A multi‑layered fraud‑mitigation approach is essential to ensure data integrity in medical survey research; basic measures alone (e.g. captcha) would permit widespread fraud.
Background: Health profession education students exhibit a higher rate of excessive digital technology use compared to their peers. Although the interaction of technology with student well-being has become more pronounced, the lack of awareness about digital detox among students in technology-intensive healthcare disciplines, along with the scarcity of studies exploring their practices, is concerning. Objective: This study aimed to investigate the patterns of social media usage and potential relationships between digital detox practices, mental well-being, physical health, and academic performance. Methods: A cross-sectional survey design was employed at King Saud bin Abdulaziz University for Health Sciences (KSAU-HS) in Riyadh. The sample consisted of 471 students from the health professions. Validated surveys were used, including the Social Media Disorder Scale, Digital Detoxification Awareness Questions, Kessler Psychological Distress Scale (K-6), and physical health assessments. The relationships between the study variables were analyzed using the chi-square test and ANOVA, with a significance level of 0.05. Results: A total of 471 students were included, with the majority being female (n = 291, 61.8%), single (n = 440, 93.4%), and aged between 18 and 37 years (M = 21.62, SD = 2.30). Participants reported an average daily social media usage of 7.07 ± 4.11 hours, with 31.6% of the sample classified as problematic users. Digital detox awareness was 59.7%, and 58.6% reported having experienced a digital detox. The most common strategies reported were avoiding phone use (69.1%) and muting notifications (70.3%). Participants reported eye strain (59.0%), neck pain (56.7%), and back pain (49.7%) due to the use of smartphones. Significant associations were found between social media use, gender, college affiliation, awareness of digital detox, level of physical activity, and sleep patterns (p < 0.005). 
A positive correlation was found between GPA and digital detoxification (p = 0.01). Social media use was significantly associated with the mental well-being of the participants (F = 214.096, p < 0.001) and with their academic performance (p = 0.04). Conclusions: The relationships between digital behavior, physical health, mental well-being, and academic performance of health profession students are complex and intertwined. The practice of digital detox, as observed, offers improvements in various aspects of students' lives; therefore, incorporating digital wellness strategies into the curriculum is vital for preparing students as professionals and enhancing student outcomes. Clinical Trial: NRR24/007/11
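The categorical associations reported above rely on the standard Pearson chi-square statistic; a minimal stdlib sketch with an invented 2x2 table (not study data) is:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, observed in enumerate(r):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table, e.g. problematic social media use by gender.
stat = chi_square([[20, 10],
                   [10, 20]])  # refer to chi-square with (r-1)(c-1) df
```

As with the t statistic, the value is compared to the chi-square distribution with (r-1)(c-1) degrees of freedom to obtain a p-value.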
Background: Real-world gait assessment has gained momentum in populations with walking impairments, offering insights beyond standardized tests and supporting the integration of remote monitoring into clinical care. However, the full potential of wearable sensors remains limited by the lack of validated population- and context-specific digital biomarkers. Objective: The primary objective was to update the state of the art and summarize challenges in real-world gait assessment for Parkinson’s disease (PD) and stroke. The secondary objective was to report pooled means and standard deviations of gait parameters. Methods: PubMed and Scopus were searched for English-language studies published up to December 31, 2024. Eligible studies included a minimum of five individuals with PD or post-stroke and used wearable sensors to assess gait in real-world settings. Studies conducted solely in laboratory or rehabilitation environments, non-peer-reviewed articles, abstracts, or studies before 2014 were excluded. Results: Of 167 records identified, 34 studies were included, comprising 30 on PD (n=209; 812 [37%] female, 1359 [63%] male, mean age 68.59 years [SD 7.86]) and four on stroke (n=159; 77 [49%] female, 80 [51%] male, mean age 64.16 years [SD 10.51]). The meta-analysis for PD covered seven gait parameters with high heterogeneity across outcomes (I²>97%). Conclusions: Wearable sensors show strong potential for real-world gait assessment, but inconsistent methods call for standardization in sensor placement, algorithm validation, and metric definitions. Stroke populations are underrepresented, highlighting the need for targeted validation. Clinical Trial: This rapid review and meta-analysis was registered with PROSPERO, CRD42024531665.
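The pooled means and standard deviations of the secondary objective can be computed from per-study (n, mean, SD) summaries as if the samples were combined; a sketch with invented values (not data from the review):

```python
def pool_mean_sd(groups):
    """Pool per-study (n, mean, sd) summaries into an overall mean and SD,
    treating all participants as one combined sample."""
    n_total = sum(n for n, _, _ in groups)
    grand_mean = sum(n * m for n, m, _ in groups) / n_total
    # Within-study spread plus the between-study shift of each mean.
    ss = sum((n - 1) * sd ** 2 + n * (m - grand_mean) ** 2
             for n, m, sd in groups)
    return grand_mean, (ss / (n_total - 1)) ** 0.5

# Hypothetical per-study gait-parameter summaries (n, mean, sd).
mean_, sd_ = pool_mean_sd([(3, 2.0, 1.0), (3, 5.0, 1.0)])
```

The sum-of-squares term recovers exactly the variance of the concatenated raw data, which is why the formula is a faithful pooling rule rather than a simple average of SDs.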
Background: Xerostomia is a prevalent condition that negatively affects quality of life. Patients increasingly seek health-related information through online platforms such as YouTube. Given the growing role of social media in digital health communication, evaluating the reliability and quality of publicly accessible video content is essential. Objective: This study aimed to assess the reliability, quality, and content characteristics of YouTube videos related to xerostomia. Methods: In this cross-sectional study, a YouTube search was conducted on January 10, 2025, using the keyword “dry mouth.” The first 100 videos retrieved using the relevance filter were screened. After applying inclusion and exclusion criteria, 46 videos were included in the analysis. Video reliability was evaluated using the Modified DISCERN (mDISCERN) instrument, while quality was assessed using the Global Quality Score (GQS) and the Video Information and Quality Index (VIQI). Videos were further categorized as “useful” or “misleading”. Engagement metrics, including number of likes, views, comments, interaction index, and viewing rate, were recorded. Statistical analyses were performed using SPSS version 22.0, with significance set at P < .05. Results: A substantial proportion of videos demonstrated low reliability and quality. Approximately half of the included videos were classified as misleading. Useful videos had significantly higher mDISCERN, GQS, and VIQI scores compared with misleading videos (P < .05). In addition, useful videos showed significantly higher engagement metrics, including number of likes, views, comments, and viewing rate (P < .05). Positive correlations were observed between reliability and quality scores and engagement parameters. Conclusions: A considerable portion of YouTube videos on xerostomia contains low-quality or misleading information. Although higher-quality videos tend to receive greater user engagement, the presence of inaccurate content remains concerning. 
Increased involvement of healthcare professionals and academic institutions in producing evidence-based digital content may improve the quality of online health information. Clinical Trial: This cross-sectional study evaluated YouTube videos related to xerostomia. As the study analyzed publicly available data on an open-access platform and did not involve human participants or identifiable personal information, ethical approval was not required, consistent with previous similar studies.
Background: ICU-acquired weakness (ICU-AW) research focuses predominantly on intrinsic muscle pathology rather than the integrated systemic interactions commonly studied in exercise science. Peak oxygen uptake (V̇O2peak), V̇O2 on/off kinetics, and skeletal muscle oxygenation provide quantitative evaluation of exercise capacity, yet are infrequently measured in ICU survivors. Routine cardiopulmonary exercise test (CPET) research separates V̇O2peak and V̇O2 kinetics assessments into multiple sessions; a combined experimental approach, however, may enhance diagnosis, follow-up retention, and mechanistic insight for patients with ICU-AW. Objective: This prospective cross-sectional observational study aims to develop a standardized, single-session CPET protocol for combined assessment of V̇O2peak, V̇O2 kinetics, and skeletal muscle microvascular oxygenation in ICU survivors, enabling quantitative and integrated assessment of ICU-AW. Methods: Adults mechanically ventilated for ≥7 days will be recruited to participate, 6 months after ICU discharge, in a modified CPET session on an upright cycle ergometer. The proposed standardization will involve (i) estimation of V̇O2peak using a priori formulae with a V̇O2peak correction factor for the ICU population, (ii) V̇O2 on-kinetics during constant-work-rate (CWR) exercise targeting 30% of V̇O2 reserve, (iii) an incremental ramp exercise based on self-reported functional status, and (iv) a 10-minute recovery to quantify V̇O2 off-kinetics. In addition, a near-infrared spectroscopy (NIRS) sensor will be placed on the vastus lateralis muscle to simultaneously collect the tissue saturation index (TSI) and deoxyhemoglobin (HHb) signals during each phase of the protocol. Results: Recruitment is anticipated to begin in June 2026 and is expected to be completed in 2028. The anticipated sample size is approximately 25 participants, based on convenience sampling and recruitment of 1 participant per month.
Conclusions: The ICU-CARE CPET protocol will enable the quantitative evaluation of V̇O2peak, V̇O2 on/off kinetics, and local microvascular skeletal muscle oxygenation during a single exercise session, facilitating the integrated physiological study of ICU-AW. Clinical Trial: Registered at clinicaltrials.gov under the clinical study ID NCT06193980
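V̇O2 on/off kinetics are conventionally characterized by fitting a mono-exponential response to the constant-work-rate phase. The protocol above does not specify its modeling approach, so the model form and all parameter values below are illustrative assumptions only.

```python
import math

def vo2_mono_exponential(t, baseline, amplitude, tau, delay=0.0):
    """V̇O2 (L/min) at time t (s) after exercise onset under the standard
    mono-exponential model: baseline + amplitude*(1 - exp(-(t - delay)/tau))."""
    if t < delay:
        return baseline
    return baseline + amplitude * (1 - math.exp(-(t - delay) / tau))

# At t = delay + tau the response has covered about 63.2% of its amplitude.
v = vo2_mono_exponential(t=50, baseline=0.5, amplitude=1.0, tau=30, delay=20)
```

The time constant tau is the quantity of clinical interest here: slower kinetics (larger tau) would indicate impaired oxidative adjustment in ICU survivors.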
Background: Shared psychotic disorder (folie à deux) represents a small subset of psychiatric disorders, defined by the transfer of delusional beliefs from an index subject to a closely associated secondary person, most often within socially isolated dyadic units. Although the disorder was first described by Charles Lasègue and Jules Falret in the nineteenth century, it remains a diagnostic challenge in modern clinical practice.
Case Presentation: We report a mother-daughter dyad of rural Indian origin. The index patient was a 49-year-old woman with a two-year documented history of schizophrenia and recent noncompliance with antipsychotics. She had persecutory delusions and third-person auditory hallucinations. Her 29-year-old daughter, with no prior history of psychiatric problems, presented a month later with the same persecutory delusions against her father and brother; however, she had no hallucinations. A dominant-submissive relationship and prolonged social isolation were observed in the family.
Intervention and Outcome: Initial therapeutic separation produced very little improvement in the daughter. The mother responded to olanzapine 10 mg within one week. The daughter required only a lower dose of olanzapine (2.5 mg) and showed considerable improvement within ten days.
Conclusion: This case highlights the need to recognize shared psychosis in socially isolated family systems and shows that pharmacological intervention, as a supplement to separation, can be crucial to achieving the best possible recovery.
Background: Large language models (LLMs) are increasingly embedded in digital health applications and consumer-facing dietary guidance systems. While these systems offer scalable and personalized nutrition support, inappropriate dietary recommendations may pose nutritional or behavioral risks, particularly for vulnerable populations with population-specific dietary constraints. However, systematic and scalable approaches for evaluating the safety of LLM-generated dietary recommendations remain limited. Objective: The objective of this study was to develop and evaluate a reproducible, population-aware auditing framework to quantify nutritional and behavioral risk in LLM-generated dietary recommendations across diverse user profiles, dietary goals, and response tones. Methods: We conducted a content-level audit of 2,464 dietary recommendations generated by a large language model using a full-factorial prompt design that varied user profiles, dietary goals, and response tones. Nutritional information, including daily energy intake and macronutrient distributions, was automatically extracted from generated texts. Population-specific nutritional thresholds derived from international guidelines were applied to assess nutritional risk. Behavioral risk was evaluated using a lexicon-based analysis of potentially unsafe dietary framings. Nutritional and behavioral components were integrated into a continuous composite risk score, enabling large-scale statistical analysis and subgroup comparisons. Results: Across all 2,464 recommendations, composite risk scores were generally low (median 0.008; mean approximately 0.02), indicating broad alignment with evidence-based nutritional thresholds. However, a pronounced long-tail distribution was observed. Elevated risk scores occurred disproportionately in sensitive populations, particularly pregnant individuals requiring glycemic control, with maximum observed values reaching approximately 0.17. 
Increased risk was driven by both population-specific nutritional deviations and the presence of potentially unsafe behavioral framings. Permissive response tones were associated with slightly higher risk levels than neutral, evidence-based tones. Conclusions: Most LLM-generated dietary recommendations appear nutritionally safe for general populations, but systematic long-tail risks persist for vulnerable groups. The proposed population-aware auditing framework enables scalable safety evaluation of generative dietary guidance and provides continuous risk signals that can support benchmarking, red-teaming, and the development of adaptive safeguards in digital health applications. Clinical Trial: Not applicable
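The composite scoring logic can be sketched as a weighted combination of a nutritional deviation term and a lexicon-based behavioral term. The safe ranges, lexicon terms, and equal weights below are illustrative assumptions, not the study's actual thresholds.

```python
def nutritional_risk(kcal, low, high):
    """Relative deviation of daily energy intake outside a
    population-specific safe range [low, high]; 0 when inside."""
    if kcal < low:
        return (low - kcal) / low
    if kcal > high:
        return (kcal - high) / high
    return 0.0

# Illustrative lexicon of potentially unsafe dietary framings.
UNSAFE_TERMS = ("detox", "fasting", "skip meals")

def behavioral_risk(text):
    """Fraction of lexicon terms present in the recommendation text."""
    lowered = text.lower()
    return sum(term in lowered for term in UNSAFE_TERMS) / len(UNSAFE_TERMS)

def composite_risk(kcal, low, high, text, w_nutri=0.5, w_behav=0.5):
    """Weighted combination of nutritional and behavioral risk in [0, 1]."""
    return (w_nutri * nutritional_risk(kcal, low, high)
            + w_behav * behavioral_risk(text))
```

Because both components are continuous, the composite score supports exactly the kind of long-tail subgroup analysis reported above, rather than a binary safe/unsafe label.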
Background: Glucose and lipid metabolism are critically linked to the health outcomes of children and adolescents. Exergaming interventions represent a promising approach to promote physical activity engagement in this population. However, the effects of exergaming on glucose and lipid metabolism remain controversial. This systematic review aimed to synthesize and update the evidence on this topic. Objective: This meta-analysis aimed to evaluate the effects of exergaming on glucose and lipid metabolism in children and adolescents compared with control conditions, and to examine potential moderators of these metabolic outcomes. Methods: Following the PRISMA 2020 guideline, we searched PubMed, Web of Science, Scopus, Embase, and the Psychology and Behavioral Sciences Collection (EBSCO) from inception to October 2025. Standardized mean differences (Hedges g) were pooled using random-effects models. Subgroup analyses and meta-regression were conducted to examine potential moderators (eg, sex, age, BMI, and intervention type). Study quality was assessed using RoB 2, ROBINS-I, and the PEDro scale, and the certainty of evidence was rated using the GRADE approach. Results: Ten trials (N=732) were included. Exergaming showed no significant pooled effects on glucose or insulin. For lipid outcomes, exergaming was associated with a small reduction in LDL-C (Hedges g −0.27, 95% CI −0.47 to −0.07; P=.008; I²=19%), whereas no significant overall changes were observed for TC, TG, or HDL-C. Exploratory subgroup and meta-regression analyses suggested that sex and intervention type may be associated with variability in effects, but these findings should be interpreted cautiously given the limited number of studies. The overall certainty of evidence was low. Conclusions: Exergaming may modestly reduce LDL-C in children and adolescents, but evidence does not support consistent improvements in other glycemic or lipid outcomes.
Given the low certainty of evidence and limited data for effect modification, larger, well-designed trials with clearly reported exercise dose and metabolic endpoints are needed to confirm these findings and to identify subgroups most likely to benefit. Clinical Trial: OSF Registries 10.17605/OSF.IO/64FUS.
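Random-effects pooling of Hedges g values is typically done with an estimator such as DerSimonian-Laird; a stdlib sketch with invented effect sizes (not the review's data):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled effect and tau^2 via DerSimonian-Laird."""
    w = [1 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    # Cochran's Q measures excess dispersion around the fixed-effect mean.
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight each study by total (within + between) variance.
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Two hypothetical effects (Hedges g) with their sampling variances.
pooled, tau2 = dersimonian_laird([0.1, -0.5], [0.04, 0.04])
```

When tau^2 is zero the estimate collapses to the fixed-effect inverse-variance mean; positive tau^2 spreads weight more evenly across heterogeneous studies.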
Background: The growing elderly population can directly impact countries’ productivity and pose significant challenges for governments, becoming a potential public health concern due to the increasing prevalence of health issues such as frailty, dementia, mobility limitations, and cardiovascular diseases. One promising approach is to integrate emerging technologies, such as wearable devices, machine learning, and smart sensors, to support older adults in their daily activities. These technologies can promote independence by enabling the safe execution of essential tasks while allowing continuous, 24-hour monitoring through mobile health systems. Objective: This research paper aims to evaluate the current use of consumer-grade wearable technologies, in combination with machine learning techniques, to promote autonomy and enhance daily activities among older adults. Methods: We conducted a systematic review in accordance with the Cochrane Handbook and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to synthesize evidence on the use of wearable technologies combined with artificial intelligence, particularly machine learning methods, to support daily living and prevent falls among older adults. The search was conducted in PubMed, MEDLINE, Scopus, the Institute of Electrical and Electronics Engineers (IEEE) Xplore, and the Association for Computing Machinery (ACM) Digital Library from their inception through April 2025, with no date or language restrictions. Results: Twenty-four studies were included. Most studies were observational or methodological and relied primarily on inertial sensing from wrist- or waist-mounted devices. The main application domains were activities of daily living monitoring, gait and mobility assessment, cognitive impairment, Parkinson’s disease symptoms, fall risk and detection, and frailty assessment.
Classical machine learning models (e.g., support vector machines and random forests) and deep learning architectures (e.g., convolutional neural networks [CNNs] and long short-term memory networks [LSTMs]) were both widely used. However, studies were highly heterogeneous, frequently involved small samples, and rarely performed external validation or reported clinically actionable outcomes. Conclusions: Consumer-grade wearable devices, when combined with machine learning, show promise in supporting autonomy, daily activity monitoring, and fall-related safety in older adults. Nevertheless, the current evidence base is limited by methodological heterogeneity, small sample sizes, scarce external validation, and limited clinical integration. Future research should prioritize real-world evaluations, standardized reporting (e.g., TRIPOD-AI), interdisciplinary co-design, and patient-centered outcomes to enable translation into routine care. Clinical Trial: International Prospective Register of Systematic Reviews (PROSPERO) Registration: CRD420251044449
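As a deliberately naive baseline for the inertial fall-detection pipelines surveyed here, an impact-threshold rule on accelerometer magnitude can be sketched as follows. The threshold and samples are invented; the reviewed systems replace such rules with trained classifiers (SVMs, random forests, CNNs, LSTMs).

```python
def detect_fall(samples, threshold_g=2.5):
    """Flag a fall if any triaxial accelerometer sample (in g)
    exceeds the impact threshold in magnitude."""
    return any((x * x + y * y + z * z) ** 0.5 > threshold_g
               for x, y, z in samples)

walking = [(0.1, 0.2, 1.0), (0.0, 0.3, 0.9)]   # ~1 g: normal gait
impact = walking + [(0.5, 0.5, 3.4)]            # hypothetical fall spike
```

Threshold rules of this kind are cheap but brittle (sitting down hard also spikes the signal), which is precisely why the literature moves to learned classifiers on richer features.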
Background: Most studies on internet use and health outcomes among older adults rely on cross-sectional designs and binary exposure measures, which fail to capture multidimensional health-related digital engagement. The high collinearity between digital engagement and socioeconomic factors makes it challenging to disentangle independent effects from marker effects. Currently, longitudinal evidence linking health-related digital engagement to incident stroke remains limited. Objective: This study aimed to examine the longitudinal association between a composite Health-Related Digital Engagement Index (HDEI) and incident stroke among community-dwelling older adults. In addition, it sought to quantify the extent to which socioeconomic factors account for this association. Methods: This prospective cohort study used data from the National Health and Aging Trends Study (NHATS), Waves 1-10 (2011-2020). The HDEI (range 0-4) was constructed from 4 health-related internet behaviors at baseline. The primary outcome was incident stroke ascertained by self- or proxy-reported physician diagnosis. Discrete-time hazard models with a complementary log-log link were fitted with 4 nested models progressively adjusting for demographics, socioeconomic factors, chronic disease burden, disability, and social isolation. Results: Among 5,384 participants (81.6% HDEI=0; 10.5% HDEI=1; 7.9% HDEI≥2) followed for a median of 5 years (IQR 2-9), 470 incident stroke events occurred. In the unadjusted model, each 1-point HDEI increase was associated with 24% lower stroke risk (hazard ratio [HR] 0.76, 95% CI 0.66-0.88; P<.001). After adjustment for age and sex, the association attenuated but remained significant (HR 0.82, 95% CI 0.71-0.94; P=.006). Upon further adjustment for race or ethnicity, education, and income, the association was no longer significant (HR 0.91, 95% CI 0.78-1.05; P=.18); full adjustment yielded similar results (HR 0.91, 95% CI 0.79-1.04; P=.18).
Subgroup analyses showed a stronger association among men (HR 0.70, 95% CI 0.55-0.89; P=.003), though no interaction terms reached significance. Sensitivity analyses excluding early events and substituting cellphone use as an alternative exposure yielded consistent attenuation patterns. Conclusions: In unadjusted and sociodemographic-adjusted models, higher health-related digital engagement was associated with lower stroke incidence. However, after adjusting for socioeconomic factors, this relationship was attenuated and no longer significant. The observed association between digital engagement and stroke risk appears to be predominantly confounded by socioeconomic advantage. Therefore, digital health interventions aiming at stroke prevention should address both the digital divide and the underlying socioeconomic determinants of cerebro-cardiovascular risk.
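A useful property of the complementary log-log link is that exp(beta) is a hazard ratio on the underlying continuous-time scale, even though the model is fitted to discrete intervals. The sketch below illustrates this with the unadjusted HR of 0.76 reported above; the baseline intercept is an invented value, and real fitting would use a GLM with a binomial family and cloglog link.

```python
import math

def cloglog_hazard(lin_pred):
    """Per-interval discrete-time hazard implied by a complementary
    log-log model: h = 1 - exp(-exp(linear predictor))."""
    return 1 - math.exp(-math.exp(lin_pred))

alpha = -3.0           # hypothetical baseline intercept for one interval
beta = math.log(0.76)  # log hazard ratio per 1-point HDEI increase
h0 = cloglog_hazard(alpha)         # hazard at HDEI = 0
h1 = cloglog_hazard(alpha + beta)  # hazard at HDEI = 1
# Ratio of cumulative hazards -log(1-h) recovers exp(beta) exactly.
ratio = math.log(1 - h1) / math.log(1 - h0)
```

Because -log(1-h) equals exp(linear predictor), the ratio is exp(beta) = 0.76 regardless of the baseline, which is what makes the cloglog specification the natural discrete-time analogue of a proportional-hazards model.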
Background: The COVID-19 pandemic triggered an abrupt transition to virtual rehabilitation across physiotherapy, occupational therapy, and respiratory therapy. While telerehabilitation research has documented feasibility and patient satisfaction, less is known about how professionals navigated the destabilization and reassembly of care practices during this transformation. Existing literature frames virtual care as a technical substitution for in-person services, overlooking the deeper reconfiguration of the socio-technical networks that organize therapeutic work. Objective: Applying actor-network theory (ANT), we examined how rehabilitation professionals reconfigured their practices through technology during the first year of the pandemic. We explored how digital tools, domestic spaces, and new sensory practices reshaped therapeutic presence, professional identity, and the environments in which care was enacted. Methods: We conducted a secondary analysis of longitudinal diary-interview data collected from 16 Canadian rehabilitation professionals (occupational therapists, physiotherapists, and respiratory therapists) working in community-based primary care in Ontario and Manitoba (2020-2021). Participants recorded audio diaries over 12 weeks and completed two follow-up interviews. Analysis followed an interpretive approach informed by Science and Technology Studies, tracing how human and technological actors were enrolled, adapted, and redefined within emerging care assemblages. 
Results: Three interconnected processes characterized the reconfiguration of rehabilitation: (1) technology as active participant, where digital platforms mediated rather than merely transmitted therapeutic reasoning and clinical decision-making; (2) reconfiguration of therapeutic presence, as sensory attention and embodiment were redistributed across screens, sounds, and new forms of spatial choreography; and (3) enrollment of domestic spaces as clinical environments, as clinicians' and patients' homes became sites of care shaped by new ethical, material, and relational dynamics. These processes reveal that virtual rehabilitation constituted a new form of care co-produced by humans, technologies, and spaces rather than a digitized replication of traditional practice. Conclusions: The pandemic exposed rehabilitation as a socio-technical practice sustained through the coordination of multiple actors rather than professional expertise alone. Virtual care redefined therapeutic presence when traditional boundaries between clinical and domestic, human and technological, were blurred. Recognizing virtual care as a distinct modality underscores the need to integrate technology-mediated competencies into rehabilitation education and practice. Future research should incorporate patient perspectives and direct observation to trace how these care networks evolve.
Background: The aging population has become a rapidly expanding user base in smart hospital outpatient departments, posing a significant challenge. Their lower familiarity with digital technology, together with inherent device design flaws, reduces overall satisfaction among older patients. Objective: To better understand these barriers and promote equitable access to digital healthcare for this demographic, this study examined user satisfaction with self-service kiosks and its influencing factors among older patients. Methods: A cross-sectional study was conducted among 240 older outpatients recruited from a tertiary hospital in Beijing from July to September 2025. Using a 26-item questionnaire based on the DeLone and McLean Information Systems Success Model (D&M IS Success Model), we employed descriptive statistics and ordinal logistic regression to analyze the determinants of user satisfaction, which were visualized via a forest plot. Results: User satisfaction exhibited significant positive associations with both information quality and interface quality among older patients, consistent with the D&M IS Success Model. Additionally, seeking assistance from fellow patients emerged as a marginally significant predictor of user satisfaction and merits further consideration. Conclusions: Age-friendly optimization of self-service kiosks, particularly in information quality and interface quality, is a cornerstone of equitable digital healthcare and enhanced patient satisfaction within smart services. Furthermore, interpersonal peer support may act as a critical driver in promoting the use of self-service kiosks among older patients.
Background: Mental rotation, the ability to mentally transform visuospatial representations, supports everyday spatial behaviors (e.g., navigation) and can be vulnerable in later life. Older adults with mild cognitive impairment (MCI) often show greater difficulties in visuospatial processing than cognitively unimpaired peers, including lower accuracy and higher variability on mental rotation tasks. Because MCI represents a prodromal stage associated with elevated risk of subsequent dementia, it may offer an important window, before rapid cognitive decline, for interventions that target specific cognitive vulnerabilities. In non-amnestic forms of MCI (MCI-NA), visuospatial and/or executive deficits can be prominent, and longitudinal outcomes are heterogeneous, varying in part by underlying neuropathology. Accordingly, interventions explicitly designed to engage visuospatial processes relevant to MCI-NA may offer a useful, deficit-targeted approach to evaluate in feasibility studies and to inform future controlled trials of cognitive training programs aiming to prolong daily functioning and reduce suffering. Objective: This study aims to evaluate the feasibility and acceptability of the Virtual Reality-Visuospatial Cognitive Training (VR-VCT) program in older adults with MCI-NA and to estimate preliminary within-subject changes in visuospatial cognition to inform a future randomized trial. Participants (n=40) will meet eligibility criteria consistent with commonly used definitions of MCI-NA, including subjective cognitive concerns, preserved basic activities of daily living, absence of dementia, and objective impairment on standardized measures emphasizing visuospatial and/or executive functioning. This study aims to: (1) quantify the feasibility and acceptability of VR-VCT in older adults with MCI-NA, and (2) estimate preliminary within-subject change on visuospatial cognitive outcomes following VR-VCT. 
Methods: Forty participants with MCI-NA will be enrolled in a structured VR Cubism program using the Meta Quest 3 VR headset. The intervention will involve three 30-minute sessions per week for 12 weeks, with tasks progressing in difficulty over time. Cognitive and visuospatial outcomes will be assessed at baseline (T0), immediately post-intervention (T1), and at follow-up (T2; 12 weeks post-intervention) to evaluate whether observed changes are maintained. Global cognition will be assessed using the Montreal Cognitive Assessment (MoCA). Visuospatial construction will be assessed using the Wechsler Adult Intelligence Scale (WAIS) Block Design subtest, and mental rotation will be assessed using the Vandenberg Mental Rotation Test (VMRT). Changes in performance across time points will be analyzed using repeated-measures models (e.g., linear mixed models) to estimate within-subject change, with effect sizes and confidence intervals reported to inform future controlled trials. Results: Participants will be recruited from local assisted living facilities, memory care settings, and community outreach programs. This study has been approved by the University of Utah School of Medicine Institutional Review Board. Data collection will begin in March 2026, and data analysis is anticipated to conclude by August 2026. Conclusions: The findings will inform the study design, outcome measurement, and power calculations of a future randomized controlled trial. If feasible and acceptable, VR-VCT may represent a scalable, engaging, and deficit-targeted intervention approach with the potential to support visuospatial cognitive functioning during a critical window prior to dementia onset.
Background: eHealth literacy is widely assumed to drive how people seek health information online—yet this assumption rests on a body of evidence that has never been quantitatively synthesized across population groups or examined against the algorithmically mediated environments in which today's users actually navigate health content. Contemporary measurement of eHealth literacy relies heavily on the eHealth Literacy Scale (eHEALS), a tool calibrated to Web 1.0-era search behaviors. Whether eHEALS retains predictive validity for online health information seeking behavior (OHIS) across generationally distinct cohorts—particularly digital natives who acquire health knowledge through curated feeds and short-video platforms rather than deliberate search—remains an open and consequential question. Objective: This study aims to quantify the strength and heterogeneity of the association between eHealth literacy and OHIS, and to identify boundary conditions across generation, morbidity status, and information source credibility. Methods: Following PRISMA guidelines, we searched PubMed, Embase, Web of Science Core Collection, PsycINFO, and Library, Information Science & Technology Abstracts for studies published through April 28, 2025. Eligible studies enrolled individual-level participants, assessed eHealth literacy with validated instruments, and measured active OHIS. Two independent reviewers extracted data and appraised study quality using the modified Newcastle-Ottawa Scale. Pearson r values were transformed to Fisher's z and pooled under a random-effects model; moderator analyses were performed for the three prespecified subgroups. Results: Of 8,090 nonduplicate records, 30 studies entered the qualitative synthesis and 18 (19 independent effect sizes) the meta-analysis. The overall pooled correlation was r = 0.30 (95% CI 0.18–0.41; P < .001), indicating a small-to-moderate association. 
Subgroup analyses revealed a strikingly uneven pattern: among non-Gen Z participants the correlation was r = 0.39, whereas in Gen Z it was near zero (r = 0.08)—suggesting that eHEALS-measured literacy is largely disconnected from how this cohort seeks health information. The association was substantially stronger among patients than nonpatients (r = 0.56 vs. r = 0.23) and for professional versus nonprofessional sources (r = 0.38 vs. r = 0.26). No significant publication bias was detected (Egger's test, P = .38). Conclusions: The near-zero eHealth literacy–OHIS association in Gen Z is the study's most consequential finding: it indicates that eHEALS has limited predictive validity for a generation that navigates health content through algorithmically curated feeds, short-video platforms, and AI-assisted interfaces rather than deliberate keyword search. Interpreted through a Motivation-Ability-Opportunity lens, perceived ability no longer constrains seeking behavior in digital natives—motivational activation and platform affordances do. These findings challenge the field to move beyond self-report confidence measures toward platform-sensitive, performance-based instruments, and call for intervention designs that pair literacy skills with motivational and environmental cues rather than treating literacy as a standalone determinant of health information behavior.
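As a concrete illustration of the pooling step named in the Methods, the sketch below applies Fisher's r-to-z transform and inverse-variance weighting to a few hypothetical correlations. The study values here are invented, not the 19 effect sizes from the review, and a fixed-effect weighting is used for brevity; the paper's random-effects model would additionally estimate between-study variance (e.g., via DerSimonian-Laird).

```python
import math

def fisher_z(r):
    # Fisher's r-to-z transform; the z values are approximately
    # normal with variance 1 / (n - 3).
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    # Back-transform a pooled z to the correlation scale.
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

# Hypothetical per-study (r, n) pairs -- NOT data from the review.
studies = [(0.35, 120), (0.22, 300), (0.41, 85)]

# Inverse-variance (fixed-effect) pooling on the z scale.
weights = [n - 3 for _, n in studies]
z_bar = sum(w * fisher_z(r) for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_r = inverse_fisher_z(z_bar)
print(round(pooled_r, 2))
```

Pooling on the z scale rather than on r directly is standard because the sampling distribution of r is skewed for |r| far from zero, while z is approximately normal.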
Background: Herpes zoster (HZ) imposes a substantial disease burden, yet vaccine uptake remains suboptimal in China. While eHealth literacy is a known determinant of health behaviors, its role in bridging socioeconomic disparities and its varying impact across different age groups of vaccine-eligible adults remain understudied. Specifically, it is unclear whether eHealth literacy acts as a "compensatory resource" for disadvantaged populations and if the digital skills required to reduce hesitancy differ between middle-aged and older adults. Objective: This study aimed to examine the association between eHealth literacy and HZ vaccine hesitancy among adults aged 40 years and older in Shanghai, China, with a specific focus on identifying age-dependent "digital thresholds" and the compensatory effect of literacy on socioeconomic status (SES). Methods: A community-based cross-sectional study was conducted from October to December 2022 across three districts in Shanghai. A total of 1302 adults aged ≥40 years were recruited via convenience sampling. eHealth literacy was assessed using the eHealth Literacy Scale (eHEALS). Multivariable logistic regression models were used to analyze the associations, adjusting for sociodemographic characteristics, health status, and behaviors. Stratified analyses were performed to evaluate interactions among literacy, age, and SES. Results: The prevalence of HZ vaccine hesitancy was 88.2% (1149/1302). In the fully adjusted model, participants with medium (odds ratio [OR] 0.538, 95% CI 0.326-0.886; P=.015) and high (OR 0.472, 95% CI 0.264-0.844; P=.011) eHealth literacy demonstrated significantly lower odds of hesitancy compared to those with low literacy. 
Age-stratified analyses revealed a distinct "digital threshold" effect: for middle-aged adults (40–59 years), medium literacy was sufficient to significantly reduce hesitancy (OR 0.501, 95% CI 0.265-0.949; P=.034), whereas older adults (≥60 years) required high literacy to achieve a significant protective effect (OR 0.347, 95% CI 0.136-0.882; P=.026). Crucially, eHealth literacy exhibited a strong compensatory effect for socioeconomic disadvantage. Among participants with low SES, high eHealth literacy was associated with an 83.1% reduction in the odds of hesitancy (OR 0.169, 95% CI 0.054-0.528; P=.002), a magnitude of effect not observed in higher SES groups. Additionally, a history of HZ infection was identified as a robust protective factor (OR 0.473, 95% CI 0.309-0.724; P=.001). Conclusions: eHealth literacy serves as a critical compensatory resource that can mitigate the disadvantage of low socioeconomic status in HZ vaccine acceptance. However, the protective mechanism is age-dependent, indicating a higher "digital threshold" for older adults (≥60 years) compared to their middle-aged counterparts. Public health interventions should prioritize digital empowerment for low-SES populations and tailor educational strategies to meet the higher digital competency needs of older adults. Clinical Trial: Not available
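The "83.1% reduction in the odds of hesitancy" quoted above follows directly from the reported odds ratio: an OR below 1 corresponds to (1 − OR) × 100 percent lower odds. A minimal check:

```python
def odds_reduction_pct(odds_ratio):
    # An odds ratio of 0.169 (low-SES, high-literacy subgroup)
    # implies (1 - 0.169) * 100 = 83.1% lower odds of hesitancy.
    return (1 - odds_ratio) * 100

print(round(odds_reduction_pct(0.169), 1))  # 83.1
```

Note this is a reduction in odds, not in absolute risk; with hesitancy prevalence as high as 88.2%, the two differ substantially.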
Background: Depression and anxiety are common mental disorders across all age groups. Digital intelligent interventions not only overcome the time and space limitations of traditional psychotherapy but also provide innovative pathways for treating these conditions. However, their effectiveness among groups with different demographic characteristics remains to be clarified. Objective: To evaluate the effectiveness of digital intelligence interventions on symptoms of depression and anxiety using meta-analytic methods. Methods: We searched the PubMed, Embase, Cochrane Library, Web of Science, and BIOSIS databases from inception through June 2025 for randomized controlled trials (RCTs) of digital interventions targeting depression or anxiety. Two reviewers independently screened the studies, extracted the data, and assessed the risk of bias using the Cochrane Risk of Bias tool. Meta-analyses were performed using RevMan 5.4 and Stata 15.0. Standardized mean differences (SMDs) with 95% confidence intervals (CIs) were used to assess continuous outcomes. Heterogeneity and subgroup analyses were performed. Results: Nineteen RCTs involving 4,679 participants were included. Compared with controls, digital interventions significantly reduced depressive symptoms (SMD = −0.25; 95% CI, −0.41 to −0.09; P = .002) and anxiety symptoms (SMD = −0.20; 95% CI, −0.32 to −0.08; P = .0009). Subgroup analysis by intervention duration indicated the largest effect for depressive symptoms at approximately 4 weeks (SMD = −0.26; 95% CI, −0.40 to −0.11; P = .0006). The greatest reduction in anxiety symptoms was observed at 5–8 weeks (SMD = −0.22; 95% CI, −0.47 to 0.03; P = .08), though this did not reach statistical significance. App-based interventions demonstrated the largest effects on depression (SMD = −0.44; 95% CI, −0.82 to −0.06; P = .02) and anxiety (SMD = −0.36; 95% CI, −0.59 to −0.12; P = .003). 
Furthermore, therapeutic efficacy was greater among older adults, for both depression (SMD = −0.37; 95% CI, −0.64 to −0.09; P = .009) and anxiety (SMD = −0.51; 95% CI, −0.87 to −0.14; P = .006). Conclusions: Digital intelligence interventions produce small but significant reductions in depressive and anxiety symptoms, with app-based delivery and older adults showing the greatest benefits and intervention duration moderating effect size. Clinical Trial: CRD420251069160
Background: Home spirometry has been widely adopted in the delivery of cystic fibrosis (CF) care. While existing literature largely supports its feasibility and positive outcomes, behaviour around home disease monitoring remains poorly understood. Objective: This study aimed to evaluate healthcare professionals’ (HCPs') ability to estimate home spirometry usage among people with CF (pwCF) and to compare these estimates with actual recorded data. Methods: Home spirometry data for the year 2024, from a single large adult CF centre, were obtained from NuvoAir. HCPs (doctors, nurses, and physiotherapists) rated their familiarity with each pwCF and categorised them as infrequent, expected, or highly frequent spirometry users. They were also asked to estimate spirometry usage as an open-ended numerical response. CF experience was defined as the number of years the HCP had worked at the centre. Estimation accuracy was assessed using mean bias and mean absolute error (MAE). Results: 10 doctors (35.7%), 6 nurses (21.4%), and 12 physiotherapists (42.9%) responded to the survey, an overall response rate of 96.6%. There were 790 completed categorical estimates and 794 numerical estimates. The mean (±SD) CF experience was 15.7 (±8.2) years. Across all roles, HCPs systematically underestimated home spirometry usage (mean bias −4.9; MAE 6.32). No significant differences in estimation accuracy were observed by professional role, reported familiarity, or CF experience. Conclusions: This study found that CF caregivers tend to underestimate home spirometry usage, in contrast to other studies showing they often overestimate treatment adherence. This highlights gaps in understanding behaviour in pwCF and the need for CF teams to adapt to evolving models of remote monitoring.
Background: People living with chronic diseases increasingly rely on online sources to support ongoing self-management. While digital environments expand access to health information, they also expose patients to information of varying credibility, including misinformation. Prior studies have largely described information-seeking behaviours or misinformation exposure separately, with limited integration of verification processes. Objective: This study examined the interrelationships between online health information seeking (HIS), verification behaviour (VER), and misinformation-related perceptions (MIS) among individuals with chronic diseases using a behaviourally integrated framework. Methods: A cross-sectional online survey was conducted among adults with self-reported chronic diseases. The questionnaire assessed health information seeking, verification practices, and perceptions of health misinformation using Likert-scale measures. Data were analysed using descriptive statistics, visual analytics, and structural equation modelling (SEM) to evaluate direct and indirect associations between constructs. Results: Participants reported frequent engagement with online health information and widespread use of verification strategies. SEM analysis revealed a strong positive association between HIS and VER (β = 0.81), indicating that active information seeking was closely linked to credibility assessment behaviours. HIS was positively associated with MIS (β = 0.41), suggesting that greater engagement increased awareness of misleading content. VER demonstrated a modest negative association with MIS (β = −0.29), consistent with a buffering effect whereby verification practices partially attenuated misinformation-related perceptions. Conclusions: Findings support a mechanistic interpretation in which online health information seeking promotes verification behaviour, and verification practices may mitigate the perceived impact of misinformation. 
These results extend beyond descriptive accounts by demonstrating how information-seeking and evaluative behaviours interact within misinformation-rich digital environments. Interventions that reinforce verification strategies and embed credibility cues within commonly used platforms may strengthen safe digital health engagement among chronic disease populations.
In February 2025, Andrej Karpathy introduced vibe coding—building software by describing intent in natural language rather than writing precise code. This concept captured a broader paradigm shift: from prompt engineering toward context engineering, where the richness of context supplied to artificial intelligence (AI) determines output quality more than the precision of commands. We propose that the same principle applies to health. Vibe Health is a dual-sided paradigm in which individuals reach actionable health decisions through honest, iterative conversations with AI—without requiring medical knowledge or prompt expertise. The term deliberately extends Karpathy’s metaphor: just as vibe coding showed that programming skill matters less than the ability to articulate intent, Vibe Health posits that medical knowledge matters less than the ability to describe what is happening in one’s body.
On the patient side, the core principle is that an honest prompt outperforms a perfect prompt: candid descriptions of symptoms, emotions, and lived context generate more useful AI responses than technically polished queries. On the physician side (Vibe Clinical), we reframe doctors not as novice prompt engineers but as the most experienced context engineers in any professional domain—their history-taking, physical examination, and clinical reasoning skills are precisely the context injection capabilities that enable high-quality AI interaction.
We introduce the FTCAV model (Feel–Tell–Converse–Act–Verify) as an integrated behavioral framework that operationalizes Vibe Health for both patients and physicians. The model is grounded in interoception research—specifically the distinction between interoceptive accuracy (detecting bodily signals) and interoceptive awareness (reporting them)—and extends health behavior theory (the Capability–Opportunity–Motivation–Behavior model) to the conversational dynamics of AI-mediated health interactions. Each stage represents a discrete behavioral step: sensing a bodily signal or clinical cue (Feel), expressing it in natural language or structured clinical terms (Tell), refining understanding through multiturn AI dialogue (Converse), converting insight into executable action (Act), and confirming with appropriate authority (Verify).
We examine the emerging medicolegal implications of AI-mediated health conversations, arguing that patients’ timestamped, contextualized AI dialogue logs carry evidentiary weight that physicians cannot safely ignore. We call for three specific actions: incorporation of Vibe Health principles into patient-facing AI platforms and health education programs, piloting of Vibe Clinical modules in medical school curricula, and development of professional guidelines for the documentation and clinical integration of patients’ AI conversation records.
Background: Perioperative respiratory care is a multidisciplinary process that includes several sequential steps and handoffs. Variability and inefficiencies within this workflow may delay care delivery and increase the workload of clinical staff. Quality Control Circles (QCCs) have been widely used in health care as a practical approach to addressing process-related quality problems identified in clinical practice. Objective: The aim of this study was to assess whether a Quality Control Circle–based intervention could improve workflow efficiency and care consistency in perioperative respiratory care. Methods: We conducted a before-and-after time–motion study on 30 perioperative respiratory care episodes to compare workflow before and after the implementation of a quality control circle. Recorded variables were: (1) total time from physician order to completion of respiratory care, (2) patient waiting time for incentive spirometry preparation, and (3) time clinical staff spent on patient education and respiratory training. Eligible patients were those prescribed perioperative respiratory care before or after surgery; those with prior exposure to the intervention or with hearing impairment were excluded. Guided by the Plan–Do–Check–Act cycle, improvement strategies included standardizing the provision of incentive spirometry, pre-positioning equipment at nursing stations, unifying education content, and delivering multimedia educational materials via quick response codes. Results: Before quality control circle implementation, the total process time was 266.65 minutes (240 minutes for equipment preparation and 17.5 minutes for patient education). After implementation, it dropped to 28.75 minutes (8.7 minutes for preparation and 10.5 minutes for patient education), improving overall efficiency by 89.2% and significantly reducing workflow time. 
Conclusions: A quality control circle framework not only optimized perioperative respiratory care but also engaged frontline staff, fostering a sense of teamwork and shared purpose. Multimedia patient education improved understanding and engagement, and cross-disciplinary collaboration reduced clinical workload. This strategy may reduce postoperative pulmonary complications and can be applied to other respiratory care workflows.
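The 89.2% overall efficiency figure reported above can be reproduced from the before and after process totals:

```python
# Percentage improvement from the total process times in the
# Results (266.65 min before the QCC, 28.75 min after).
before_min, after_min = 266.65, 28.75
improvement_pct = (before_min - after_min) / before_min * 100
print(round(improvement_pct, 1))  # 89.2
```

The same formula applied to the preparation component alone (240 to 8.7 minutes) shows that equipment pre-positioning accounts for nearly all of the gain.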
Background: During a public health emergency, interest in unsafe or illegitimate medications can delay appropriate treatment and foster medical mistrust. Objective: Our study investigated the possibility of bidirectional relationships between online search interest and media coverage to evaluate exposure to and access of health-related information during the COVID-19 pandemic. Specifically, we examined: 1) the extent to which US news sources covered supposed COVID-19 treatments, 2) the extent of public interest in these treatments, as reflected by online search interest, and 3) the relationship between these data sources within the US. Methods: We obtained daily US-based Google Search Trends and Media Cloud data from 2019-2022 to assess the relationship between search interest and media coverage in three purported COVID-19 treatments: hydroxychloroquine, ivermectin, and remdesivir. Results: Search interest and media coverage of all COVID-19 treatments were significantly elevated during the study period; search interest was highest for ivermectin (6.0 out of 100; interquartile range [IQR]: 1.9-9.9), while media covered hydroxychloroquine most frequently (0.05% of all articles published; IQR: 0.02-0.13%). Anomaly detection of both data sources identified several points of higher-than-expected activity; anomalies in search interest and media coverage showed similar patterns within treatments. There were distinct patterns of media coverage – while the plurality of sources for all treatments were considered “Left” or “Left Leaning”, ivermectin was covered by the highest number of “Right”-biased sources and remdesivir had the highest coverage by “Center” or “unbiased” sources. When assessing the co-occurrence of words and phrases in media sources covering each of the treatments, there were distinct qualitative differences in the categories of words appearing alongside the drugs. 
Specifically, ivermectin appeared to be reported more frequently in association with specific individuals than media mentioning the other drugs. In Google searches, people seemed most interested in understanding what hydroxychloroquine is and the uses of ivermectin (e.g., “for humans”, “for dogs”). We found significant associations between media coverage and search interest for all three treatments. Media coverage had the strongest impact on same-day search interest for remdesivir (199.7% increase, 95% CI: 179.2, 221.6) and hydroxychloroquine (182.6% increase, 95% CI: 172.8, 192.7); interest dropped significantly 1 and 2 days after media coverage of these treatments. Interest in ivermectin was lower overall (105.0% increase, 95% CI: 97.9, 112.3) but stayed elevated even 2 days after media coverage. When evaluating the separate impact of search interest on media coverage, the associations were much weaker for all three treatments and all lagged conditions. Conclusions: During a public health emergency, the information that populations access can directly influence health-seeking behaviors, with potentially life-threatening consequences. More broadly, positive media coverage of unsafe or unapproved medications can deter individuals from trusting and accessing safe alternatives that are more likely to be efficacious in preventing disease progression. Given the strong association between treatment-related news media coverage and public interest in said treatments, our results suggest that news media serve as a powerful mechanism for experts to inform the landscape of public opinion and to reach audiences during future public health emergencies.
Large language models (LLMs) are increasingly integrated into everyday health-care communication, moving beyond experimental evaluation into routine clinical and informational use. Early research has primarily focused on technical performance, including accuracy, validation, and bias mitigation. While these remain essential, the transition to sustained real-world integration raises additional ethical questions that cannot be addressed by model-level evaluation alone.
This Viewpoint proposes an adoption-phase ethics perspective, emphasizing how ethical risks shift as LLMs become embedded within institutional workflows, professional practices, and relationships of care. Drawing on normative analysis informed by existing empirical discussions, we examine three interrelated domains: trust, responsibility, and equity. During routine use, trust becomes shaped not only by perceptions of accuracy but also by expectations regarding accountability, transparency, and institutional protection. Responsibility may become diffused or ambiguous when LLM-mediated information influences clinical communication without clearly specified oversight. At the same time, differential digital literacy and access to institutional support may create uneven capacity to interpret and benefit from AI-generated information.
We argue that ethical governance must therefore extend beyond pre-deployment technical safeguards toward sustained, system-level oversight. Adoption should be understood as a dynamic ethical process requiring role-sensitive design, clear accountability structures, and equity-oriented implementation. By reframing ethical attention from experimental validation to governance during routine integration, health-care systems can better ensure that the growing presence of LLMs supports fairness, responsibility, and patient trust alongside technical advancement.
Background: Greater caregiver knowledge of early language development may support more reciprocal interactions with infants and improved child language outcomes. Short-form video interventions on social media platforms like Instagram and TikTok are a scalable, cost-effective way to increase caregiver knowledge. Objective: The objective of this investigation is to describe the development and testing of a version of the BabyTok Project, a social media-based intervention for caregivers of very young infants under three months of age. Methods: We used a mixed-methods, pre-post quasi-experimental design to examine the feasibility, acceptability, and initial evidence of effectiveness of the BabyTok Project through a knowledge measure called the SPEAK-R. Results: Engagement metrics, participants’ post-intervention surveys, and feedback support the feasibility and acceptability of the intervention. Statistically significant improvements in SPEAK-R scores offer initial evidence of change in caregiver knowledge. Conclusions: The BabyTok Project has multiple demonstrations of its feasibility and acceptability and shows promise as a means to increase caregiver knowledge of early infant communication development and strategies to enhance children's learning. Interventions like the BabyTok Project could be used as a population-wide support to promote optimal child language outcomes.
Background: The COVID-19 pandemic accelerated the adoption of technology-mediated mental health services, yet questions remain about whether immersive digital platforms can match the therapeutic effectiveness of traditional face-to-face therapy. Virtual Reality (VR) offers unique affordances beyond conventional telehealth by providing embodied presence and shared virtual spaces, potentially addressing limitations of video-based teletherapy. However, empirical evidence directly comparing VR-mediated therapy with in-person sessions using both subjective and objective measures remains scarce. Objective: This study aimed to compare therapeutic engagement, self-disclosure, and emotional arousal between VR-mediated and face-to-face initial counselling sessions using a multi-methods approach combining self-report measures, qualitative participant feedback, and continuous physiological monitoring. Methods: We conducted a within-subjects experimental study with 30 adult participants (19 male, 11 female; mean age 32.5 years, SD 12.1) who each completed one VR-based and one face-to-face counselling session with licensed clinical psychologists. Sessions were counterbalanced and followed a semi-structured protocol. The VR condition used Oculus Quest 2 headsets with avatar-mediated interaction in a virtual counselling environment. We collected subjective data via validated instruments including the Session Evaluation Questionnaire (SEQ), Working Alliance Inventory-Short Form (WAI-SF), and custom engagement scales. Qualitative data were collected through open-ended written feedback and analysed using thematic analysis. Physiological data included continuous heart rate measured via photoplethysmography and electrodermal activity (EDA) recorded throughout each session. Statistical analyses employed paired t-tests and Wilcoxon signed-rank tests for within-subject comparisons. 
Results: While participants initially rated face-to-face sessions as more appropriate (mean 7.63, SD 1.42 vs mean 6.80, SD 1.98; Z=2.541, P<.05) and reported feeling better immediately after in-person sessions (mean 7.00, SD 1.83 vs mean 6.03, SD 2.31; Z=2.585, P<.05), there were no significant differences in willingness to continue therapy (P>.05) or recommendation likelihood (P>.05) between modalities. Qualitative analysis revealed that 73% of participants reported greater self-disclosure in VR sessions, with thematic analysis identifying that avatar-mediated interaction reduced social anxiety and facilitated openness, particularly through the psychological distance and reduced self-consciousness afforded by virtual representation. Physiological measures showed no significant differences in heart rate (mean 85.67, SD 12.85 vs mean 83.12, SD 12.45; P>.05) or skin conductance levels (mean 9.47, SD 3.67 vs mean 9.18, SD 3.54; P>.05) between conditions, indicating comparable emotional arousal during initial therapeutic encounters. Therapist-rated alliance scores were equivalent across modalities (P>.05). Conclusions: VR-mediated therapy achieved therapeutic engagement levels comparable to face-to-face sessions during initial encounters, with unique advantages for facilitating self-disclosure among certain clients. While traditional therapy remains preferred for immediate comfort, VR demonstrates viability as a complementary digital mental health intervention. These findings support the integration of immersive technologies in mental healthcare delivery, particularly for populations who may benefit from the psychological distance afforded by avatar-mediated interaction. Future research should explore optimal client-technology matching and long-term therapeutic outcomes in VR-delivered interventions.
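The within-subject comparisons above relied on paired t-tests and Wilcoxon signed-rank tests. As a minimal, self-contained sketch of the signed-rank statistic with its large-sample normal approximation (illustrative data only, not the study's dataset):

```python
from math import sqrt
from statistics import NormalDist

def wilcoxon_signed_rank(paired_a, paired_b):
    """Wilcoxon signed-rank test with normal approximation.
    Returns (W, z, two-sided p). Zero differences are dropped;
    tied absolute differences receive average ranks."""
    diffs = [a - b for a, b in zip(paired_a, paired_b) if a != b]
    n = len(diffs)
    # Rank absolute differences, averaging ranks across ties
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank over positions i..j
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean_w = n * (n + 1) / 4
    sd_w = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mean_w) / sd_w
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return w_pos, z, p
```

With n = 30 paired sessions, the normal approximation used here is standard; exact-distribution variants differ mainly for very small samples.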
Background: In Nigeria, children with Autism Spectrum Disorder (ASD) often miss out on early intervention due to a massive shortage of specialists and deep-seated cultural stigma. Pre-primary teachers are ideally positioned to act as "first detectors," yet they frequently lack the professional confidence and digital tools required to navigate the screening process. Objective: This study examined whether a custom-built tool called Ayàtọ̀, a responsive web-based system, could bridge this gap. Specifically, we asked whether providing teachers with "digital scaffolding" would improve their knowledge, their confidence (self-efficacy), and their attitudes toward including autistic children in their classrooms. Methods: We conducted a quasi-experimental study with 128 pre-primary teachers in Lagos who completed the full intervention. Half the group used the Ayàtọ̀ platform alongside clinical-pedagogical training to assist in the screening process, while the other half served as a control. We tracked their progress using pre-test and post-test assessments, focusing on how the digital tool supported their ability to identify ASD traits in real time. Results: Significant improvements were observed across all outcomes. Teachers in the intervention group demonstrated higher adjusted post-test scores compared to controls: knowledge (M = 6.82 vs. 2.72, F = 135.35, p < .001, partial η² = 0.520), self-efficacy (M = 32.39 vs. 27.49, F = 79.37, p < .001, partial η² = 0.388), and attitudes (M = 56.90 vs. 44.23, F = 26.09, p < .001, partial η² = 0.173). Paired samples t-tests showed that the intervention group made large improvements in all three areas (p < .001), indicating that the combination of a digital platform and structured training significantly increased teachers' knowledge, confidence, and positive attitudes towards children with ASD.
Conclusions: Our findings suggest that closing the diagnostic gap does not always require more specialists; it requires better tools for the educators we already have. Ayàtọ̀ demonstrates that a "frugal," device-agnostic web system can turn a regular classroom teacher into a capable ally for early autism identification. This model offers a sustainable, culturally grounded path for inclusive education across Africa. Clinical Trial: This study was a quasi-experimental educational intervention conducted as part of a doctoral thesis at Lagos State University. As the research focused on task-shifting and capacity building among non-clinical educators in a real-world classroom setting, it was not prospectively registered in a clinical trial database.
However, the study underwent a rigorous 6-month administrative and ethical clearance process. Formal 'Approval to Proceed' was granted by the ACEITSE Research Committee following a successful pre-field defense. Furthermore, administrative gatekeeper permission was secured from the Permanent Secretary of the Public Service Office and the Board Chairman of the Lagos State Universal Basic Education Board (LASUBEB). The intervention was implemented under strict guidelines to ensure no disruption to classroom activities, following formal letters of introduction to the Educational Secretaries of all participating LGEAs.
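The partial η² values reported in the Results above can be cross-checked against the F statistics. A quick consistency sketch (the error df of 125 is inferred from N = 128 minus the group term and a pre-test covariate; the abstract does not state the dfs):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Reported F values and partial eta^2 for knowledge, self-efficacy, attitudes
for f, reported in [(135.35, 0.520), (79.37, 0.388), (26.09, 0.173)]:
    assert abs(partial_eta_squared(f, 1, 125) - reported) < 0.001
```

All three reported effect sizes are internally consistent with an error df of about 125, which matches the stated sample of 128 teachers.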
Background: Digital health platforms have the potential to expand access to HIV care by reducing geographic, social, and institutional barriers. However, among key populations such as men who have sex with men and transgender people living with HIV/AIDS in Nigeria, technology adoption is shaped by more than system functionality alone. Structural stigma, criminalisation, fear of disclosure, and limited health system access continue to constrain engagement with formal healthcare services. While teleconsultation and medication-delivery platforms offer alternative pathways to care, their acceptance and intended use within marginalised populations cannot be assumed and require empirical investigation. Objective: This study aimed to examine the factors influencing behavioural intention to adopt TechAids, a confidentiality-oriented digital health platform designed to support HIV consultation and service access among men who have sex with men and transgender individuals in Nigeria, using an extended Unified Theory of Acceptance and Use of Technology framework. Methods: A cross-sectional survey was conducted among 141 platform users and 27 healthcare providers, yielding a total sample of 168 participants. The study employed an extended UTAUT model incorporating Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions, alongside Trust and Perceived Stigma. Data were analysed using correlation analysis and ordinary least squares regression. Age and educational attainment were examined as potential moderating variables. Qualitative feedback was also collected to contextualise quantitative findings. Results: Facilitating Conditions emerged as the strongest predictor of behavioural intention to use the platform (β=0.813, p<0.001), explaining a substantial proportion of variance in adoption intention (R²=0.652). Age demonstrated a modest but statistically significant positive effect on behavioural intention (β=0.105, p=0.026). 
Contrary to theoretical expectations, Performance Expectancy, Effort Expectancy, Social Influence, Trust, and Perceived Stigma did not significantly predict intention to adopt the platform. Qualitative feedback highlighted practical infrastructure-related concerns, including reliable internet access, offline functionality, and availability of technical support, as more salient than psychological or feature-based considerations. Conclusions: In resource-constrained and highly stigmatised contexts, enabling infrastructure and practical support conditions may outweigh cognitive, social, and attitudinal determinants of technology acceptance. These findings suggest that successful digital health interventions for marginalised populations require not only privacy-conscious and user-centred design but also sustained investment in facilitating conditions that support real-world use. Designing technology for HIV care in such settings must therefore address structural and infrastructural barriers alongside behavioural and psychosocial factors.
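The headline predictor result above (β=0.813, R²=0.652) rests on ordinary least squares regression. As an illustrative single-predictor sketch of how the OLS slope and R² are computed (synthetic data; the study's full model included several predictors and moderators):

```python
def simple_ols(x, y):
    """Simple linear regression via closed-form OLS.
    Returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # sum of squares of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)          # total sum of squares
    slope = sxy / sxx
    intercept = my - slope * mx
    r_squared = (sxy * sxy) / (sxx * syy)          # explained variance share
    return slope, intercept, r_squared
```

In the single-predictor case R² is simply the squared Pearson correlation; in the multi-predictor model reported above, R² reflects the joint fit of all terms.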
The incidence of diabetes is outpacing the availability of primary care clinicians and endocrinologists. Individuals with diabetes see many health care professionals throughout their journey, but one clinician is primarily responsible for routine management. Responsibility for diabetes care must be shared between clinicians, allied health professionals, and persons with diabetes to ensure effective treatment and reduce the burden of disease. Innovative collaborative care models, systems to improve continuity of care (e.g., care reminders), and integration of technology like continuous glucose monitoring into personalized treatment plans have the potential to increase patient engagement and improve holistic, effective diabetes care. For innovative care models to succeed, barriers for both patients and clinicians need to be addressed to reduce therapeutic inertia, including appropriate training and education.
Background: Fitness and physical activity patterns are key predictors of cardiovascular disease. Traditionally, these factors have been assessed through participant self-report, which is prone to recall bias and inaccuracy. Smartphone-based monitoring provides a scalable and objective alternative for measuring physical activity, offering improved accuracy over conventional assessment methods. Objective: To evaluate the feasibility of smartphone-based cardiovascular research in the Netherlands and to examine associations between objectively measured physical activity, perceived activity, functional capacity, life satisfaction, and cardiovascular risk. Methods: Adults in the Netherlands were recruited via the MyHeart Counts iPhone app between August 2022 and December 2023. Within the app, participants completed surveys, passively shared motion sensor data, and were invited to perform a smartphone-based 6-minute walk test (6MWT). Perceived activity was compared with sensor-measured activity and actual activity (sensor-measured with supplemented self-reported unrecorded activity). Multivariable linear regression assessed associations between activity and 6MWT performance and between activity and life satisfaction. Perceived cardiovascular risk was compared with the difference between heart age and actual age. Results: Of 518 enrolled participants (median age 58 years; 72% female), 93% shared data beyond demographics. Median engagement duration was 27 days, and 58% completed at least one full consecutive week of motion tracking.
Perceived activity weakly correlated with both sensor-measured activity (ρ = 0.15, P = .01) and with actual activity (ρ = 0.15, P = .01). Median perceived activity was 3.5 hours/week, significantly higher than sensor-measured activity (0.9 hours/week; mean difference 2.9 hours, 95% CI 2.2–3.7; P < .001). In contrast, median actual activity was 3.2 hours/week and did not differ significantly from perceived activity (mean difference 0.7 hours, 95% CI −0.2 to 1.6; P = .11), indicating no significant over- or underestimation when unrecorded activity was accounted for.
Sensor-measured physical activity was associated with longer 6MWT distance (+10.1 m per hour; 95% CI 3.9–16.4, P = .002). No association was observed between sensor-measured activity and life satisfaction. Perceived cardiovascular risk correlated with the difference between heart age and actual age (ρ = 0.41; P < .001). Conclusions: Smartphone-based cardiovascular monitoring is feasible in a European adult population and yields valid functional correlates of physical activity. However, incomplete phone carriage substantially limits sensor-only activity estimates, underscoring the need for hybrid measurement strategies. These findings support the use of smartphone platforms for scalable cardiovascular research, while highlighting persistent challenges in engagement and measurement completeness.
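The correlations reported above (e.g., ρ = 0.15 for perceived vs. sensor-measured activity) are Spearman rank correlations. A minimal pure-Python sketch, computing ρ as the Pearson correlation of average ranks so that ties are handled (illustrative data only):

```python
def _ranks(values):
    """Average 1-based ranks, handling ties by averaging."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation applied to the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because it operates on ranks, ρ captures any monotone association, which is why it suits skewed activity-duration data better than Pearson's r.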
Background: Sexual and gender diverse (SGD) populations experience inequities in health and in health professional education. Mapping how inclusive pedagogical practices address sexual and gender diversity in pre‑registration nursing and midwifery education will clarify current approaches and gaps. Objective: To map how sexual and gender diversity is addressed through inclusive pedagogical practices in pre-registration nursing and midwifery education globally. Methods: This protocol follows the JBI guidance for scoping reviews and the PRISMA‑ScR reporting checklist. We will search CINAHL, MEDLINE, PubMed, Scopus, and Web of Science, plus targeted grey literature. Two reviewers will independently screen titles/abstracts and full texts and chart data using a piloted form. Data extraction will include pedagogical practices addressing sexual and gender diversity (teaching strategies, curriculum content, simulation, assessment), theoretical or conceptual frameworks, outcomes or indicators (knowledge, attitudes, skills, confidence, learning environment), and any identified equity considerations (attention to gender identity and sexual orientation, intersectionality). Synthesis and presentation of findings will be completed using the Patterns, Advances, Gaps, Evidence for practice, Research (PAGER) recommendations framework. The planned start for performing the scoping review is May 2026. Results: Results are expected to inform curriculum development and guide future systematic reviews. Conclusions: This review will provide a comprehensive map of inclusive pedagogical practices addressing sexual and gender diversity in pre‑registration nursing and midwifery education, identify gaps, and inform curriculum development and educator training. Clinical Trial: The review will adhere to PRISMA‑ScR. The protocol has been registered on the Open Science Framework (OSF).
Background: Type 2 respiratory failure (T2RF) is a common and high-risk presentation in the emergency department (ED). Non-invasive positive pressure ventilation (NIPPV) is a standard first-line therapy for T2RF, particularly in acute exacerbations of chronic obstructive pulmonary disease and cardiogenic pulmonary edema. However, NIPPV has limitations including discomfort, claustrophobia, air leaks, skin injury, and aspiration, which may compromise tolerance and lead to treatment failure or escalation to endotracheal intubation. High-velocity nasal insufflation (HVNI), particularly via a single-prong asymmetric cannula configuration, has been proposed to enhance dead-space washout and improve patient comfort. Despite growing interest, robust evidence evaluating HVNI in heterogeneous, all-cause T2RF populations in the ED remains limited. Objective: This study aims to determine whether HVNI delivered via a single-prong nasal cannula is non-inferior to standard NIPPV in improving ventilation among adult ED patients with T2RF from any cause. Methods: This is a single-center, open-label, non-inferiority randomized controlled trial conducted in the ED of a tertiary academic center. Adults 21 years and above with T2RF defined as PaCO2 >45 mmHg and pH <7.35, requiring ventilatory support, will be randomized 1:1 to HVNI with single-prong cannula or standard NIPPV. Allocation will be concealed via an independent web-based platform using variable block sizes (4 and 6). The primary outcome is percentage change in PaCO2 from baseline to 60 minutes after initiation of therapy. A sample of 84 patients provides 80% power at a one-sided alpha of 2.5%, assuming a standard deviation of 6.65% and a non-inferiority margin of 4.3%. Analyses will use intention-to-treat and per-protocol approaches, with sensitivity analyses to address missing arterial blood gas results.
Predefined failure criteria (persistent or worsening acidosis, rising PaCO2, refractory hypoxemia, severe intolerance, clinical deterioration) will trigger crossover or escalation per protocol. Results: Recruitment commenced in 2026 at the study site and is ongoing. Data analysis will be undertaken following completion of enrollment. Conclusions: Should HVNI with a single-prong cannula prove non-inferior to NIPPV, it may offer a practical alternative for treating T2RF in the ED when mask-based interfaces are poorly tolerated or contraindicated, potentially reducing the need for more invasive interventions such as endotracheal intubation and mechanical ventilation. Clinical Trial: This trial was prospectively registered on 15 July 2025 with ClinicalTrials.gov (#NCT07065656).
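The sample-size statement in the Methods above can be reproduced from the standard two-arm non-inferiority formula for comparing means. A sketch assuming equal 1:1 allocation and a true between-group difference of zero (the step from 76 evaluable patients to the stated 84 presumably reflects an attrition allowance not given in the abstract):

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n_per_arm(sd, margin, alpha_one_sided=0.025, power=0.80):
    """Per-arm sample size for a two-arm non-inferiority comparison of means,
    assuming the true difference is zero:
    n = 2 * (z_{1-alpha} + z_{power})^2 * sd^2 / margin^2"""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha_one_sided)   # 1.96 for one-sided alpha 2.5%
    zb = z.inv_cdf(power)                 # 0.84 for 80% power
    return ceil(2 * (za + zb) ** 2 * sd ** 2 / margin ** 2)

n_per_arm = noninferiority_n_per_arm(sd=6.65, margin=4.3)  # 38 per arm, 76 evaluable
```

With the stated SD (6.65%) and margin (4.3%), this yields 38 per arm, consistent with the trial's 84-patient target once attrition is allowed for.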
Background: Knee osteoarthritis (KOA) is a prevalent degenerative joint disorder that causes chronic pain and functional impairment, particularly among middle-aged and older adults. In early-stage KOA, patients often exhibit reduced periarticular muscle strength and compromised joint function. Without timely intervention, the condition may progress to irreversible structural damage. In Traditional Chinese Medicine (TCM), this stage frequently corresponds to the Liver-Kidney Deficiency pattern, characterized by insufficient nourishment of tendons and bones and diminished muscular function—representing a critical window for therapeutic intervention. Herbal fumigation and Blood Flow Restriction Training (BFRT) each offer advantages in improving local circulation and enhancing muscle strength, respectively; however, clinical evidence for their combined application remains insufficient. Objective: This study aims to systematically evaluate the clinical efficacy and safety of Zhang’s Formula No. 2 fumigation combined with BFRT in middle-aged and elderly patients diagnosed with early-stage KOA and TCM-defined Liver-Kidney Deficiency syndrome. Methods: This is a prospective, single-center, randomized, single-blind, four-arm, parallel-group controlled clinical trial. A total of 140 patients with KOA meeting both the Kellgren-Lawrence grade I–II radiographic criteria and the TCM diagnostic criteria for Liver-Kidney Deficiency syndrome (as outlined in the Guiding Principles for Clinical Research on New Chinese Medicines) will be enrolled. Participants will be randomly assigned in a 1:1:1:1 ratio to one of four groups: a control group (usual care), a fumigation group, a BFRT group, or a combined intervention group. The intervention period will last 8 weeks, with follow-up assessments conducted at week 12. The primary outcome is the total score of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC).
Secondary outcomes include: pain intensity measured by the Visual Analogue Scale (VAS), TCM syndrome severity score, the 36-Item Short Form Health Survey (SF-36), knee extensor muscle strength, and the Clinical Global Impression of Change (CGI-C). All outcomes will be assessed by blinded evaluators at baseline (week 0) and at weeks 4, 8, and 12. Results: Participant recruitment is scheduled to begin in January 2026, with enrollment and interventions expected to be completed by June 2027. Data collection will conclude by October 2027, and primary statistical analysis and manuscript preparation are anticipated in early 2028. Conclusions: Through a rigorously designed randomized controlled trial, this study will investigate the feasibility, efficacy, and safety of an integrative intervention combining TCM external therapy (herbal fumigation) with modern low-load resistance training (BFRT) in patients with early-stage KOA and Liver-Kidney Deficiency. The findings are expected to inform clinical practice and support further research into non-pharmacological, multimodal strategies for managing early KOA within an integrative medicine framework. Clinical Trial: International Traditional Medicine Clinical Trial Registry (ChiCTR), Registration Number: ChiCTR2200066875. Date Registered: December 15, 2025.
https://itmctr.ccebtcm.org.cn/
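The 1:1:1:1 assignment described in this protocol is commonly implemented with permuted blocks. A hypothetical sketch (a block size of 4, one slot per arm per block, is our assumption; the protocol does not state its blocking scheme), which guarantees exact balance across the four arms:

```python
import random

ARMS = ["control", "fumigation", "BFRT", "combined"]

def permuted_block_sequence(n_participants, seed=None):
    """Allocation sequence built from permuted blocks of size 4,
    giving exact 1:1:1:1 balance after every complete block."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ARMS[:]          # one slot per arm
        rng.shuffle(block)       # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]
```

With 140 participants (35 complete blocks), each arm receives exactly 35 patients; in practice the sequence would be generated once by an independent statistician and concealed from enrolling clinicians.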
This feasibility study of daily voice-based AI health monitoring among 14 elderly Japanese adults (mean age 71 years), embedded in an 8-week group-based longevity intervention, demonstrated 73% median daily engagement and generated 4,876 qualitatively rich conversational messages. The findings suggest that voice AI succeeds not by replacing human connection but by mediating it within social support structures.
Background: Multidomain dementia-prevention interventions delivered via apps have the potential to reach large populations. However, existing trials have tended to recruit more socioeconomically advantaged participants, raising concerns that the resulting interventions may be less usable or engaging for some groups of older adults, particularly those from minority ethnic, lower educational, or lower socioeconomic backgrounds, who are at higher risk of dementia. ENHANCE was designed to address this by prioritising accessibility and engagement across diverse user groups, with the goal of developing an intervention that is acceptable and effective for all. Objective: This study evaluated the usability and user experience of the ENHANCE prototype during a one-week at-home supported-use test and explored factors influencing engagement among older adults. Methods: We purposively recruited 10 adults aged 60–80 years without dementia for a one-week mixed-methods usability evaluation, consistent with recommended sample sizes for identifying major usability issues. Participants were recruited through community settings including groups underrepresented in dementia-prevention trials and had at least one of 10 pre-specified dementia risk factors. They attended a face-to-face onboarding session with a coach, used the ENHANCE app at home for seven days with ongoing coach support, and completed a post-test interview and an eight-item satisfaction survey. We analysed quantitative data descriptively, including app usage metrics against prespecified minimum-use targets and satisfaction survey responses, alongside reflexive thematic analysis of qualitative data from onboarding sessions, post-test interviews, coaching calls, and in-app messages. Results: Participants represented a wide range of neighbourhood deprivation (Index of Multiple Deprivation deciles 1–8), with four from ethnic minority backgrounds.
All met prespecified minimum-use targets (watching a module video, completing a check-in, and playing assigned games at least once), and many demonstrated additional voluntary engagement (e.g., repeated gameplay, video rewatching, and use of messaging or phone support). Survey responses indicated high satisfaction, perceived usefulness, and ease of use; 90% intended to continue using the app and 80% would recommend it to peers. Qualitative analysis identified engagement facilitators, including rewarding game design supporting trial-and-error learning, familiar interfaces and game conventions, appropriately challenging gameplay, consistent virtual rewards, trusted expert information combined with peer stories, and coach support with hands-on practice and follow-up. Barriers included unclear visual cues, limited accommodation of motor or sensory impairments, and visual discomfort in some games, highlighting targets for refinement. Conclusions: Older adults recruited via community settings serving underrepresented groups found the ENHANCE prototype usable, acceptable, and engaging over one week of supported at-home use. Participants highlighted human coaching, inclusive design, and integration of expert and peer narratives as key drivers of engagement. These findings support further feasibility testing to examine longer term engagement and provide design insights to inform development of more inclusive digital health interventions. Clinical Trial: ISRCTN17060879
Background: Acupuncture is one of the most commonly used interventions for pain management, particularly within traditional Chinese medicine (TCM) and Korean medicine (KM). It is a therapeutic technique that involves the insertion of fine needles into anatomically defined points on the body. Accurately locating and needling these acupoints is inherently difficult due to the subtle anatomical variations and the precision required in point identification. Objective: This scoping review maps the landscape of immersive Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), collectively termed extended reality (XR), in acupuncture education, aiming to address the safety and standardization limitations of traditional training. Methods: We conducted a comprehensive search across six databases, including PubMed and CNKI, for studies applying immersive XR technologies to acupuncture training. Twelve primary studies published between 2008 and 2025 were selected for analysis. Results: VR and MR were primarily utilized for visualizing needling depth and internal anatomy, whereas AR demonstrated high utility for surface acupoint localization. Recent advancements include markerless tracking and haptic feedback integration, although challenges regarding hardware accessibility and tactile realism persist. Conclusions: Immersive XR technologies provide a safe and interactive environment for standardized acupuncture skills acquisition. Future development should focus on enhancing haptic fidelity and expanding anatomical scope to better bridge the gap between virtual training and clinical practice.
Health digital twins, computational models that integrate longitudinal data, simulation, and forecasting, are increasingly proposed as tools for chronic care management. Most current implementations, however, are expert-oriented, prioritizing technical optimization and clinical prediction while offering limited support for patient understanding, engagement, or participation. This orientation is particularly misaligned with chronic care, which unfolds largely outside clinical settings and depends on patients’ daily decisions, social context, and sustained engagement over time.
In this Viewpoint, we argue for reframing digital twins as participatory systems that support shared sensemaking among patients, caregivers, and clinicians, rather than functioning solely as directive, expert-facing tools. We propose a conceptual framework that positions participatory digital twins as boundary objects capable of bridging computational models, clinical reasoning, and lived experience. Within this framework, generative artificial intelligence serves as a translation and interaction layer, enabling plain-language dialogue, exploration of uncertainty, and “what-if” reasoning that allows users to interpret model outputs in relation to their own contexts, goals, and constraints.
We outline key design principles for participatory digital twins, including visible uncertainty, negotiated rather than prescriptive care, mechanisms for incorporating patient context and social drivers of health, and governance structures that support accountability and recourse. By shifting the focus from optimization alone to understanding, interaction, and trust, participatory digital twins offer a pathway toward more equitable, human-centered, and sustainable models of AI-enabled chronic care.
Background: Stroke is a leading cause of long-term disability and often transfers substantial care responsibilities to family and informal caregivers. These demands contribute to multidimensional caregiver burden and reduced quality of life (QoL), including psychological distress, social limitations, and financial strain. Digital health interventions—such as mobile applications, messaging-based education, telehealth, and web-based platforms—have the potential to extend caregiver support beyond conventional face-to-face services; however, evidence regarding their impact on caregiver QoL remains heterogeneous. Objective: This scoping review aimed to map and characterize digital health interventions used in stroke caregiving and to summarize their associations with caregiver QoL–related outcomes, including caregiver burden, psychological well-being, empowerment or capability, usability, and access. Methods: A scoping review was conducted in accordance with Joanna Briggs Institute (JBI) guidance and reported following PRISMA-ScR. Searches were performed in PubMed, Scopus, Web of Science, CINAHL, and Google Scholar for English-language studies published between 2019 and 2025. Two reviewers independently screened studies and extracted data using a standardized charting form. Evidence was mapped descriptively by intervention type, delivery characteristics, study design maturity, and caregiver outcome domains. Results: From 676 identified records, 20 studies met the inclusion criteria. Digital interventions were primarily delivered through mobile applications, WhatsApp-based education, telehealth services, or web-based learning platforms. Direct caregiver-focused studies commonly assessed caregiver burden, psychological distress, and caregiving capability, while system-integrated mHealth programs mainly reported patient outcomes with indirect relevance to caregivers. 
Overall, digital education and follow-up support were associated with reduced caregiver burden and improved caregiver capability and emotional well-being, although outcome measures and follow-up durations varied. Usability, digital literacy, affordability, and connectivity were recurrent barriers. Conclusions: Digital health interventions show promise in improving caregiver QoL in stroke care, particularly through structured education and ongoing support. Future studies should emphasize rigorous caregiver-centered trials, standardized QoL measures, longer follow-up, and inclusive designs addressing digital equity.
Background: Digital technologies have the potential to support proactive identification of early signs of medicine-related harms, including changes in sleep, physical activity, and cognition. The use of a centralised digital platform to support pharmacists in monitoring longitudinal health data and detecting medicine-related harms in residential aged care has not been evaluated. Objective: To develop and assess the feasibility of a digitally enabled pharmacist service to monitor signs and symptoms of medicine-related harms in residential aged care. Methods: The study was conducted in two phases. In Phase I, the establishment phase, health and medication data from participants’ records were exported into the TeleClinical Care (TCC-ADEPT) digital platform. Phase II comprised a 12-week feasibility study with assessments conducted at baseline, 4 weeks, 8 weeks, and 12 weeks. During this phase, the on-site residential aged care pharmacist monitored all participants using the centralised TCC-ADEPT platform.
The digital technology intervention included collection of digital biomarkers to supplement information from patient care records and medication charts, with subsequent display as longitudinal visualisations of change in residents’ health and medicine use on a cloud-based monitoring platform, TeleClinical Care. The aged care pharmacist monitored residents’ clinical, medicine, sleep, and physical activity data to identify signs and symptoms of medicine-related harms using the centralised platform and notified the residents’ general practitioners when necessary.
The RE-AIM framework was used to evaluate the feasibility of the digitally informed pharmacist service. Assessments included service reach, changes in resident symptom scores as measured by the Edmonton Symptom Assessment Scale, medicine use, number of adverse events, cognitive scores as measured by the Montreal Cognitive Assessment, sleep and physical activity as measured by a sleep sensor and accelerometer, number and types of pharmacist recommendations to general practitioners (GPs), and qualitative interviews. Results: Twenty-nine participants were enrolled in the study, with 27 completing the 12-week assessments. The average age was 86 years, and 65% were female. There was a significant decrease in the total number of adverse events at 12 weeks compared to baseline (45 at baseline, 27 at 12 weeks; p=0.006). There were no significant differences in changes in symptom scores, medicine use, cognitive scores, sleep, and physical activity. Overall, the pharmacist made 25 recommendations to the participants’ GPs, with just over half (n=13, 52%) being implemented.
Five residents, one family member, the on-site pharmacist, three staff members, and two members of senior management were interviewed to understand their views of the pharmacist service as well as facilitators and barriers to its delivery. Overall, participants reported positive views of the service, and senior management indicated an intention to continue using the service. Conclusions: Our findings suggest that the digitally informed pharmacist service is feasible and has the potential to reduce adverse events due to medicines within the aged care setting. Clinical Trial: ACTRN12623000506695
Background: Early detection of adolescent idiopathic scoliosis (AIS) is critical for timely intervention and optimal clinical outcomes. Conventional radiography, the current reference standard, is poorly suited for large-scale screening because of cumulative ionizing radiation exposure and concerns related to privacy and patient acceptability. Objective: This study aimed to evaluate the diagnostic accuracy of a millimeter-wave imaging system for scoliosis screening, using radiographic Cobb angle measurements as the reference standard. Methods: In this prospective diagnostic accuracy study, 132 consecutive pediatric outpatients (aged 6-23 years) with suspected scoliosis underwent a 2-second millimeter-wave scan of the back without removing clothing, followed by standard standing full-spine radiography. Scoliosis was defined as a Cobb angle of ≥10°. Millimeter-wave images were evaluated for established morphological indicators of spinal asymmetry, including shoulder height asymmetry, trunk lateral shift, waistline contour asymmetry, and lower limb height discrepancy. Diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated in accordance with STARD reporting guidelines. Participants older than 18 years were included to reflect real-world outpatient screening practice. Results: Radiographic assessment identified scoliosis in 98 of 132 participants (74.2%). Millimeter-wave imaging achieved an overall accuracy of 86.4% (95% CI 76.5-94.7), with a sensitivity of 85.7% (95% CI 75.1-96.5) and a specificity of 88.2% (95% CI 70.7-97.6). All scans were completed within 2 seconds and maintained full patient privacy. Conclusions: Millimeter-wave imaging is a feasible, rapid, and nonionizing modality for scoliosis screening. 
Its high sensitivity supports its use as a first-line screening tool in school and outpatient settings, enabling targeted referral for confirmatory radiography while adhering to the ALARA principle.
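The diagnostic accuracy figures reported above all derive from a single 2x2 confusion matrix. As a minimal sketch, the counts below are hypothetical, back-calculated to match the reported sensitivity (85.7%), specificity (88.2%), and case split (98 positive, 34 negative); the abstract does not list the raw confusion matrix.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts consistent with 98 scoliosis-positive and
# 34 scoliosis-negative participants (n=132):
m = diagnostic_metrics(tp=84, fp=4, fn=14, tn=30)
```

With these counts, accuracy works out to 114/132, about 86.4%, matching the headline figure.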
Background: Despite the high accuracy of machine learning models in predicting diabetes, clinical adoption remains limited due to the "black-box" nature of advanced algorithms. In regional healthcare contexts like Ethiopia, fostering clinician trust is essential for the successful implementation of AI-driven tools. Objective: This study aims to develop a trustworthy Clinical Decision Support System (CDSS) for diabetes prediction by operationalizing the Asan et al. (2020) trust framework, focusing on Ability, Integrity, and Benevolence, through the integration of Explainable AI (XAI) techniques. Methods: A multi-national dataset of clinical biomarkers was utilized. To ensure model Ability, we employed a robust preprocessing pipeline including standardization and SMOTE for class balancing. Five architectures were compared: Logistic Regression, Random Forest (RF), Gradient Boosting, XGBoost, and LightGBM. To establish Integrity, this research utilized SHAP (SHapley Additive exPlanations) for global and local transparency. To demonstrate Benevolence, this research applied Diverse Counterfactual Explanations (DiCE) based on Miller’s (2017) theory of contrastive explanation. Results: The Random Forest model emerged as the superior architecture, achieving high macro-averaged AUC and F1-scores. SHAP global analysis validated model integrity by identifying HbA1c, Age, and BMI as the primary diagnostic drivers, aligning with international clinical guidelines. DiCE generated patient-specific "what-if" scenarios, providing clinicians with actionable targets for lifestyle intervention. Preliminary evaluation suggests that providing both "why" (SHAP) and "how to change" (DiCE) explanations significantly enhances perceived clinician trust compared to standard accuracy-only outputs. The final model was deployed as an interactive Streamlit application. Conclusions: Integrating SHAP and counterfactual analysis transforms predictive AI into a prescriptive clinical partner. 
By providing actionable insights that clinicians can relate to their medical knowledge, this integration fosters a culture of XAI scrutiny among clinicians, paving the way for more transparent and patient-centered digital health interventions. Clinical Trial: N/A
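The "how to change" counterfactual idea can be illustrated without the DiCE library. The sketch below uses a hand-written logistic risk function with illustrative coefficients (not the study's fitted model) and a simple greedy search over one actionable feature; DiCE itself uses a more sophisticated optimization to produce diverse counterfactuals.

```python
import math

def risk(hba1c, bmi, age):
    """Toy logistic diabetes-risk score; coefficients are illustrative only."""
    z = 1.1 * (hba1c - 6.5) + 0.08 * (bmi - 25) + 0.02 * (age - 50)
    return 1 / (1 + math.exp(-z))

def counterfactual_hba1c(patient, threshold=0.5, step=0.1):
    """Greedily lower HbA1c until predicted risk drops below the threshold.
    BMI and age are held fixed, mimicking immutable or slow-moving features."""
    hba1c = patient["hba1c"]
    while risk(hba1c, patient["bmi"], patient["age"]) >= threshold and hba1c > 4.0:
        hba1c -= step
    return round(hba1c, 1)

patient = {"hba1c": 7.8, "bmi": 31.0, "age": 58}
target = counterfactual_hba1c(patient)  # "what-if" HbA1c goal for this patient
```

The returned target plays the role of a patient-specific, actionable "what-if" scenario of the kind the abstract describes.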
Background: Frailty is a multidimensional clinical syndrome characterized by diminished physiologic reserve and increased vulnerability to stressors, putting older adults at higher risk of adverse outcomes (e.g., falls, mental and physical disability, hospitalization, mortality) in response to even minor stress events. Frailty can be reversed or at least attenuated if detected early, yet early identification remains challenging in primary care due to time- and resource-intensive assessment methods. Artificial intelligence (AI) offers promise in automating frailty identification at the point of care. Natural Language Processing (NLP) is particularly valuable for extracting frailty indicators from rich text data stored in electronic health records, but its limited interpretability has prompted growing interest in augmenting NLP processes with explainable AI (XAI) techniques. Although NLP and XAI methods have been applied for chronic disease identification, their use for frailty identification has not yet been systematically examined. Objective: This scoping review aimed to synthesize current evidence on the use of NLP and XAI methods for automating frailty identification in older adults. Methods: Peer-reviewed studies published in English between January 2015 and November 2025 were eligible if they applied AI, NLP, or XAI methods to identify frailty in adults aged ≥50 years using real-world health data from OECD or OECD-partner countries. Searches were performed in PubMed and Google Scholar and supplemented by screening the bibliographies of identified studies. Data were extracted using a standardized form that captured study characteristics, sample size, data sources, and specific aspects of the AI models and the NLP and XAI methods used. Results: We identified 24 studies that satisfied the eligibility criteria. While all studies used AI approaches to identify frailty, only six used neural network-based models. 
Logistic regression was the most frequently used AI method (n=14), and only one study employed Bidirectional Encoder Representations from Transformers (BERT). Seven studies relied on both structured and unstructured data, two relied exclusively on structured data, and the rest relied exclusively on unstructured data. Seven studies used NLP methods, seven used XAI methods, and only one integrated both. Only two studies reported deploying their models in real clinical settings. Conclusions: AI-based approaches show promise for automating frailty identification, yet current applications remain limited by reliance on traditional machine learning models, underuse of NLP and XAI methods, and very little real-world deployment. Future work should focus on developing explainable NLP models, facilitating access to large volumes of unstructured data, and developing standardized frameworks for the systematic evaluation of NLP and XAI methods. Coordinated efforts across clinical, technical, and regulatory domains are essential to develop scalable, transparent, and clinically meaningful AI systems for frailty identification.
Background: Anticoagulated patients with atrial fibrillation (AF) face significant bleeding risks, which current risk scores inadequately predict. Pulse pressure (PP), a marker of arterial stiffness, may offer additional prognostic value. Objective: This study aimed to evaluate whether elevated PP independently predicts major bleeding events. Methods: We conducted a retrospective cohort study using electronic health records from 4,935 AF patients on oral anticoagulation (2010–2019) in the REACHnet network. PP was calculated from outpatient blood pressure readings and analyzed in tertiles and as a continuous variable. Kaplan-Meier curves and log-rank tests were used to assess the association between PP and clinical outcomes. Cox regression models further adjusted for demographics, comorbidities, systolic blood pressure, medications, and the ORBIT bleeding score. Results: Over a median 5-year follow-up, 677 patients (13.7%) experienced major bleeding. GI bleeding was significantly more frequent in the highest PP tertile (p = 0.007), while intracranial and other bleeding types showed no significant differences. Each 10 mmHg increase in PP was associated with an approximately 15% higher risk of GI bleeding (HR 1.014 per mmHg; p = 0.042), and this association remained significant after adjusting for systolic blood pressure and the ORBIT score (OR: 1.013 per mmHg; p = 0.028). PP was not significantly associated with intracranial, other, or overall bleeding. Conclusions: Pulse pressure independently predicts gastrointestinal bleeding in anticoagulated AF patients, even after accounting for traditional bleeding risk factors. These findings support the inclusion of PP in future risk stratification models and clinical monitoring strategies. Clinical Trial: N/A
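The per-mmHg hazard ratio and the roughly 15% increase per 10 mmHg quoted above are two views of the same estimate: under a Cox proportional hazards model, a per-unit hazard ratio scales multiplicatively with the size of the increment. A quick check, using only the figure reported in the abstract:

```python
# Under a proportional hazards model, the HR for a k-unit increase in a
# continuous predictor is the per-unit HR raised to the k-th power.
hr_per_mmhg = 1.014                           # reported per-mmHg HR for GI bleeding
hr_per_10mmhg = hr_per_mmhg ** 10             # ~1.149
excess_risk_pct = (hr_per_10mmhg - 1) * 100   # ~15% higher hazard per 10 mmHg
```

This is why a seemingly tiny per-mmHg ratio still translates into a clinically meaningful difference across a 10 mmHg span.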
Background: Bipolar disorder (BD) is a complex and heterogeneous psychiatric condition, characterized by fluctuating clinical courses, that affects approximately 1-2% of the global population. Despite pharmacological advances, treatment response varies significantly among patients, making the identification of individualized treatment strategies a major challenge. Recently, artificial intelligence (AI) has emerged as a powerful approach in precision psychiatry to identify subtle patterns in complex data and inform personalized clinical decisions. Objective: To provide a structured synthesis of current evidence on AI-supported treatment optimization across the BD spectrum. Methods: This systematic review was conducted in accordance with the PRISMA 2020 guidelines. Four databases (PubMed, Web of Science, Scopus, and EMBASE) were searched for original studies published after 2015 on the application of AI in the treatment of BD in adult patients. The methodological quality, risk of bias, and clinical applicability of the predictive models were assessed using the PROBAST+AI tool. Results: A total of 35 studies were included, divided into three main categories: (1) treatment response prediction, focused primarily on lithium response, with accuracies up to 100% in multimodal models; (2) relapse risk prediction, where models demonstrated feasibility in predicting relapses and rehospitalizations with AUCs between 65% and 85%; and (3) patient stratification, used to identify clinical subgroups and pharmacological profiles, with excellent predictive capabilities (AUCs up to 99%). However, the PROBAST+AI assessment revealed a high risk of bias in most studies, primarily due to data analysis limitations, small sample sizes, and lack of external validation. Conclusions: The adoption of AI tools in BD serves as a driver for therapeutic optimization, although current AI tools in BD should still be considered exploratory rather than ready for clinical use. 
Effective implementation in real-world clinical scenarios requires more robust, transparent, and externally validated models to ensure reliability and generalizability.
Background: Lateral neck lymph node metastasis (LLNM) is a major determinant of recurrence risk and surgical strategy in papillary thyroid carcinoma (PTC). However, accurate preoperative identification of LLNM remains challenging, as conventional imaging assessment is limited by operator dependency and variable diagnostic performance. Although several predictive models have been proposed, many suffer from limited generalizability or poor interpretability, hindering their integration into clinical decision-making. Objective: This study aimed to develop and validate an interpretable machine learning (ML) model based on routine clinical and ultrasound data to predict LLNM risk in PTC patients. Methods: A retrospective cohort study enrolled 816 PTC patients (June 2022-May 2024), randomly split into training (n=571) and internal validation (n=245) sets at a 7:3 ratio, with an independent external validation cohort of 178 patients (June 2024-May 2025). Clinical, laboratory, and routine ultrasound data were collected. Feature selection employed a three-step approach: (1) univariate and multivariate logistic regression (LR) analysis, (2) the Boruta-SHAP algorithm for importance ranking, and (3) clinical expert validation to ensure clinical relevance. Nine ML models were developed, with hyperparameter tuning via grid search and 10-fold cross-validation. Model performance was evaluated using metrics such as the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and F1-score. The SHapley Additive exPlanations (SHAP) method was used for model interpretation. 
Results: Eight independent risk factors were identified: gender, multifocality, age, tumor diameter, tumor location, capsular invasion, central lymph node metastasis, and uneven lateral cervical lymph node hilum echo. The Gradient Boosting Machine (GBM) model demonstrated optimal performance with an AUC of 0.905 (95% CI: 0.868-0.942), sensitivity of 0.831, specificity of 0.840, and F1-score of 0.764 in internal validation. External validation confirmed robust generalizability (AUC: 0.887, 95% CI: 0.840-0.934). SHAP analysis revealed that tumor size, gender, lateral cervical lymph node echo, central lymph node metastasis, and capsular invasion were the top five contributors to high LLNM risk and provided individualized risk interpretation. Conclusions: This interpretable GBM model, based on routinely accessible clinical and ultrasound data, enables accurate preoperative LLNM risk stratification, supporting personalized decisions on the extent of lymph node dissection and potentially reducing unnecessary prophylactic surgery while ensuring adequate treatment for high-risk patients.
Background: Depression is prevalent among adolescents and young adults and often requires aftercare following inpatient treatment. Although effective outpatient aftercare exists, many patients face difficulties in maintaining treatment gains and remain without professional support after being discharged. Digital mental health interventions hold promise for bridging this care gap; however, evidence of their effectiveness is limited. Objective: This protocol outlines a study evaluating the effectiveness, cost-effectiveness, and implementation of a chatbot-assisted smartphone intervention (iCAN) designed to support youth following inpatient treatment for depression. Methods: This is a prospective, two-armed, mixed-methods randomized controlled trial. The targeted sample size is n = 368 patients aged 13–25 years with depressive disorders who were receiving inpatient treatment and, additionally, n = 18 healthcare providers. Participants in the intervention group received care as usual plus an e-coach-guided intervention for 90 days, whereas the control group received care as usual only. Assessments were conducted at baseline and 6 weeks, 3 months, and 6 months after randomization. The primary outcome is clinician-rated severity of depression symptoms. Secondary outcomes include remission rates, general psychopathology, quality of life, uptake of aftercare services, cost-effectiveness, mechanisms of action, acceptability, and usability. Outcome evaluation will use linear mixed models based on the intention-to-treat principle, and process evaluation will be conducted via content analysis. Results: We enrolled n = 228 patients from 31 hospitals across Germany between November 2023 and December 2024. Data collection (6-month follow-up) was completed in June 2025. Data analysis is currently in progress, with the first results expected to be published by the end of 2026. 
Conclusions: This study will provide evidence for the effectiveness and cost-effectiveness of a guided digital mental health intervention for post-discharge aftercare of youth treated for depression in an inpatient setting. It will offer insights into the implementation of the intervention in routine care. If proven effective, iCAN may serve as a blueprint for remote aftercare for young people with depression. Clinical Trial: This trial is registered in the German register for clinical trials (DRKS-ID: DRKS00032966)
Background: Mild cognitive impairment (MCI) is recognized as a critical stage for dementia prevention. Physical activity is an important intervention to prevent cognitive decline, but challenges remain in improving or maintaining cognitive function in older adults with MCI through increased physical activity. Personalized mobile health (mHealth) promotion strategies based on the Behaviour Change Wheel (BCW) hold promise for enhancing physical activity levels in this population. Objective: This study aims to evaluate the feasibility and preliminary effectiveness of a personalized mobile application (App) named ActiveAide, developed based on the BCW framework, for promoting physical activity among older adults with MCI. Methods: This feasibility study employed a single‑arm, pre‑ and post‑test design. Eighteen participants received an 8‑week personalized intervention via ActiveAide. Feasibility measures included recruitment rate, retention rate, App usage data, App usability evaluation, and user experience with the App. Effectiveness measures encompassed physical activity level, physical fitness, physical activity self‑efficacy, and social support. Quantitative data were analyzed using paired‑sample t‑tests and Wilcoxon signed‑rank tests, while qualitative data underwent content analysis. Results: The study achieved a recruitment rate of 90.9% and a retention rate of 90%. The mean strategy completion rate was 78.5%, with a mean number of App accesses of 71. The mean System Usability Scale (SUS) score was 74.86 ± 8.81, indicating good usability. Qualitative interviews identified three themes: strengths of ActiveAide, limitations of ActiveAide, and suggestions to improve ActiveAide. 
Post-intervention, statistically significant improvements were observed in participants’ physical activity level (P<0.001), physical activity self-efficacy (P<0.001), VO2max (P<0.001), strength assessment score (P=0.002), and body composition measures including total physical score (P<0.001), fat mass (P=0.001), and body fat percentage (P<0.001). No significant change was found in the level of social support. Conclusions: The personalized mHealth application ActiveAide, developed based on the BCW framework, demonstrated good feasibility and preliminary effectiveness in promoting physical activity among older adults with MCI. Future research could further optimize the application’s features and employ more rigorous designs, such as randomized controlled trials, to validate its long-term efficacy and generalizability.
Health data interoperability is the central hill climb in contemporary digital
health. Hospitals often accumulate data like mismatched spare parts, catalogued
inconsistently, and difficult to re-use across care. The landscape of non-annotated
source systems, legacy data warehouses that lack interoperable data models,
the coexistence of multiple terminologies with divergent scopes, the operational
turbulence of system migrations, and the persistent challenges of metadata catalogues
and versioning set a starting point to a journey in building a semantic layer
that makes data Findable, Accessible, Interoperable, and Reusable (FAIR), and
that remains robust as terminologies evolve. Terminology updates are complex
and terms, classifications, and regulations change continually. This viewpoint
article gives an exemplary historical overview from a Swiss university hospital,
highlights the relevance of key decisions and projects, and contrasts local conditions
with the Swiss and European context. It notes the perspectives of large clinical
information systems and highlights organizational implications, the tools and models
needed, and the challenge of legacy data. It dives into the project work of ontology
creation. The discussion reflects on achievements and the future, illustrating the
cadence and resilience required to ride interoperable data “around the world”.
Key Message. Achieving healthcare interoperability requires balancing diverse
standards, terminologies, and data governance. The FAIR principles provide a
framework. Organizational commitment to these practices is essential.
Background: Delirium superimposed on dementia is associated with poor outcomes yet remains underdetected in home settings. Current detection relies on face-to-face clinical assessment (e.g., the Confusion Assessment Method (CAM) criteria), which is rarely applied outside hospitals. Objective: This proof-of-concept study developed a theory-driven framework for detecting delirium-consistent anomalous patterns in home-dwelling people with dementia, using passive smart home sensor data. Methods: Individualized anomaly detection algorithms, including Isolation Forest and Long Short-Term Memory (LSTM) models, were applied to identify delirium-related anomalies within each participant. Predictor features consisted of theory-driven digital markers approximating key CAM criteria, including agitation, disrupted sleep–wake cycles, and disorientation (indexed by activity entropy), along with clinically relevant indicators such as physiological instability (early warning scores) and urinary tract infections. Multimodal smart-home sensor data from 17 individuals with dementia were analyzed. Results: Using matched thresholds, the Isolation Forest and LSTM models each identified 71 anomalies, with the Isolation Forest detecting a median of 10.2% anomalous days per individual and anomalies typically occurring in short temporal clusters; agreement between the methods was 17%. Feature importance analyses indicated that activity entropy, sleep quality, and early warning scores were the most influential features, with stronger inter-feature correlations observed during anomaly versus non-anomaly periods. Conclusions: This study demonstrates the technical feasibility of detecting delirium-related anomalies through passive smart home monitoring. While lacking ground-truth validation, the approach shows promise for early intervention in community settings. Future validation studies with clinically confirmed delirium labels are essential. Clinical Trial: not applicable
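The per-individual Isolation Forest step described above can be sketched with scikit-learn on synthetic data. The feature names mirror the study's digital markers, but the values, the contamination setting, and the planted anomaly are all hypothetical:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic daily features for one resident:
# [activity_entropy, sleep_quality, early_warning_score]
normal_days = rng.normal(loc=[2.0, 0.8, 1.0], scale=[0.2, 0.05, 0.5], size=(90, 3))
anomalous_day = np.array([[3.5, 0.3, 6.0]])  # agitated, poor sleep, high EWS
days = np.vstack([normal_days, anomalous_day])

# One model per individual, echoing the individualized design in the abstract.
model = IsolationForest(contamination=0.1, random_state=42).fit(days)
labels = model.predict(days)  # -1 flags an anomalous day, 1 a typical day
```

In a real deployment the flagged days would then be examined for short temporal clusters, as the study reports, rather than treated as isolated alerts.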
Background: Since 2011, Türkiye has become the primary destination for Syrian refugees. While healthcare is a fundamental human right, public discourse surrounding refugee health services can influence policy and social cohesion. Objective: The objective of our study was to examine 14 years of Turkish health-related discourse on platform X (formerly Twitter) to identify evolving sentiment, stance, and key grievances. Methods: From a dataset of 4.5 million tweets (2009-2022), 116,172 health-related posts were identified. We employed a fine-tuned Turkish BERT-based large language model to perform multi-task classification for sentiment, stance, and health topics. Tweets were categorized into five domains: Provision of Healthcare Services, Financing and Coverage, Human Resources, Public Health and Disease Prevention, and Access to Medications and Pharmaceutical Services. Lift scores and heatmaps were used to analyze the relationship between keywords and public attitudes. Results: The fine-tuned Turkish BERT model achieved high classification performance, with a weighted F1 score of 0.85 for sentiment and 0.80 for stance detection. Public discourse shifted from neutral or positive tones in 2011 to overwhelming negativity over time. By 2021, negative sentiment reached 79.9%, and anti-refugee stance peaked at 78.3%. Prominent topics evolved from Provision of Healthcare Services (47.5% in 2011) to Public Health and Disease Prevention (57.3% in 2021) and Human Resources (34.6% in 2022). High lift scores revealed that anti-refugee stances were strongly associated with keywords such as ‘appointment’, ‘vaccine’, and ‘free’. Conclusions: There is a marked and consistent rise in anti-refugee sentiment within Turkish digital health discourse, often fueled by misinformation and perceived systemic strain. 
Public health authorities should prioritize evidence-based communication strategies to counter digital polarization and ensure the legibility of health policies for the host population.
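The lift scores mentioned in the Methods above follow the standard association-rule definition, lift = P(keyword ∧ stance) / (P(keyword) · P(stance)), where lift > 1 indicates a keyword co-occurs with a stance more often than chance. A minimal sketch, using toy tweets and labels (not study data):

```python
def lift(tweets, keyword, stance):
    """Lift of a keyword co-occurring with a stance label.

    tweets: list of (text, stance_label) pairs.
    Returns P(keyword & stance) / (P(keyword) * P(stance)).
    """
    n = len(tweets)
    p_kw = sum(keyword in text for text, _ in tweets) / n
    p_st = sum(s == stance for _, s in tweets) / n
    p_both = sum(keyword in text and s == stance for text, s in tweets) / n
    return p_both / (p_kw * p_st)

# Illustrative toy corpus, not the study's data
tweets = [
    ("appointment delays again", "anti"),
    ("vaccine access for all", "pro"),
    ("free appointment for refugees?", "anti"),
    ("healthcare is a right", "pro"),
]
print(round(lift(tweets, "appointment", "anti"), 2))  # → 2.0
```

Here "appointment" appears only in anti-stance tweets, so its lift with the anti stance is well above 1.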
Background: Lumbar disc herniation (LDH) is one of the leading causes of low back and leg pain. Although percutaneous endoscopic lumbar discectomy (PELD) is a useful, minimally invasive surgical procedure, some patients experience persistent pain and functional impairment. Tuina, a Traditional Chinese Medicine (TCM) manipulative therapy, has been found effective in the conservative management of LDH, but high-quality evidence on its combined use during the perioperative period of PELD is not available. Objective: The main aim of the trial is to determine the clinical effectiveness and safety of Zheng's "Gu cuo feng, jin chu cao" manipulative therapy when used as an adjunct to percutaneous endoscopic lumbar discectomy (PELD) for single-level LDH. Methods: This is a protocol for a multicenter, parallel-group, randomized controlled superiority trial to be carried out in 4 Chinese hospitals. A total of 220 eligible patients with single-level LDH will be randomly assigned in a 1:1 ratio to the experimental group (Zheng's Tuina manipulative therapy before and after PELD) or the control group (PELD only). The primary outcome is change from baseline in the Oswestry Disability Index (ODI) score at 3 months after surgery. The Visual Analogue Scale (VAS) for pain, the SF-12 health survey, and the modified Macnab criteria are secondary outcomes. Outcome assessors and data analysts will be blinded to group allocation. Results: - Conclusions: The trial will offer rigorous evidence concerning the integration of Chinese and Western medicine in the treatment of LDH. If proven effective, this combined approach could improve functional recovery and pain management and establish a new standard of perioperative rehabilitation for this patient group. Clinical Trial: ITMCTR2025001254. 
Registered on International Traditional Medicine Clinical Trial Registry, 2025.
Background: Mental health disorders (MHDs) represent a growing global challenge and pose a significant risk to public health. Alongside developments in the field of large language models (LLMs), conversational mental health chatbots (CMHBs) have emerged and are increasingly being used by individuals in self-directed and independent ways to provide therapy and therapeutic support. While users’ perspectives on the use of CMHBs have been extensively examined and systematically synthesized, relatively little research has focused on how healthcare professionals (HCPs) perceive these tools. To develop a holistic understanding of the implications of CMHB use – including potential benefits, risks, and implementation barriers – it is essential to consider the perspectives of HCPs, who bring clinical expertise and psychological knowledge to the evaluation of mental health interventions. Accordingly, the objective of this review is to synthesize empirical evidence on HCPs’ perspectives regarding the use of CMHBs and to explore potential convergences and divergences between professional and user perspectives. Objective: This paper presents the protocol for a systematic review that aims to identify, synthesize, and critically appraise evidence on healthcare professionals’ perspectives regarding the use of CMHBs as tools for therapy or therapeutic support for individuals with MHDs, and to examine perceived benefits, barriers, and potential ethical concerns associated with their use. Methods: A systematic review of literature will be conducted in accordance with the PRISMA 2020 guidelines. Peer-reviewed qualitative, quantitative, and mixed-methods studies will be identified through searches of PubMed/MEDLINE, PsycINFO, Embase, CINAHL, Web of Science, and Scopus, with no restrictions on publication date. 
Study screening will be supported by AI-assisted active learning using ASReview, following the SAFE stopping procedure, with independent quality-assurance screening by a second reviewer. Data will be synthesized narratively, and methodological quality will be appraised using the Critical Appraisal Skills Programme (CASP) checklist. Results: The database search for this review was performed at the end of November 2025. Initial title/abstract screening started in January 2026 and is currently underway. Data extraction is expected to be completed by April 2026, and the final results are expected to be published by August 2026. Conclusions: In light of the rapid emergence of AI-driven chatbots in mental health care, this systematic review will synthesize current empirical evidence to address the urgent need to understand HCPs’ perspectives on the use of CMHBs. Specifically, it will examine how HCPs perceive CMHBs when used to simulate therapeutic interactions, as adjunctive support to conventional therapy, or as potential substitutes for specific therapeutic functions. By identifying perceived benefits, barriers, and ethical concerns, this review aims to contribute to a more comprehensive understanding of the implementation and broader implications of CMHBs in mental health care.
Background: Poor sleep quality is increasingly recognized as a contributor to cardiovascular health and stroke risk. Individuals with diabetes, hypertension, obesity, and heart disease are particularly vulnerable, yet the specific influence of sleep characteristics on this high-risk group remains insufficiently understood. Most previous studies have focused on either sleep duration or insomnia alone, with limited evidence integrating multiple sleep dimensions in adults at elevated risk of stroke, particularly in low- and middle-income settings. Objective: This study aimed to examine multidimensional sleep characteristics and their associations with demographic, behavioral, and cardiometabolic risk factors among adults at high risk of stroke, as well as to identify discrepancies between subjective sleep perception and objective sleep indicators. Methods: This cross-sectional study examined sleep characteristics of 303 adults with established stroke risk factors. Measures included subjective sleep quality, sleep duration, efficiency, disturbances, use of sleep medication, and daytime dysfunction. Associations with demographic factors, lifestyle behaviors, and comorbidities were analyzed using descriptive statistics and chi-square tests. This study explored multidimensional sleep profiles in relation to cardiometabolic and behavioral risk factors. Results: Among 303 adults at high risk of stroke, 65.7% (n = 199) had poor sleep quality. Objective sleep impairment was common, with over half exhibiting low sleep efficiency (<65%) and 26.4% (n = 80) reporting sleep duration <5 hours. Poor sleep quality was significantly associated with cardiometabolic comorbidities, male sex, smoking, irregular sleep patterns, and family history of cardiovascular disease (all p < 0.001), with effect estimates supported by 95% confidence intervals. 
Conclusions: Sleep disturbances are common among individuals at elevated stroke risk and are shaped by demographic, behavioral, and clinical factors. Although most participants perceived their sleep as adequate, objective indicators revealed marked impairment in sleep duration and efficiency. Poor sleep quality is closely associated with cardiometabolic comorbidity and may contribute to increased cerebrovascular vulnerability. Routine sleep assessment, early identification of sleep disorders, and targeted interventions—such as sleep hygiene education and screening for obstructive sleep apnea—are essential for stroke prevention. Longitudinal studies are warranted to clarify causal pathways and to evaluate the impact of sleep-focused interventions on stroke risk in high-risk populations.
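The chi-square tests named in the Methods above can be computed for a 2×2 contingency table without external libraries. A minimal sketch with illustrative counts (not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g. rows = smoker / non-smoker, columns = poor / good sleep quality.
    Counts below are illustrative, not study data."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# chi2 > 3.84 corresponds to p < .05 at 1 degree of freedom
chi2 = chi_square_2x2(50, 10, 30, 30)
print(round(chi2, 2))  # → 15.0
```

A statistic of 15.0 far exceeds the 3.84 critical value, which is the kind of result underlying the abstract's "p < 0.001" associations.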
Background: Health systems increasingly deploy large language models (LLMs) to draft patient-facing messages, including patient portal replies and follow-up communications. While these tools may improve efficiency, safety failures often arise not from obvious factual errors but from how content is framed—diagnostic language that exceeds clinical scope, false certainty that minimizes legitimate concerns, or fabricated evidence presented as authoritative. These language-level risks remain poorly characterized and are not routinely addressed within clinical governance workflows. Objective: This study aimed to estimate the prevalence and types of language-level safety risks in AI-generated patient-facing messages and to assess the feasibility of a structured, clinician-led governance approach for identifying and acting on these risks prior to message delivery. Methods: We conducted a single-reviewer simulation feasibility study evaluating 200 AI-generated patient-facing messages representative of common patient portal and follow-up communication scenarios. Messages were generated using GPT-4 (OpenAI) and evaluated using the SAFE-AI Message Guard framework, a clinician-informed operational governance model for identifying language-level safety risks across four domains: (1) clinical scope violations involving non-delegable diagnostic determinations, (2) overconfidence or false reassurance through absolutist language, (3) hallucinated specifics including fabricated guidelines, statistics, or citations, and (4) bias, minimization, or ethical concerns. Messages could receive multiple flags across domains. A board-certified psychiatric-mental health nurse practitioner (PMHNP-BC) assigned severity classifications (high: block or mandatory rewrite required; medium: clinician review recommended; low: log for monitoring only) and recommended workflow actions for each flagged message. 
This study used only simulated AI-generated messages; no human subjects or protected health information were involved. Results: Of 200 messages evaluated, 102 (51.0%) received at least one language-level risk flag. At the message level, 80 messages (40.0%) were classified as high severity, requiring blocking or mandatory rewrite before patient delivery. Workflow actions were distributed as follows: 20 messages (10.0%) blocked, 20 (10.0%) required mandatory rewrite, 11 (5.5%) recommended for clinician review, and 149 (74.5%) allowed to proceed. At the flag level, 126 total risk flags were assigned across the 102 flagged messages (mean 1.24 flags per flagged message). By message-level category presence, overconfidence/false reassurance was most frequent (24 messages), followed by scope violations (20), hallucinated specifics (16), and bias/ethical risk (3). By flag-level severity, 80 flags (63.5%) were high severity and 46 (36.5%) were medium severity; no low-severity flags were assigned. Conclusions: Language-level safety risk in AI-generated patient-facing messages is frequent and clinically meaningful, affecting more than half of evaluated messages. A structured, clinician-defined governance framework can feasibly identify scope violations, overconfidence, and hallucinated content, providing an auditable mechanism to reduce the likelihood of unsafe messages reaching patients. Health systems deploying generative AI for patient communication should incorporate language-level safety evaluation into governance workflows. Multi-reviewer validation studies and development of automated detection methods are needed before operational deployment at scale.
Background: The cognitive paradigm in medical education is undergoing a transition from traditional knowledge transmission to learner-centered knowledge construction. In China, this shift is aligned with the Outline of the Plan for the Construction of China into an Education Powerhouse (2024-2035), which mandates high-quality, intrinsic development in nursing curricula. While Constructivist Learning Theory (CLT)–based teaching methods (eg, PBL, CBL, and situational simulation) have been widely explored across Chinese nursing institutions, the evidentiary base remains geographically fragmented and methodologically heterogeneous. A systematic synthesis is required to inform national evidence-based educational reforms. Objective: This protocol describes a systematic review and meta-analysis designed to evaluate the effectiveness of CLT-based teaching methods versus traditional lecture-based models on Chinese nursing students’ theoretical knowledge, practical skills, self-directed learning ability, and critical thinking disposition. Methods: A comprehensive systematic search will be conducted across nine electronic databases: PubMed, Web of Science, the Cochrane Library, Embase, CINAHL, China National Knowledge Infrastructure (CNKI), Wanfang Data, VIP Database (Chinese Scientific and Technological Journal Database), and China Biology Medicine (CBM). The search period spans from database inception to September 27, 2025, with an update scheduled for May 31, 2026. Randomized controlled trials and quasi-experimental studies involving Chinese nursing students will be included. Two independent reviewers will screen titles and abstracts, perform full-text retrieval, and extract data using standardized forms. Risk of bias will be assessed using the Cochrane Risk of Bias tool 2 (RoB 2) for randomized trials and the Joanna Briggs Institute (JBI) critical appraisal tools for quasi-experimental studies. 
Meta-analysis will be performed using Review Manager (RevMan) 5.4 and Stata 18.0, employing random-effects models and subgroup analyses based on educational level (eg, vocational vs. undergraduate) and intervention type. Results: This protocol was finalized in February 2026. A preliminary systematic search was conducted on September 27, 2025, identifying 990 records prior to deduplication. As of February 6, 2026, deduplication has been completed and title/abstract screening is underway. Full-text retrieval is expected to be completed by June 2026, and data extraction and risk-of-bias assessment are expected to be completed by July 2026. The final results manuscript is targeted for submission in September 2026. Conclusions: This review will provide a robust evidentiary foundation for the strategic deployment of constructivist methodologies in Chinese nursing education, specifically addressing the needs of vocational and undergraduate programs in the era of digital transformation. Clinical Trial: PROSPERO CRD420251159499
Background: Multicomponent supervised exercise programs have demonstrated efficacy in improving physical performance and mitigating frailty in older adults, especially when adapted to functional capacity. However, evidence remains limited regarding their effects in community-dwelling frail and pre-frail individuals in Brazil. Objective: This protocol aims to evaluate the effects of a 12-week multicomponent supervised exercise program on frailty status, functional capacity, clinical-functional vulnerability, and fall risk in frail and pre-frail community-dwelling older people. Methods: This protocol describes the methodology of a single-blind, randomized controlled trial in which 60 participants aged 60 years and older will be recruited from a community senior center in Rio Verde, Brazil, and randomly allocated to an intervention group (multicomponent supervised exercise based on the VIVIFRAIL model) or a control group (educational workshops on healthy aging). The primary outcomes will be functional capacity (6-Minute Walk Test) and fall risk (Timed Up and Go Test); covariates will include clinical-functional vulnerability (CFVI-20), cognitive status (MMSE), depressive symptoms (GDS-15), physical activity level (IPAQ), muscle mass (calf circumference), and fear of falling (FES-I). Assessments will be conducted at baseline and post-intervention. Results: This trial will provide evidence regarding the effectiveness of a supervised multicomponent exercise program for improving frailty status, functional outcomes, and fall-related risk in a vulnerable population of older Brazilian adults. Conclusions: If effective, the intervention may offer a scalable, low-cost, and culturally appropriate strategy to promote healthy aging and reduce physical decline among vulnerable subgroups in community settings. Clinical Trial: Brazilian Registry of Clinical Trials RBR-9zvtc5b; https://ensaiosclinicos.gov.br/rg/RBR-9zvtc5b
Background: With the acceleration of global aging and rising demand for orthopedic surgeries, Enhanced Recovery After Surgery (ERAS) protocols have shortened hospital stays but created a "transitional care gap," shifting complex rehabilitation tasks to the home setting. While artificial intelligence (AI) offers potential solutions, patient perceptions regarding its role—ranging from informational chatbots to functional monitoring systems—remain underexplored. Objective: This study aims to map the evolution of care needs from hospital to home recovery, and to identify specific preferences and independent predictors of AI acceptance in orthopedic transitional care. Methods: A multicenter, cross-sectional survey was conducted with orthopedic patients across 33 hospitals in Guangdong, China. A total of 860 questionnaires were initially collected, and 752 valid responses were included in the final analysis after strict quality control (excluding response duration ≤ 180s). Data were collected on demographics, evolving task priorities across the care continuum, and perceived challenges based on an extended Technology Acceptance Model (TAM). The structure of perceived challenges was validated using Exploratory Factor Analysis (EFA). Descriptive mapping and multivariable logistic regression were performed to identify the "evolving preferences" and independent determinants of the willingness to use AI assistants. Results: Overall willingness to use AI was high (604/752, 80.3%). Patient priorities exhibited a fundamental shift from "passive compliance" (e.g., pain management, understanding instructions) during hospitalization to "active safety assurance" (e.g., fall prevention, motion correction) in the home setting. EFA identified 3 distinct challenge dimensions: Home Rehabilitation Self-Management Barriers, Lack of Professional Support, and Symptom Uncertainty. 
In multivariate analysis, significant predictors of AI acceptance included presence of comorbidities (adjusted Odds Ratio [aOR] 1.72, 95% CI 1.09–2.69), older age (aOR 1.02, 95% CI 1.00–1.03), and progression to later rehabilitation stages (aOR 1.28, 95% CI 1.01–1.62). Conclusions: The transition from hospital to home involves a fundamental shift in patient needs from information acquisition to functional safety assurance. AI acceptance in this context is driven by a "Vulnerability Hypothesis," where older and clinically vulnerable patients actively seek digital support to overcome physical execution barriers. However, widespread adoption is currently constrained by a digital divide related to geography and family support. To be clinically effective, future orthopedic AI systems must move beyond generic chatbots to become "Hybrid Coaches"—integrating computer vision and sensor technology to provide real-time motion correction and fall prevention—thereby addressing the specific "Action Gap" that defines the transitional care period. Clinical Trial: This study is not a clinical trial, so trial registration is not required.
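The adjusted odds ratios reported above come from multivariable logistic regression, where each coefficient β is exponentiated to give an odds ratio and its 95% CI is exp(β ± 1.96·SE). A minimal sketch using a hypothetical coefficient and standard error chosen only to roughly resemble the reported comorbidity estimate (not the study's actual regression output):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for presence of comorbidities (illustrative only)
or_, lo, hi = odds_ratio_ci(beta=0.542, se=0.231)
print(f"aOR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # → aOR 1.72, 95% CI 1.09-2.70
```

Note that a CI whose lower bound stays above 1.00 (as for comorbidities here) indicates a statistically significant positive association, whereas the age estimate's lower bound of 1.00 sits at the significance boundary.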
Background: Preeclampsia is a leading cause of maternal and perinatal morbidity and mortality worldwide. ABO blood group phenotypes have been associated with thrombosis, endothelial dysfunction, and inflammation, which are key mechanisms implicated in the pathogenesis of preeclampsia. Observational studies have reported inconsistent associations between maternal ABO blood group and preeclampsia risk. Since the last comprehensive meta-analysis in 2021, several new studies have been published. Therefore, an updated systematic review and meta-analysis is warranted to provide robust and up-to-date evidence on this association. Objective: This study aims to systematically review and quantitatively synthesize the available evidence on the association between maternal ABO blood group and the risk of preeclampsia. Methods: This protocol follows the PRISMA-P 2015 guidelines and has been prospectively registered in the Open Science Framework (OSF) under the identifier 10.17605/OSF.IO/E3KTG (https://osf.io/zm4nf). Observational studies (case-control, cohort, and cross-sectional) reporting maternal ABO blood group and preeclampsia outcomes will be included. A systematic search will be conducted in PubMed/Medline, Embase, Scopus, Web of Science, and the Cochrane Library for studies published from January 2000 to October 31, 2025. Grey literature sources including Google Scholar, ProQuest Dissertations, and conference abstracts will also be searched. Two independent reviewers will perform study selection, data extraction, and risk of bias assessment using the Newcastle-Ottawa Scale. A random-effects meta-analysis will be performed to pool odds ratios for each blood group, and heterogeneity will be assessed using Cochran’s Q and I² statistics. Subgroup and sensitivity analyses will be conducted, and publication bias will be evaluated using funnel plots, Egger’s test, and trim-and-fill method. The certainty of evidence will be assessed using the GRADE approach. 
Results: The literature search, study selection, and data extraction are expected to be completed by December 2025. We anticipate identifying and synthesizing data from recently published observational studies in addition to previously included studies, thereby increasing the overall sample size and statistical power compared with prior meta-analyses. The meta-analysis will provide pooled effect estimates for each ABO blood group in relation to preeclampsia risk and assess the robustness and certainty of the evidence. Conclusions: This updated systematic review and meta-analysis will provide comprehensive and current evidence regarding the association between maternal ABO blood group and preeclampsia risk. The findings may clarify inconsistencies in the literature and determine whether ABO blood group could serve as a potential risk marker for preeclampsia, thereby informing future research and clinical decision-making. Clinical Trial: OSF 10.17605/OSF.IO/E3KTG
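The random-effects pooling with Cochran's Q and I² described in the Methods above can be sketched with the standard DerSimonian-Laird estimator. The log odds ratios and variances below are toy numbers for illustration, not extracted study data:

```python
import math

def random_effects_pool(log_ors, variances):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    Returns (pooled OR, Cochran's Q, I^2 in percent).
    Inputs are illustrative, not study data.
    """
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = [1 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    return math.exp(pooled), q, i2

pooled_or, q, i2 = random_effects_pool(
    log_ors=[0.18, 0.05, 0.30], variances=[0.04, 0.02, 0.09]
)
print(f"pooled OR {pooled_or:.2f}, Q={q:.2f}, I2={i2:.0f}%")
```

With these toy inputs, Q falls below its degrees of freedom, so τ² and I² are truncated to zero and the random-effects estimate coincides with the fixed-effect one; with more heterogeneous studies, τ² > 0 shifts weight toward smaller studies.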
Background: Chimeric antigen receptor (CAR) therapy is a novel cell editing technology and innovative form of cancer immunotherapy. An individual’s immune cells (T-cells) are removed from the body, engineered to target and limit the growth of cancer cells, and reinfused into the patient’s body. The one-time treatment is expensive ($500,000 plus hospital costs), and requires specialized care to treat and manage the associated side effects, such as cytokine release syndrome (CRS), and other serious health issues including cognitive confusion, infertility, secondary malignancies, and compromised long term quality of life. At the same time, CAR T has been highly successful for patients with advanced blood cancers and no remaining treatment options. The CAR T landscape is changing rapidly, and product approvals have outpaced the capacity for researchers to collect long term evidence related to survival or predictive biomarkers that might better prioritize patients. Because CAR T is offered exclusively in urban cancer centres with access to cell manufacturing capacity, equitable access has been challenging. At the same time there is considerable demand and social hype about CAR T as a cancer cure despite the risks and uncertainty of the technology. Objective: We aimed to determine the dominant perspectives and nature of the information on CAR T-cell therapy available to the public in the online environment. Methods: In this qualitative study, we conducted a comprehensive search of websites including professional, medical, corporate, health-based, news media, and blogs to capture the diversity of online sources and their perspectives presenting information on CAR T-cell therapy. Fifty-one webpages met the study criteria and comprised the data set in this review. The content of the sites was reviewed and analyzed using a critical and interpretive descriptive lens. 
Results: We classified the website information into four dominant major themes characterizing CAR T-cell therapy: 1) patient stories of success, magic and hope; 2) medical science explainers; 3) economic perspectives; and 4) ethical discussions and complex arguments. With the exception of the sites that presented ethical discussions and complex information, the online environment positioned CAR T as revolutionary, curative, and the future of cancer treatment. Side effects were generally minimized, and collective dilemmas such as sustainability for the healthcare system, equitable access, and issues of prioritization were frequently sidelined or absent. Conclusions: The persuasive tone of online CAR T information combined with the increasingly blurred distinctions between research and care in genetic medical technologies suggests that obtaining informed consent or refusal may place too much onus on individual patients. In an evolving technological landscape such as CAR T, determining the acceptable risks and benefits is a question that ethically requires broader, as well as more inclusive, societal deliberation.
Background: “Empathy” is widely discussed in health and care settings and is increasingly claimed as an attribute of AI (artificial intelligence) systems (e.g., socially assistive robots, chatbots), but the term is used inconsistently across the literature. In research on AI in these settings, it is often unclear what authors mean by “empathic AI”, what systems do that is intended to be empathic, and how empathy is assessed. This matters because perceived empathy can shape users’ experience of AI-mediated support and their willingness to engage with these systems. Objective: To map how empathy is defined, operationalised, and evaluated in peer-reviewed AI research in health and care settings, and to identify recurring design features associated with higher perceived empathy. Methods: This protocol outlines a scoping review following Joanna Briggs Institute (JBI) guidance and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). We use “AI” as an umbrella term and will extract and classify each system’s type (e.g., rule-based or large language model–based). We will search PubMed (MEDLINE), Embase, PsycINFO, CINAHL, Scopus, IEEE Xplore, and the ACM Digital Library. Two reviewers will screen titles/abstracts (ASReview) and full texts (Rayyan). We will extract study characteristics, empathy definitions/framing, empathy-related system behaviours/design features, and evaluation methods, and synthesise findings thematically. Results: The review will produce (1) a summary of how empathy is defined in AI research in health and care settings, (2) a grouped list of the main empathic behaviours and design features described, and (3) an overview of how empathy is measured across studies. Where studies report empathy ratings, we will summarise which features are most commonly present in higher-rated systems within comparable contexts. 
Conclusions: The review will provide a clearer picture of what researchers mean by “AI empathy” in health and care settings and what system features are most commonly used when trying to build it. These findings may help guide the development of more empathic AI systems.
Background: Emerging adults (EAs), typically ranging from late adolescence into mid- to late-twenties, navigate a transitional period marked by rapid developmental, social, and psychological change. Despite heightened vulnerability to mental health concerns during this stage, service systems are often fragmented, with gaps between adolescent and adult care streams that leave many EAs without developmentally appropriate support. In response, developing approaches such as transdiagnostic stratification, which structures care around shared symptom processes and informs treatment intensity, and digital measurement-based care (dMBC), based on routine patient-reported outcome measures (PROMs), have gained traction but remain challenging to implement consistently. This reinforces the need for Rapid Learning Health System (RLHS) approaches that leverage continuous data and feedback for ongoing improvement, as well as co-design methods that meaningfully integrate EA perspectives into service improvement. Objective: This research protocol outlines a co-design study situated within an RLHS to develop practical strategies and resources to support the sustained implementation of dMBC within EA mental health services. Anticipated outputs include clinician-facing workflow supports, guidance for using client-reported data in clinical decision-making, EA-oriented materials to support engagement with measures, and implementation planning resources to support uptake across the care pathway. Each co-designed output will be developed to function across core stages of care, including intake, treatment decision-making, therapy, and discharge. Methods: A concurrent multi-methods design will be employed, integrating quantitative and qualitative approaches within a dual methodological framework combining User-Centered Design with Participatory Design to structure the co-design process and guide the development of implementation outputs. 
The process will center the perspectives of EAs accessing services and of clinical staff, who will actively collaborate in informing the development and refinement of study outputs. Results: The study is underway; findings will be reported upon completion. Conclusions: This study is expected to demonstrate the value of integrating co-design within an RLHS to advance more responsive, contextually grounded dMBC implementation in EA mental health care, while also contributing insights that can strengthen future co-design efforts with this population.
Background: Despite recent declines in unintended teen pregnancies attributed to family planning services, socioeconomically disadvantaged and highly mobile youth (HMY)—those experiencing frequent residential transitions—remain disproportionately at risk. Traditional teen pregnancy prevention (TPP) programs often fail to effectively engage these youth due to their unstable life circumstances and limited access to conventional prevention resources. Objective: This paper describes the design, usability testing, and key lessons learned from the development of gamification elements involving interactive narratives for "My Future-Self (MFS)," an innovative, hybrid intervention tailored specifically for HMY. This manuscript highlights the experience of interdisciplinary collaboration in the development of gamified elements of behavioral change interventions. Methods: We employed a User-Centered Design (UCD) framework, emphasizing iterative collaboration among adolescent unintended pregnancy prevention intervention scientists, game designers, and HMY advisors (N=96; mean age=19.86, SD=1.41). Initial surveys assessed HMY’s game aesthetics preferences, technology access, intimate relationships, and specific life experiences with medical professionals and use of contraception in order to guide prototype development of gamified interactive content. Two 10-minute gaming activities were developed: one centered on visiting a physician’s office to discuss contraception, and one using scenarios to practice healthy communication with an intimate partner. Iterative usability testing involved structured playtesting sessions with 12 youth with HMY experience, utilizing think-aloud protocols, semi-structured interviews, and thematic feedback analysis. Throughout development, distinct goals representing (a) intervention developers (i.e., contributions to behavior change) and (b) game designers/producers (i.e.,
user engagement) were clarified, aligned, and operationalized to optimize the gaming elements’ contribution to the small-group intervention’s behavior change effectiveness. Results: Playtesting revealed high user appreciation for realistic and immersive scenarios; however, feedback also underscored the necessity for clearer context and increased user agency within the intimate partner gaming element. Iterative refinements resolved usability barriers and significantly enhanced the gaming elements’ acceptability. Key lessons learned included the critical importance of clearly defining and aligning interdisciplinary goals early in the design process, positioning intervention scientists as lead designers, adapting gamified interventions to realistic user-engagement expectations, and proactively integrating cultural relevance through inclusive content. Conclusions: Explicitly addressing interventionists’ and game designers’ distinct goals was crucial to achieving successful interdisciplinary alignment. Employing a collaborative, iterative UCD approach significantly strengthened interdisciplinary understanding of the gaming elements’ purpose, enhancing the design relevance and usability of the MFS gamified intervention for HMY. The identified lessons learned provide valuable insights for future development and production of gamified health interventions through the partnering of intervention developers with game designers and end users of the resultant intervention program.
This case study examined the feasibility of using consumer-grade wearable devices for longitudinal sleep tracking and explored how changes in sleep patterns relate to balance performance. Two college students participated over four months: Participant 1 (P1) used an Apple Watch Series 5 and an OURA ring; Participant 2 (P2) used a Fitbit Charge 5 and an OURA ring. Participants were assigned different wearable devices to assess device-specific feasibility and variability in sleep-tracking accuracy. Sleep data were collected continuously, including time in bed (TIB), total sleep time, sleep efficiency, wake after sleep onset (WASO), sleep stages, and sleep onset timing. Both participants also completed a daily sleep diary and underwent monthly balance assessments using the Bertec® Balance Advantage Sensory Organization Test (SOT). Wearables showed varying accuracy in estimating TIB: the OURA ring overestimated TIB by 15–22 minutes and the Fitbit by 27 minutes, while the Apple Watch slightly underestimated it by 9 minutes. Excellent agreement was observed in sleep duration estimates between the OURA ring and Apple Watch (ICC=0.97) and between the OURA and Fitbit (ICC=0.99), but agreement was lower for WASO, deep sleep, and sleep efficiency. Sleep variability appeared to influence balance outcomes. Fluctuations in sleep timing and duration corresponded to changes in SOT visual subscale scores, suggesting increased postural sway with irregular sleep patterns. Missing data rates were acceptable, ranging from 0% to 25% across devices. For P1, missingness was highest for the OURA (25%) and Fitbit (20.3%), but zero for the sleep diary. For P2, the Apple Watch had a 14.1% missing rate, the OURA 9.4%, and the sleep diary 6.25%. In conclusion, all tested wearables demonstrated feasibility for long-term sleep monitoring, though measurement discrepancies highlight the need to align device choice with research goals.
Variations in sleep consistency may affect postural stability, reinforcing the importance of accurate, continuous sleep tracking in balance research. Due to the small sample size, findings are illustrative and not generalizable.
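The over- and underestimation figures above reduce to a simple mean-bias calculation against the diary reference; the sketch below uses made-up nightly values (the study's raw data are not published) and adopts the convention that a positive bias means the device overestimates time in bed.

```python
def tib_bias(device_min, reference_min):
    """Mean bias in minutes: positive means the device overestimates time in bed."""
    diffs = [d - r for d, r in zip(device_min, reference_min)]
    return sum(diffs) / len(diffs)

# Hypothetical nightly time-in-bed values (minutes), with the diary as reference.
diary = [480, 450, 470]
ring = [500, 468, 492]  # device reads high on every night
print(tib_bias(ring, diary))  # 20.0
```

A per-device bias computed this way is directional, unlike mean absolute error, which is why a device can look "accurate" on average while alternating over- and underestimates.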
Background: Artificial intelligence (AI) is an increasingly prominent feature of contemporary healthcare, with medical AI systems beginning to support diagnostic and therapeutic processes in many clinical domains. Alongside the anticipated benefits of these technologies, their introduction also raises broader questions about how clinical work and professional roles may change. In particular, medical AI systems may affect physician autonomy, a key factor influencing the acceptance and long-term implementation of new medical technologies. Objective: The aim of this study was to develop and pretest a semi-structured interview guide concerning the potential effects of medical AI systems on physician autonomy. Methods: The interview guide was theoretically grounded in a seven-component model of physician autonomy proposed by Schulz and Harrison. Semi-structured qualitative interviews were conducted with a sample of seven hospital physicians. Interview recordings were transcribed and analyzed using a hybrid inductive–deductive thematic approach: themes were first identified inductively from participant responses and subsequently mapped onto the seven-component model of physician autonomy proposed by Schulz and Harrison. Data were analyzed to assess both the potential effects of medical AI systems on physician autonomy and the methodological adequacy of the interview guide. Results: Most participants did not express strong concerns about losing clinical autonomy through the introduction of AI systems. However, several autonomy-related risks were identified, including potential deskilling, automation bias, limited system explainability, and increasing economic or cost-related pressures. Participants emphasized that AI should serve as a supportive tool rather than a substitute for physician judgment. All physicians agreed that AI systems should not replace clinicians as primary clinical decision-makers. 
Conclusions: Medical AI was largely viewed as compatible with physician autonomy, yet participants highlighted important risks that warrant attention in future research and system design. Our findings suggest that autonomy-related concerns extend beyond direct loss of decision-making authority and include broader professional, cognitive, and organizational dimensions. However, our inductively identified themes and subthemes did not fully reflect all components of physician autonomy, indicating the need for further refinement of how to assess physician autonomy in qualitative research.
Formative evaluation is widely used in implementation science to anticipate barriers and facilitators prior to the deployment of health technologies, typically relying on stakeholders’ reported beliefs collected before real-world exposure. This approach has proved informative for many digital health tools, but its application to immersive and embodied technologies such as extended reality (XR) warrants closer scrutiny. XR interventions delivered through head-mounted displays depend on spatial perception and sensorimotor engagement, meaning that implementation-relevant properties, including comfort, perceived intrusiveness, safety, and workflow disruption, often become apparent only through direct interaction. At the same time, large segments of the healthcare workforce remain XR-naïve, such that pre-use judgements are frequently shaped by anticipation rather than experience. Drawing on literature from implementation science, grounded cognition, and human–computer interaction, this viewpoint argues that perception-based formative evaluation, when applied through frameworks developed for screen-based technologies, is vulnerable to misclassifying barriers and facilitators in XR adoption. Rather than questioning formative evaluation as a methodological approach, we identify a boundary condition for its interpretability in experience-dependent technologies and propose a pragmatic refinement: incorporating brief experiential familiarisation before eliciting stakeholder perceptions to strengthen early-stage assessment and improve alignment with real-world implementation decisions.
Background: Large language models (LLMs) are increasingly used to extract information from electronic health records (EHRs). Given the rapid pace of LLM development, robust scenario-specific benchmarks are essential to evaluate clinical usefulness and support safe deployment. Objective: To compare contemporary LLMs on structured data extraction from real neurosurgical EHRs written in the Czech language. Methods: In a prospective single-center cohort, 172 hospitalized patients provided informed consent for use of anonymized EHRs. For each patient, predefined records were collected and concatenated. Ground truth for 35 data points was established by dual extraction with consensus. A standardized prompt requesting JSON output was submitted to 19 LLMs. The primary outcome was overall accuracy; secondary outcomes were category-level accuracy and the proportion of complete machine-readable outputs. Results: A total of 6,264 documents were collected (median 33 per patient). Ground truth was established with 92.6% initial inter-rater agreement before consensus seeking. Several models produced complete JSON outputs for 100% of cases (Claude 4.1 Opus, Grok 4, Gemini 2.5 Flash); GPT-4.1 (DeepSearch) and GPT-5 completed 99.4%. The highest accuracy was achieved by GPT-4.1 (87.6%), followed by GPT-4.5 (85.6%), Claude 4.1 (84.8%), and Grok 4 (84.2%). Accuracy varied by data type: binary (up to 95%), numeric (~89%), multiple-choice (~75%), and short text (~78%). Conclusions: Currently available LLMs can reliably extract structured clinical information from full, non-English EHRs, while older or smaller models show major limitations. A hybrid workflow—automated extraction with targeted validation—appears practical for research use.
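The scoring step of such a benchmark can be illustrated with a minimal sketch: parse each model's JSON output and compute exact-match accuracy overall and per data-point category. The field names and three-item schema below are hypothetical stand-ins; the study's actual 35-item data dictionary is not enumerated in the abstract.

```python
import json
from collections import defaultdict

# Hypothetical schema mapping each field to its data-point category.
SCHEMA = {"hydrocephalus": "binary", "age": "numeric", "admission_reason": "short_text"}

def score(model_json: str, truth: dict):
    """Return overall and per-category exact-match accuracy for one model output."""
    extracted = json.loads(model_json)
    hits, per_cat = 0, defaultdict(lambda: [0, 0])
    for field, category in SCHEMA.items():
        correct = extracted.get(field) == truth[field]
        hits += correct
        per_cat[category][0] += correct
        per_cat[category][1] += 1
    return hits / len(SCHEMA), {c: ok / n for c, (ok, n) in per_cat.items()}

truth = {"hydrocephalus": True, "age": 61, "admission_reason": "glioma resection"}
model_out = '{"hydrocephalus": true, "age": 61, "admission_reason": "tumor resection"}'
overall, by_category = score(model_out, truth)
print(overall, by_category)  # 2 of 3 fields match exactly
```

Exact match is stricter than clinically necessary for free-text fields ("tumor resection" vs "glioma resection"), which is one reason short-text accuracy tends to lag the binary and numeric categories.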
Background: Despite the effectiveness of bariatric surgery in the treatment of severe obesity, a substantial proportion of patients experience insufficient weight loss or weight regain over time. Evidence indicates that behavioral factors and mental health conditions play a central role in these outcomes, representing strategic targets for educational and technology-based self-monitoring interventions. Objective: This study aimed to develop and validate the content of a mobile application designed to support patients in mental health self-monitoring and to encourage behavioral changes, with the goal of improving surgical outcomes, preventing weight regain, and promoting long-term psychological well-being. Methods: This was a formative research study focused on the development and content validation of an educational digital health intervention, conducted according to the Systematic Instructional Design model, encompassing the analysis, design/development, and validation phases. Content validation was performed by an expert committee based on Pasquali’s criteria. Interrater agreement was quantitatively assessed using the Content Validity Index (CVI), considering the domains of clarity and relevance. Results: The application was developed with 11 screens and integrates validated psychometric instruments for self-monitoring of major mental health conditions, including the Patient Health Questionnaire-9 (PHQ-9), Generalized Anxiety Disorder-7 (GAD-7), Modified Yale Food Addiction Scale 2.0 (mYFAS 2.0), and Alcohol Use Disorders Identification Test (AUDIT). In addition, the platform includes body weight monitoring, physical activity tracking, and access to educational content on healthy eating and mental health. The app was designed based on scientific evidence and incorporates motivational strategies such as goal setting, automated alerts, and encouragement of multidisciplinary follow-up. All screens achieved full agreement regarding relevance. 
One screen did not reach the minimum clarity threshold in the first evaluation round and was subsequently revised. In the second round, it achieved 92.3% clarity and 100% relevance. Conclusions: The findings indicate that the developed application demonstrates adequate content validity and represents a promising digital tool to support postoperative care for patients undergoing bariatric surgery by enabling self-monitoring of key mental health conditions and promoting behavioral strategies aimed at preventing weight regain. Clinical Trial: This study did not constitute a clinical trial.
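The reported agreement figures are consistent with an item-level Content Validity Index, i.e., the proportion of experts rating an item 3 or 4 on a 4-point scale; a minimal sketch, assuming (hypothetically) a 13-member panel in which 12 experts rate the revised screen as clear:

```python
def item_cvi(ratings, agree=(3, 4)):
    """Item-level CVI: share of experts rating the item 3 or 4 on a 4-point scale."""
    return sum(r in agree for r in ratings) / len(ratings)

# Hypothetical clarity ratings from a 13-expert panel: 12 agree, 1 does not.
clarity = [4] * 9 + [3] * 3 + [2]
print(round(item_cvi(clarity), 3))  # 0.923
```

A common acceptance threshold is an item-level CVI of at least 0.78 for panels of this size, which the 92.3% clarity figure clears comfortably.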
Background: Many countries face challenges in youth mental health, including stigma around help-seeking, limited accessibility to services, and undersupply of trained professionals. Online peer support platforms show promise in addressing these barriers. A government-supported platform called let’s talk launched in Singapore in 2022 to support youth aged 17-35. The anonymous and moderated forum allows youth to discuss mental health topics and life challenges with peers, peer supporters, and professionals. Objective: The objectives are to (1) describe the design framework for let’s talk, including its Theory of Change; (2) conduct a process evaluation on data collected over the first three years of operation; and (3) summarize and discuss our learnings. The findings provide a replicable framework that may guide the design of similar platforms and inform impact evaluation studies. Methods: Key features of let’s talk include co-development with youths and mental health professionals, anonymity, trust and safety through moderation and government endorsement, and five dedicated pathways for key user journeys. Most notably, the Ask-A-Therapist pathway provides access to professional support on the forum and the Peer Supporter pathway trains and empowers youths to provide meaningful support to their peers. Process evaluation data were collected from 1 July 2022 to 30 June 2025. We analyzed platform-wide and feature-specific reach, engagement, and growth metrics by year. We documented learnings from implementation described by the platform development team. Results: In its first three years, let’s talk received an estimated 51,636 non-bounced (meaningful activity) visitors, representing 5.2% of the platform’s target population of 17–35-year-olds in Singapore. In total, 17,158 users (33.2% of non-bounced visitors) created an account and 3,489 (20.3%) of those users posted at least once. 
The most popular feature of the platform was Ask-A-Therapist, which saw 1,548 original (thread-starting) questions posted from 1,037 unique users and a total of 6,865 posts (61.9% of all posting activity). The 156 Peer Supporters were the most active users, representing 0.9% of all registered users yet contributing 2,175 posts (19.6% of all posting activity). The let’s talk features aiming to bridge the online-offline divide and to encourage self-care training were seldom used. Engagement patterns revealed that professionally-moderated peer support and direct access to professionals were the primary drivers of sustained use, while features promoting self-directed activities had limited uptake. Conclusions: let’s talk achieved meaningful reach (5.2% of the target population) and engagement through two key design principles: (1) low-barrier access to professional support via Ask-A-Therapist, and (2) training and empowering peer supporters as highly engaged community leaders. Our findings suggest that the core value proposition of platforms like let’s talk is human connection and expert guidance. Our framework and implementation learnings provide practical guidance for adapting this model to diverse cultural contexts.
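The engagement percentages above follow directly from the reported counts as a conversion funnel, visitors to registered users to posters:

```python
# Counts reported in the abstract; each rate is computed against the prior funnel stage.
visitors, registered, posters = 51636, 17158, 3489

reg_rate = registered / visitors   # share of non-bounced visitors who created an account
post_rate = posters / registered   # share of registered users who posted at least once

print(f"{reg_rate:.1%}, {post_rate:.1%}")  # 33.2%, 20.3%
```

Reporting each rate against the previous stage (rather than against total visitors) is what makes the 20.3% figure read as "of those users," as the abstract states.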
Background: Mobile health (mHealth) applications for menstrual cycle and fertility tracking are widely used to support self-monitoring, reproductive planning, and health awareness among women. While these tools promise personalized predictions and convenient access to reproductive health information, concerns persist regarding their clinical accuracy, adaptability to irregular cycles, transparency of algorithms, and real-world user experience. Objective: This structured review aimed to evaluate the features, physiological integration, predictive performance, validation practices, and user-reported outcomes of mobile applications designed for menstrual and fertility tracking, and to contextualize current evidence using COSMIN and ISPOR evaluation frameworks. Methods: A structured narrative review with systematic elements was conducted following the PRISMA-like reporting framework. Literature published between January 2013 and October 2025 was identified through searches of PubMed, EMBASE, Scopus, and Web of Science, supplemented by semantic and citation-based searches in the Semantic Scholar, OpenAlex, and Google Scholar databases. AI-assisted relevance ranking supported the initial screening, followed by an independent human review. Forty studies meeting the predefined eligibility criteria were included in the qualitative synthesis. Owing to the heterogeneity in study designs, outcomes, and validation methods, a quantitative meta-analysis was not performed. Results: Of the 40 included studies, most were observational and relied on self-reported data from predominantly high-income, technology-literate populations. Twenty-four applications incorporated physiological inputs, such as basal body temperature, luteinizing hormone measurements, or wearable-derived metrics, whereas others relied primarily on calendar-based predictions.
Multiparameter and sensor-augmented approaches generally demonstrated higher agreement with biological or clinical reference standards than calendar-only methods, with reported fertile window prediction accuracies ranging from approximately 85% to 90% under optimal conditions. However, only a small subset of applications has reported formal clinical validation or regulatory clearance. User satisfaction was strongly associated with perceived accuracy, personalization, and usability, whereas inaccurate predictions, particularly among users with irregular cycles, were linked to frustration, anxiety, and high attrition. Conclusions: Menstrual and fertility tracking applications that integrate physiological signals outperform calendar-based approaches in terms of predictive performance; however, robust clinical validation, transparency, and inclusivity remain limited. Reported accuracy metrics should be interpreted cautiously because real-world adherence, irregular cycle patterns, and algorithmic bias substantially affect reliability. These tools are best positioned as decision-support and self-awareness technologies, rather than as autonomous diagnostic instruments. Future evaluations should apply standardized frameworks, such as COSMIN and ISPOR, explicitly communicate uncertainty, and prioritize diverse and irregular cycle populations to ensure equitable and clinically meaningful digital reproductive health solutions.
Young people are among the fastest adopters of digital and AI-enabled mental health tools, yet they remain marginal to the research and design processes that shape these technologies. This Viewpoint examines a persistent participation gap in digital youth mental health research (DYMH): while co-production and patient and public involvement (PPI) are widely invoked as best practice, youth involvement is frequently superficial, inconsistent, or confined to late-stage consultation. As a result, digital mental health innovations risk misalignment with young people’s lived realities, priorities, and vulnerabilities.
We identify three interrelated drivers of this gap. First, conceptual and linguistic fragmentation obscures what “participation” entails in practice, with terms such as co-design, co-production, user-centred design, and PPI used interchangeably despite reflecting different assumptions about power, influence, and decision-making. Second, participation is often uneven across the research lifecycle, with young people involved in ideation or usability testing but excluded from problem formulation, theory selection, implementation, and evaluation. Third, institutional barriers (including ethics review processes, consent requirements, funding constraints, and adult-centric research norms) systematically limit meaningful youth partnership.
We argue that closing the participation gap is both an ethical imperative and a practical necessity. As digital and generative AI tools increasingly shape how young people understand and manage mental health, youth must be recognised as legitimate co-producers of knowledge rather than passive end users. We call for clearer reporting of participatory models, greater attention to youth influence across the research lifecycle, and structural support to normalise meaningful youth involvement. Without such shifts, DYMH innovation risks being scalable but not safe, credible, or trustworthy.
Background: Substance use disorders account for a significant portion of the disease burden attributed to mental health globally, but measurement remains suboptimal. Studies assessing substance use typically rely on retrospective recall, often over long periods of time. However, the episodic, contextual, and event- or time-contingent nature of substance use calls into question the validity of these traditional retrospective measurement methods. One method to overcome these limitations is ecological momentary assessment (EMA). EMA methods repeatedly sample participant behaviours and experiences in real time, in the context in which they occur. Objective: This review aimed to systematically identify studies using EMA in substance use measurement, provide a comprehensive overview of the EMA methods used, and provide a draft framework for reporting and methodological recommendations for future EMA studies in this field. Methods: Studies published between 2018 and 2023 were sourced from PubMed, Medline, Scopus, and PsycINFO via Ovid databases on 31st January 2023, using terms related to EMA, digital phenotyping, passive sensing, daily diaries, and specific terms for each drug type. Studies that actively or passively assessed thoughts and/or behaviour, in the participants’ natural environment/daily lives, in a repeated manner, at or close to the behaviour of interest (substance use), using either automatic prompts or notifications, were included. Studies of any population, age, setting, or design, including RCTs and experimental designs, were eligible. This study was preregistered on PROSPERO (CRD42023400418). Results: The search identified 7053 articles, of which 858 were reviewed in full; 273 (n = 70,831 participants) were included and extracted. Most studies were conducted in the United States (80%) and focused on alcohol (78%) and cannabis use (30%), with or without the presence of other substance use.
Alcohol and cannabis measurement co-occurred most often, in 44 (16%) studies. Psychedelics (2%) were particularly understudied using EMA methods. PCP, bath salts, and inhalants were each measured in only one study. We found limited reporting consistency with respect to compliance, completion windows, attrition rates, survey duration, and data collection technologies in EMA substance use studies. Sensing data were measured in a limited number of studies. Conclusions: While EMA is a powerful tool for capturing dynamic behaviours, inconsistencies in reporting and design transparency persist. Improving reporting practices, integrating smart sensing and wearables, and monitoring compliance, alongside expanding EMA to underexplored substances such as psychedelics, will be critical to enhancing data quality and advancing the field.
Background: African American women are among the least physically active demographic groups in the United States and face disproportionate burdens of chronic disease that are preventable through regular physical activity. Researchers are increasingly using mixed methods to better understand the behaviors, beliefs, and contextual factors that shape physical activity in this population. Objective: To identify, examine, and describe the key characteristics of mixed methods study designs used in research on the physical activity practices of African American women published within the past ten years, compare methodological approaches, identify gaps, and offer recommendations for future inquiry. Methods: Following the Joanna Briggs Institute (JBI) methodology for scoping reviews, we will implement a three-step search strategy across seven databases (Academic Search Ultimate, Agricultural & Environmental Science Database, APA PsycInfo, CINAHL Ultimate, PubMed, SocINDEX, and SPORTDiscus). Eligible studies are peer-reviewed, single mixed-methods investigations conducted in the United States that include adults (≥18 years) who identify as non-Hispanic African American/Black women, or samples with ≥50% African American women, with results reported by social classification. Two reviewers will independently screen and extract data with adjudication by a third reviewer as needed. We will chart designs (e.g., convergent, explanatory sequential, exploratory sequential), quantitative and qualitative methods, integration approaches (e.g., merging, connecting, embedding), and evidence of mixing (e.g., transformation, comparison, synthesis). Results will be summarized narratively, tabulated, and visualized in a frequency flow diagram. The process will be documented using a PRISMA 2020 flow diagram. Results: As this is a protocol, no results are reported. The initial search was piloted on February 1, 2026. 
We anticipate completing study selection, data charting, and synthesis by May 2026, with the completed review submitted in July 2026. Conclusions: Mapping the application of mixed methods in studies of African American women’s physical activity will reveal methodological patterns and gaps, guiding stronger, equity-centered research designs and reporting. Clinical Trial: OSF Registration: https://doi.org/10.17605/OSF.IO/NA9ME
Background: Cardiac myxomas (CMs) are the most common benign primary cardiac tumours, most frequently originating from the left atrium, and less commonly from the right atrium. Despite being histologically benign, CMs can cause serious thromboembolic complications including stroke, acute coronary syndrome, limb ischemia, and visceral infarction. While previous studies have explored risk factors for thromboembolism, literature comprehensively synthesising the anatomical distribution, clinical patterns, and management of CMs remains limited. Objective: We intend to summarise the published evidence on the frequency, anatomical distribution, clinical presentations, and management implications of thromboembolic events associated with CMs. Methods: A systematic review will be conducted in accordance with PRISMA-P guidelines and registered on PROSPERO. Medline, Embase, and PubMed will be searched for studies reporting thromboembolic complications in patients with histologically or radiologically confirmed CMs. Eligible study designs include case reports, case series, cohort studies, and registries. Two reviewers will independently screen studies and extract data on patient demographics, tumour characteristics, embolic events (type, site, clinical presentation), diagnostics, management, and outcomes. Discrepancies will be resolved through discussion or third-party adjudication. Risk of bias will be assessed using Joanna Briggs Institute tools. Results: The review will summarise reported frequencies and anatomical distribution of embolic events, clinical presentations, associations with tumour characteristics, and management strategies. Case reports will be tabulated individually, while cohort and series data will be aggregated descriptively with quantitative summaries presented where feasible. 
Conclusions: This review aims to provide a comprehensive synthesis of thromboembolic complications associated with CMs, highlighting patterns, management strategies, and gaps in the current literature. Findings aim to improve clinical recognition, inform clinical management, and guide future research. Clinical Trial: This study is a systematic review and not a clinical trial. The review protocol was prospectively registered with PROSPERO (CRD420261299634).
Synthetic data (SD) has emerged as a promising tool for advancing cardiology research by enabling data access, enhancing patient privacy, and supporting the development of machine learning models. By generating artificial patient records that reflect real-world distributions, SD can accelerate clinical research, improve model performance for rare cardiovascular conditions, and facilitate transnational collaborations that would otherwise be restricted by data sharing barriers. Despite these advantages, the increasing use of SD raises important ethical, regulatory, and methodological concerns that remain insufficiently addressed. Key challenges include assessing the validity and generalizability of synthetic datasets, understanding their limitations in representing complex and heterogeneous patient populations, and preventing the amplification of existing biases in cardiovascular care. Regulatory frameworks such as GDPR and HIPAA safeguard privacy but do not fully account for emerging risks such as re-identification or data leakage, leaving uncertainty regarding the use of SD in evidence generation for medical devices or therapeutic evaluation. Technical constraints, including the reliability of generative models and the difficulty of capturing nuanced clinical trajectories, further limit the clinical applicability of SD. As cardiology increasingly intersects with artificial intelligence and digital health technologies, ensuring rigorous methodological standards, transparent validation, and clear governance mechanisms is essential to harness SD responsibly. This Viewpoint highlights the opportunities and blind spots associated with SD and virtual patients in cardiology and underscores the need for harmonized regulatory guidance and ethical safeguards to support their meaningful integration into research and clinical practice.
Background: An increasing amount of traditional Chinese medicine (TCM) clinical data can be collected by software and equipment, forming diversified TCM data. TCM diagnosis and treatment data are typically collected concurrently with clinical work. However, given the limited time, space, and human resources available in clinical settings, collecting diversified TCM data is difficult, which may affect the quality of the collected data. Objective: To develop recommendations for optimizing diversified TCM data collection. Methods: A working group comprising 12 members was established. Based on previous survey findings regarding the burden of clinical data collection, the group developed a preliminary list of recommendations for optimizing diversified TCM data collection. A Delphi survey was conducted to investigate consensus levels (using a 5-point Likert scale for importance evaluation) on the list items, and open-ended opinions were also collected. If experts in the first round proposed additions, deletions, or modifications, or if consensus was lacking on certain items, a subsequent round of surveys was conducted to obtain the experts' agreement rate on the related items. Results: A total of 86 experts from China, the United Kingdom, and Singapore completed two rounds of surveys. Following the first Delphi survey, all items achieved agreement scores above 4, with coefficients of variation (CV) below 0.2. The working group revised 12 items based on open-ended opinions and resubmitted them for agreement assessment. All revised items achieved agreement rates of over 95%. Following the two-round survey process, the final version of the recommendations comprises 5 primary domains, 11 subdomains, and 25 items. Conclusions: This study formulated recommendations for optimizing diversified TCM data collection.
It is hoped that these recommendations will help clinical data collectors plan data collection in advance, during the design phase.
Background: Acute respiratory infections caused by influenza, respiratory syncytial virus (RSV), and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) remain a major public health challenge in Europe. Although surveillance systems for these pathogens are well established, the past two decades have seen a rapid diversification of data streams supporting surveillance and research. This expanding and increasingly complex data landscape, combined with fragmentation across institutions, sectors, and countries, may limit timely evidence synthesis and effective public health decision-making. Objective: This scoping review aimed to identify and characterize data sources used for surveillance and research on influenza, RSV, and SARS-CoV-2 across 12 European countries over the past 20 years, and to examine their evolution over time, their alignment with research objectives, and geographic variation in data availability and use. Methods: We conducted a scoping review using an objective-driven analytical framework. Empirical reports published between January 2005 and September 2025 were identified in Medline, Web of Science, and Embase. Eligible reports focused on influenza, RSV, or SARS-CoV-2 and included data from Western (France, Belgium, Germany, Netherlands), Northern (Denmark, England, Finland, Sweden), Southern (Italy, Spain), and Eastern Europe (Poland, Romania). Clinical and interventional studies were excluded. Reports were classified according to four research objectives: epidemiological monitoring; evaluation of interventions; assessment of disease burden and health outcomes; and analyses of population adherence and trust toward public health measures. Data sources were grouped into nine categories, including surveillance systems, electronic health records (EHRs), registries, claims, surveys, digital, environmental, and integrated datasets. Results: A total of 2,564 empirical reports were included. 
Over time, respiratory virus research relied on an increasingly diverse set of data streams. While surveillance systems remained central, particularly for epidemiological monitoring, their relative dominance declined. From 2020 onward, there was a marked expansion in the use of EHRs, registries, claims data, digital sources, and linked or integrated datasets, alongside increased use of open-access data. Data source use varied by research objective: surveillance data predominated in monitoring and intervention evaluation; EHRs in studies of risk factors and treatment effectiveness; surveys in seroprevalence and public trust analyses; and claims data in assessments of economic burden. Substantial geographic disparities were observed. Northern European countries more frequently used linked and multi-source datasets, whereas Western and Southern Europe relied more often on open-access or single-source data. Conclusions: Respiratory virus surveillance and research in Europe have expanded and diversified substantially over the past two decades, particularly after the Coronavirus disease 2019 (COVID-19) pandemic. However, access to advanced and integrated data streams remains uneven across countries. Strengthening preparedness for future respiratory virus threats will require sustained investment in interoperable data infrastructures, improved data governance, and the responsible use of artificial intelligence to integrate heterogeneous data sources.
Background: Task-oriented rehabilitation supported by exoskeletons has the potential to increase therapy intensity, personalization, and accessibility. However, to achieve fully automatic treatment, robotized systems need to analyze therapy in a more complex way than simply following reference trajectories. Objective: This study investigates the effects of an intelligent, context-aware control algorithm for an upper-limb rehabilitation exoskeleton on patients’ musculoskeletal engagement, compared with constant-admittance robot-assisted therapy and conventional physiotherapist-guided treatment. Methods: A single-session experimental study was conducted with 34 adult participants performing six activities of daily living under three therapy modes: robot-assisted therapy with constant admittance, robot-assisted therapy with an intelligent assist-as-needed algorithm, and physiotherapist-guided therapy. Muscle activity was assessed using surface electromyography of eight upper-limb muscle groups, while joint kinematics were recorded using inertial measurement units. Metrics included EMG power, muscle activation time, joint range of motion, and burst duration similarity indices. Statistical comparisons were performed using the t test or the Mann-Whitney U test, depending on data normality. Results: Results indicate that the intelligent control strategy engages the musculoskeletal system at least as effectively as constant-admittance control across all exercises. At the same time, more motion control is given to the patient, which is preferable for neuroplasticity training. Compared with physiotherapist-guided therapy, robot-assisted treatment with intelligent control elicited significantly higher and more consistent muscular engagement. Intelligent assistance also modified joint-level motion patterns by reducing compensatory movements, particularly in shoulder–elbow coupling, while maintaining functional task execution. 
Muscle activation timing patterns during intelligent robot-assisted therapy were more consistent with robotic control than with manual therapy, reflecting altered movement strategies. Conclusions: These findings demonstrate that context-aware, intelligent control in rehabilitation exoskeletons can promote active patient participation, reduce compensatory behaviors, and maintain physiologically meaningful muscle engagement. The proposed approach exceeds the results of recent similar studies, being a promising step toward effective, minimally supervised, task-oriented rehabilitation. Clinical Trial: The experiments were carried out under the KB/132/2024 approval of the Bioethical Committee of the Medical University of Warsaw (https://komisja-bioetyczna.wum.edu.pl/). Written informed consent was obtained from all of the subjects involved in this study.
Background: Long-term body weight is regulated by the balance between energy intake and energy expenditure (EE). Although weight stability requires energy balance, achieving and maintaining such balance in everyday life is challenging. Weight loss occurs when EE consistently exceeds energy intake, whereas a sustained positive energy balance promotes weight gain, which may lead to obesity. Whole-room indirect calorimetry enables precise 24-h assessment of total EE and its components. Achieving energy balance within a whole-room indirect calorimeter (WRIC) represents a substantial challenge and depends critically on stringent clinical standardization as well as robust technical performance to ensure accurate estimation of energy requirements. Objective: To achieve energy balance within a WRIC and to characterize the technical performance of two newly installed WRIC systems. Methods: Healthy subjects aged 18 to 65 years with a body mass index of 18.5 to <40 kg/m² are eligible to participate in the study. Resting EE is measured over 30 minutes and combined with the Mifflin–St. Jeor equation to calculate a personalized weight-maintaining diet (WMTD). Participants consume this WMTD for 3 days in free-living conditions before each 24-hour stay in the WRIC. Before and during WRIC stays, participants are instructed to maintain a low physical activity level (PAL≈1.4; PAL defined as 24-h EE/resting EE). Standardized meals (breakfast 8 AM, lunch 1 PM, dinner 6 PM) are provided inside the WRIC. For the first two WRIC stays, biological validation of the system is performed by repeating EE measurements under identical conditions; that is, during these stays, the caloric content of the diet matches the pre-calculated WMTD adjusted for reduced physical activity within the WRIC. For a third WRIC stay, following another 3-day WMTD run-in, the caloric content of the diet is matched to each participant’s average 24-h EE from the two preceding stays and energy balance is calculated. 
Hereupon, two additional WRIC stays are conducted after another 3-day WMTD run-in and participants are instructed to achieve a higher physical activity level (PAL≈1.7) using cycle ergometry. During the first stay within the WRIC with PAL≈1.7, caloric content of the diet equals the WMTD adjusted for PAL≈1.7. For the following 24-h EE assessment with PAL≈1.7, diet is adjusted such that its caloric content equals the previously measured 24-h EE under increased physical activity and energy balance is reassessed. The day after the last 24-h EE assessments with PAL≈1.4 and PAL≈1.7, respectively, ad libitum energy intake is measured using a buffet to relate individual EE with energy intake. Body weight is monitored throughout the study. Results: The trial commenced in August 2025. At the time of manuscript submission, six participants have been enrolled. Based on prior data, a total of 34 participants is required to evaluate the improvement in mean energy balance by 100 kcal with a power >0.80, assuming a standard deviation of 200 kcal. The final analyses will include energy balance, changes in body weight, components of EE, ad libitum energy intake, and circulating hormones involved in appetite regulation and satiety. Conclusions: This trial evaluates whether energy balance can be achieved during repeated stays in a WRIC and provides a detailed assessment of the performance of two newly installed WRIC systems.
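The weight-maintaining diet above combines measured resting EE with the Mifflin–St. Jeor equation. As a minimal sketch of that calculation (function names are illustrative, and the trial's exact PAL adjustments inside the WRIC may differ from this simple scaling):

```python
def mifflin_st_jeor_ree(weight_kg: float, height_cm: float, age_yr: float, sex: str) -> float:
    """Resting energy expenditure (kcal/day) via the Mifflin-St. Jeor equation."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + (5.0 if sex == "male" else -161.0)

def wmtd_kcal(ree_kcal: float, pal: float = 1.4) -> float:
    """Weight-maintaining energy target: resting EE scaled by physical activity level."""
    return ree_kcal * pal

# Example: a 70 kg, 175 cm, 30-year-old man at PAL 1.4
ree = mifflin_st_jeor_ree(70, 175, 30, "male")  # 1648.75 kcal/day
target = wmtd_kcal(ree, 1.4)                    # 2308.25 kcal/day
```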
Background: Artificial Intelligence (AI) is rapidly transforming healthcare by reshaping clinical decision-making, service organization, and professional competencies. In physiotherapy, AI offers opportunities to enhance efficiency, personalization, and interdisciplinary collaboration, while also posing ethical, educational, and governance challenges. Objective: This study aimed to examine physiotherapists’ perceptions of AI implementation across professional domains, identifying strengths, weaknesses, opportunities, and threats (SWOT), and to assess the influence of prior AI experience and knowledge levels on these perceptions. Methods: An observational, cross-sectional survey was conducted using a 26-item online questionnaire structured within a SWOT framework. The survey included demographic data, 20 Likert-scale items, and two open-ended questions. Composite indices for Opportunities and Concerns were calculated, internal consistency was assessed with Cronbach’s α, and non-parametric tests with false discovery rate adjustment were applied. Qualitative responses were thematically analyzed. Results: Fifty physiotherapists participated, most reporting basic or no AI knowledge, while 52% had prior AI experience. The Opportunities Index showed excellent internal consistency (α = 0.93) and the Concerns Index acceptable consistency (α = 0.77). Overall, Concerns outweighed Opportunities (3.68 vs. 3.31). Main concerns included reduced human contact, insufficient training, and data privacy, while key opportunities involved administrative automation, training in emerging technologies, and interdisciplinary collaboration. Prior AI use was associated with greater concern about data privacy. Conclusions: Physiotherapists view AI as a promising yet challenging innovation. 
Strengthening digital literacy, ethical oversight, and participatory governance is essential to ensure AI adoption aligns with human-centered physiotherapy care.
Background: Cardiovascular disease remains the leading global cause of mortality, driven by interrelated behavioral, biological, and psychosocial risk factors despite the availability of effective prevention and treatment strategies. Persistent policy inertia, systemic fragmentation, and adverse social and commercial determinants have limited national responses. Addressing these gaps necessitates place-based, systems-oriented approaches that mobilize local assets, engage multi-sector stakeholders, and incorporate adaptive evaluation. The Springfield Healthy Hearts initiative exemplifies such an approach by positioning Greater Springfield as a “living laboratory” for coordinated cardiovascular health action through a comprehensive data framework, providing a replicable model for other communities. Objective: This protocol outlines the Springfield Healthy Hearts Data Framework; a multi-component system for dynamically guiding, implementing and evaluating coordinated action for heart health. Methods: The Data Framework was developed through a structured co-design process involving community members, expert researchers, health professionals, and representatives from local implementation partners. The framework comprises four integrated components: (1) Project Evaluations, applying pragmatic frameworks to assess coordinated action projects; (2) Community Evaluation, a repeated cross-sectional evaluation of Springfield residents, workers and regular visitors to capture individual-level behavioural, biological and psychosocial CVD risk factors, as well as engagement with coordinated action projects; (3) City Evaluation, ongoing monitoring of suburb- and city-level indicators across four domains: sociodemographic characteristics, built environment, food and commercial environment and health services; and (4) Data Synthesis, utilising data across all levels to inform a continuous learning system.
Project evaluations will use both quantitative and qualitative methods, including realist evaluation where appropriate. Community evaluation will be analysed using descriptive statistics, mixed-effects models and subgroup analyses, with missing data addressed via multiple imputation. City-level data will be analysed descriptively and dynamically to detect temporal trends and contextual changes. Results: As of February 2026, we have held two Data Framework co-design workshops with 15 community members. Their input, priorities and needs have informed our framework’s components. Conclusions: The Springfield Healthy Hearts Data Framework is a replicable model for other communities aiming to implement city-wide, coordinated approaches to heart health action. Findings will be disseminated through peer-reviewed publications, community reports, interactive dashboards, and policy briefs.
Background: The digital transformation of healthcare is reshaping how breast cancer patients access and use information, yet little is known about how their digital information behaviours evolve across the illness trajectory. Objective: To explore stage-specific digital health information behaviours and the cognitive, emotional, and social factors shaping decision-making. Methods: Design: A descriptive qualitative study informed by Uncertainty Management Theory (UMT).
Setting: A tertiary hospital in Shanghai, China.
Participants: Fifteen women with breast cancer.
Semi-structured, face-to-face interviews were conducted with purposive sampling across the diagnostic, treatment, and recovery phases; data were analysed using directed and inductive content analysis within a UMT framework. Results: Five themes emerged, highlighting shifts from passive reception to active screening, complementary use of search engines, social media, and AI tools, and the role of trust, emotion, and social context in information acceptance or rejection. Conclusions: Digital health information behaviours are dynamic and stage-specific, suggesting phase-tailored, nurse-led digital support.
Background: Digital physical exercise interventions offer a scalable solution to combat age-related cognitive decline. While various modalities exist, their comparative effectiveness across different cognitive domains remains unclear, necessitating a systematic evaluation to guide clinical practice. Objective: This study aims to evaluate and rank the comparative effectiveness of different digital physical exercise interventions—including immersive VR (IVR_E), non-immersive exergames (NI_ExG), remote exercise (RE), and VR combined with cognitive training (VR_EC)—on global cognition, executive function, and memory function in older adults. Methods: We conducted a systematic review and Bayesian network meta-analysis of randomized controlled trials (RCTs) published between January 1, 2010, and April 30, 2025. Data sources included PubMed, Embase, and Web of Science. Eligible studies involved older adults (aged ≥60 years) and compared digital physical exercise interventions against routine interventions (RI) or non-intervention (NI). The primary outcomes were global cognition, executive function, and memory function. We estimated standardized mean differences (SMDs) and ranked interventions using the surface under the cumulative ranking curve (SUCRA). Results: A total of 41 RCTs involving 2919 participants were included. For global cognition, IVR_E emerged as the most effective intervention (SUCRA=96.6%), followed by NI_ExG (SUCRA=76.4%); both modalities were significantly superior to RI. Regarding executive function, RE (SUCRA=73.8%) and NI_ExG (SUCRA=69.3%) ranked highest. Notably, NI_ExG was the only intervention to demonstrate a statistically significant improvement over RI in this domain, while IVR_E showed no significant advantage. For memory function, IVR_E was the dominant intervention (SUCRA=82.8%) and was the only modality significantly more effective than RI. 
Subgroup analyses further indicated that a cumulative training dose exceeding 1000 minutes is critical for observing significant improvements in memory function. Conclusions: Digital physical exercise interventions significantly enhance cognitive function in older adults, but their optimal application is domain-specific. IVR_E appears most effective for global cognition and memory, likely due to high immersion and standardization. Conversely, NI_ExG and RE are preferable for enhancing executive function, potentially offering more scalable alternatives for home-based care. Future interventions targeting memory improvement should ensure sufficient cumulative training duration. Clinical Trial: PROSPERO CRD42025103014
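The SUCRA rankings reported above summarize posterior ranking probabilities from the Bayesian network meta-analysis. A minimal sketch of the standard SUCRA computation, given a matrix of ranking probabilities (the study's actual estimation software and priors are not stated in the abstract):

```python
import numpy as np

def sucra(rank_probs):
    """SUCRA for each treatment from a (treatments x ranks) probability matrix.

    rank_probs[k, j] = posterior probability that treatment k has rank j+1;
    each row sums to 1. SUCRA is the mean cumulative probability of being
    among the top ranks: 1.0 = certainly best, 0.0 = certainly worst."""
    rank_probs = np.asarray(rank_probs, dtype=float)
    n_ranks = rank_probs.shape[1]
    cum = np.cumsum(rank_probs, axis=1)[:, :-1]  # P(rank <= j) for j < n_ranks
    return cum.sum(axis=1) / (n_ranks - 1)

# Three treatments: always ranked 1st, always 3rd, always 2nd
s = sucra([[1, 0, 0], [0, 0, 1], [0, 1, 0]])  # SUCRA = 1.0, 0.0, 0.5 respectively
```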
Background: Assistive technologies can support independent living among older adults, but uptake is often constrained by attitudes and confidence. The COVID‑19 lockdowns accelerated technology use across all age groups, offering a natural experiment to examine changes in adoption. Objective: This study aimed to examine changing patterns of technology use in older adults, to provide insight into how service providers can support the use of technology for independence and well-being. Methods: Two cross‑sectional surveys were conducted in UK retirement villages, one before the pandemic (2020) and one after lockdowns (2023), to assess technology attitudes and use. Semi‑structured interviews with eight participants in a technology trial scheme provided qualitative insights. Results: Technology adoption increased significantly between 2020 and 2023, with older adults reporting greater confidence and comfort in digital use. Self‑education and informal support from family or friends were the most common pathways to adoption. Age‑related differences in confidence observed in 2020 were no longer apparent in 2023, although gender disparities persisted. Interviewees emphasized usefulness and accessibility as key drivers of sustained engagement. Conclusions: Findings demonstrate that the pandemic catalyzed lasting increases in technology adoption among older adults, including increased confidence and ownership. These results provide evidence for housing providers and policymakers to embed accessible technologies and targeted support in retirement communities, thereby enhancing independence and quality of life in later life.
Social media influencer marketing is a digital advertisement strategy that is growing in popularity. Its use has been documented in consumer purchasing behavior but is yet to be described for clinical trial recruitment. In this tutorial, we describe the steps we followed to develop and deploy a social media influencer advertisement for the recruitment of participants into the Groceries for Residents of Southeastern USA to Stop Hypertension (GoFreshSE) trial. We also provide a preparation framework for other studies that would like to use this modality for their own clinical trial recruitment. We used Cameo Business to identify potentially relevant influencers to hire by selecting influencers who were popular in the 3 geographic areas from which GoFreshSE is recruiting. We narrowed down the list of possible influencers by selecting those with ≥100,000 followers on their respective social media platforms (for a wide reach) and who charged ≤$3,000 per video. We ultimately selected a former football coach, who provided a high-quality video of himself reading an institutional review board-approved script 4 days later. We used open-source, commercially available tools to edit the video and deployed the 44-second video on Facebook and Instagram using Meta’s Advertising platform. Social media influencer marketing through the Cameo Business platform is a rapid mechanism for developing clinical trial influencer recruitment videos.
Background: Sample pooling is an essential strategy for optimizing polymerase chain reaction (PCR) resources during infectious disease outbreaks, especially in their early stages. While high-dimensional hypercube pooling strategies—such as those recently highlighted in Nature—offer superior efficiency in low-prevalence settings, they are difficult to implement in practice. Because human cognition and physical workflows are limited to three-dimensional environments, manual execution of four- or five-dimensional sample arrays is prone to significant operational error. Objective: To develop and evaluate a novel "Ternary Card Hypercube Pooling" strategy that simplifies the implementation of multidimensional pooling, making it accessible for laboratory personnel without compromising mathematical efficiency. Methods: We integrated logic from ternary card games (based on sets of three attributes) to create a visual and physical framework for hypercube pooling. This method maps high-dimensional coordinates onto a simplified "card" system, allowing laboratory technicians to organize and track samples using intuitive pattern recognition rather than complex multidimensional mapping. Results: The Ternary Card method successfully translates the efficiency of hypercube pooling into a user-friendly workflow. It maintains the high performance of traditional hypercubic algorithms—allowing for rapid identification of positive samples in a single step in the majority of cases—while significantly reducing the risk of manual pipetting errors and the need for specialized automated equipment. Conclusions: The Ternary Card Hypercube Pooling strategy bridges the gap between theoretical mathematical efficiency and practical laboratory application. By reducing the complexity of sample handling, this method provides a scalable solution for increasing PCR throughput in response to future pandemics, particularly in resource-limited settings. Clinical Trial: NA
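The hypercube arithmetic that the Ternary Card system makes tangible can be sketched as follows (a hypothetical illustration; the paper's actual card-to-coordinate mapping is not described in the abstract). Each of 3^D samples is addressed by D base-3 digits; each sample joins one pool per dimension, so 3^D samples need only 3·D pools, and a single positive sample is recovered in one step from the intersection of positive pools:

```python
def coords(index: int, d: int) -> tuple:
    """Base-3 coordinates of sample `index` in a d-dimensional, side-3 hypercube."""
    digits = []
    for _ in range(d):
        digits.append(index % 3)
        index //= 3
    return tuple(digits)

def pools_for_sample(index: int, d: int) -> list:
    """Each sample joins one pool per axis: pool id = (axis, coordinate).
    A d-dimensional design tests 3**d samples with only 3*d pools."""
    return [(axis, c) for axis, c in enumerate(coords(index, d))]

def decode_single_positive(positive_pools: list, d: int) -> int:
    """With exactly one positive sample, the positive pool on each axis gives
    one base-3 digit; recombining the digits recovers the sample index."""
    coord = dict(positive_pools)  # axis -> coordinate
    return sum(coord[axis] * 3 ** axis for axis in range(d))

# 4-D example: 81 samples tracked in 12 pools; sample 50 sits at (2, 1, 2, 1)
assert decode_single_positive(pools_for_sample(50, 4), 4) == 50
```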
Background: By its nature, cyberbullying is expected to become more frequent and prevalent as technology continues to advance. This change results in more victims and is challenging to detect, negatively impacting victims' health. Given that adolescents in Jordan face a high rate of cyberbullying, it is essential to understand how this experience affects them. Objective: The current study sought to explore individual experiences and perspectives on cyberbullying victimization among adolescents in order to intervene and reduce future incidences of cyberbullying. Methods: The analysis was based on a cross-sectional study investigating cyberbullying and its mental health consequences among 400 students aged 14-17 years from public schools in central and northern Jordan. Respondents were asked to answer three open-ended questions describing their experiences with cyberbullying if they had experienced cyber victimization as either victims or bully-victims, resulting in 240 responses. Thematic analysis was then used to interpret patterns of shared meaning across participants' narratives related to cyberbullying experiences in cyberspace. Results: Three key themes and several subthemes emerged from this study: (a) effects of cyberbullying, (b) challenges in overcoming its consequences, and (c) elements influencing the severity of cyberbullying experiences. Conclusions: The findings offer valuable insights for creating safer online environments and reducing cyberbullying’s psychological and social harms through appropriate interventions.
This study examined the feasibility and acceptability of implementing a low-cost indoor air quality sensor among a socioeconomically diverse population of parents of children with asthma. Interview and survey data indicated that the use of this tool was both feasible and acceptable, while highlighting affordability as an important consideration for the future deployment of these digital tools.
Background: Recent advances in machine learning enable fully automated pattern recognition and representation learning directly from biomedical signals, offering an alternative to handcrafted, task-specific ECG algorithms. However, demonstrating that such approaches can achieve clinically reliable performance remains challenging due to the limited availability of representative, expert-annotated ECG datasets. In the context of shockable rhythm detection, research is largely constrained to a small number of publicly available databases with limited cohort sizes and annotation inconsistencies. Shockable rhythm detection during sudden cardiac arrest represents a clinically critical and well-defined use case for evaluating the robustness of automated ECG representation learning. Objective: This study aimed to assess whether a deep learning framework with fully automated ECG feature extraction can accurately and reliably classify cardiac conditions, using shockable rhythm detection as an example application, and to evaluate the impact of expert reannotation on model performance. Methods: Four public arrhythmia databases (MIT-BIH Arrhythmia Database, Creighton Ventricular Tachycardia Database, MIT-BIH Ventricular Arrhythmia Database, and American Heart Association Database) were used. ECG waveforms were transformed into spectrograms and analyzed using residual neural networks (ResNets). A balanced dataset of 60,340 augmented 3-second segments was generated to optimize model architecture. The final model (ResNet32) derived shock decisions from blocks of three consecutive 3-second segments, corresponding to a 9-second evaluation window. Performance was assessed using leave-one-subject-out cross-validation on the original, non-augmented dataset. All misclassified blocks were independently reviewed and reannotated by expert cardiologists. 
Results: Across 19,802 evaluated blocks (2,495 shockable), the model achieved an accuracy of 99.68%, sensitivity of 99.63%, and specificity of 99.69%. Expert review revealed that 73% of misclassified blocks differed from the original database annotations. After incorporating expert annotations, performance improved to 99.92% accuracy, 99.76% sensitivity, and 99.87% specificity. Conclusions: This study demonstrates that a deep learning framework with fully automated ECG representation learning can achieve highly accurate classification of shockable rhythms. The algorithm design, including spectrogram-based representation learning and block-based decision-making, promotes clinical robustness by incorporating temporal context, reducing sensitivity to transient rhythms, and mitigating the impact of annotation inconsistencies while aligning with clinical assessment practices. Beyond shockable rhythm detection, the proposed approach has the potential to support automated analysis of additional cardiac conditions, such as QT prolongation and electrolyte imbalances, and to contribute to the generation of standardized, clinically representative, and expert-annotated ECG databases. Such capabilities may facilitate more reliable benchmarking and support future translation of automated ECG analysis into real-world clinical and mobile applications.
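The block-based decision design described in the Methods, a shock decision derived from three consecutive 3-second segments spanning a 9-second window, can be sketched as follows. The abstract does not state how the three segment outputs are combined, so the majority vote below is an assumption for illustration only:

```python
def block_decision(segment_probs, threshold=0.5):
    """Aggregate three consecutive 3-s segment probabilities into one
    9-s block decision. Majority vote over per-segment decisions is an
    assumed rule; the study does not specify the aggregation."""
    votes = sum(p >= threshold for p in segment_probs)
    return votes >= 2  # shockable only if at least 2 of 3 segments agree

# A single transient high-probability segment does not trigger a shock
# decision, which illustrates the robustness argument in the Conclusions:
assert block_decision([0.9, 0.8, 0.2]) is True
assert block_decision([0.6, 0.1, 0.3]) is False
```

Requiring agreement across segments is what incorporates temporal context and reduces sensitivity to transient rhythms, as the Conclusions note.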
Background: Although effective, current CAR-T production methods — centralized, manual, and complex — are cost-intensive, time-consuming, and prone to variability. AIDPATH proposes a decentralized, automated alternative that integrates patient-specific data, optimizes resource use, and potentially improves cell viability, manufacturing efficiency, and patient outcomes. Objective: The aim of this study was to compare AIDPATH-produced CAR-T therapy to both Cilta-Cel and standard of care (SoC) for triple-class refractory multiple myeloma (MM) patients, over a 40-year time horizon in Germany from the hospital perspective. Methods: A partitioned survival model reflecting 3 health states (progression-free disease, progressed disease, and death) was used. The analysis used clinical trial data for Cilta-Cel, real-world data for SoC, and estimated parameters for AIDPATH, due to the developmental status of the platform. The primary outcome was the incremental cost-effectiveness ratio; secondary outcomes included sensitivity and scenario analyses. Results: AIDPATH was dominant compared to both Cilta-Cel and SoC. Most costs for CAR-T therapies were driven by acquisition and adverse events. Sensitivity analyses showed the results were most influenced by discount rates and assumptions about progression-free survival. Scenario analyses, including reduced adverse events and shorter vein-to-vein time for AIDPATH, further supported its cost-effectiveness. Conclusions: This is the first study to assess the cost-effectiveness of a CAR-T product generated with AI support in Germany from the hospital perspective. AIDPATH was found to be a cost-effective alternative to both Cilta-Cel and SoC, making it a promising option for future implementation. While further data are needed, this study provides valuable guidance for health care stakeholders, reimbursement discussions, and future research.
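The primary outcome, the incremental cost-effectiveness ratio (ICER), and the reported "dominant" result can be made concrete. A strategy is dominant when it is both cheaper and at least as effective, in which case no ratio is reported; it is dominated in the reverse case. The figures below are hypothetical, for illustration only, and are not study data:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect (e.g., per quality-adjusted life year gained)."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_cost <= 0 and d_effect >= 0 and (d_cost < 0 or d_effect > 0):
        return "dominant"   # cheaper and at least as effective
    if d_cost >= 0 and d_effect <= 0 and (d_cost > 0 or d_effect < 0):
        return "dominated"  # costlier and no more effective
    return d_cost / d_effect

# Hypothetical figures for illustration only (not study data):
assert icer(100_000, 150_000, 6.0, 5.5) == "dominant"
assert icer(150_000, 100_000, 6.0, 5.5) == 100_000.0
```

In the partitioned survival model, costs and effects per arm are accumulated across the three health states before this comparison is made.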
Background: Auditory discrimination training is widely used to supplement aural habilitation and rehabilitation in individuals with hearing or auditory challenges. Recently, gamification has been introduced to enhance attention and engagement during training. Objective: In this study, we developed and compared two pure-tone auditory discrimination training systems: a game-based system with dual-task gamified activities and a non-game-based control system with identical auditory tasks but without gamified elements. Methods: A three-stage process (design, implementation, and evaluation) yielded beta versions of both systems. In the evaluation stage, eleven young adults (18–30 years) completed usability, user experience, and engagement questionnaires after using each system. Behavioral performance was assessed through mean response time, proportion of correct responses, Weber fraction, the Inverse Efficiency Score, and a novel Auditory Discrimination Performance Index. Results: The game-based system produced significantly higher scores in the perceived questionnaire domains of focused attention, aesthetic appeal, reward, attractiveness, stimulation, and novelty, while no significant differences were found in most auditory discrimination performance metrics. Conclusions: These findings suggest that gamification can substantially improve user experience and engagement without degrading short-term discrimination performance. Longitudinal studies are needed to determine whether these experiential advantages translate into long-term auditory training benefits and how sound features may improve performance in other auditory tasks.
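Of the behavioral metrics listed, the Inverse Efficiency Score (IES) has a conventional definition: mean correct-response time divided by the proportion of correct responses, combining speed and accuracy into a single number (lower is more efficient). A minimal sketch of that convention follows; the study's novel Auditory Discrimination Performance Index is paper-specific and not reproduced here:

```python
def inverse_efficiency_score(mean_rt_ms, prop_correct):
    """Conventional IES: mean correct-response time (ms) divided by
    the proportion of correct responses; lower means more efficient."""
    if not 0 < prop_correct <= 1:
        raise ValueError("proportion correct must be in (0, 1]")
    return mean_rt_ms / prop_correct

# A slower but more accurate listener can still be more efficient:
assert inverse_efficiency_score(800, 0.95) < inverse_efficiency_score(700, 0.80)
```

The IES thereby guards against reading a speed-accuracy trade-off (fast but sloppy responding) as a genuine performance difference between the two systems.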
Background: Co-design ensures cultural safety of health interventions for Aboriginal and/or Torres Strait Islander communities. However, an intervention developed with one Indigenous community may not be suitable for another geographically and culturally distinct community. Objective: This study aimed to culturally adapt content and features of a mobile health (mHealth) application co-created by communities in one Australian state to better meet the needs of mothers and caregivers of Aboriginal and/or Torres Strait Islander children aged 0-18 years and health professionals in another state. Methods: The study followed the stages of the cultural adaptation stepwise model by Barrera et al. Mothers/caregivers of Aboriginal and/or Torres Strait Islander children aged 0-5 years and their health professionals were recruited from multiple community sites. Data were collected through culturally appropriate yarning circles or interviews facilitated by Aboriginal research staff. Qualitative data were transcribed and inductively analysed to generate themes. The feedback was translated into practical changes that were applied to the mHealth application. Results: Data saturation was achieved after yarning circles with 21 women and seven health professionals. Nine themes were generated from mothers/caregivers’ data: 1) cultural relevance and sensitivity, 2) linking with culturally appropriate services, 3) use of lay language and more audio-visual content, 4) concerns with mobile data usage, 5) perceptions about the current content of the Jarjums app, 6) raising children, 7) safety, 8) health and wellbeing of mothers and caregivers, and 9) coordinating health care. Four themes were generated from data collected from health professionals: 1) favourable features of the app, 2) potential barriers to the use of the app, 3) healthcare system access issues, and 4) recommended modifications.
Based on feedback received, the mHealth application changes included the addition of information on healthy relationships and raising children, more visual content, and localized service directories for different categories of care and support. Conclusions: A co-designed, culturally sensitive mHealth application is likely to support Aboriginal and/or Torres Strait Islander families facing health disparities due to the disruption of Indigenous culture, and provides a foundation for a potential clinical trial for effectiveness evaluation and wider implementation.
Background: Lung-protective ventilation (LPV) reduces complications of mechanical ventilation, yet adherence in intensive care units (ICUs) remains inconsistent. Digital dashboards may support LPV by improving situational awareness and supporting protocol adherence. However, adoption of such tools in high-acuity clinical environments depends on a range of cognitive, professional and contextual determinants. The Measurement Instrument for Determinants of Innovations (MIDI) provides a validated framework to systematically assess these factors. Objective: To identify determinants influencing adoption of a newly piloted mechanical ventilation dashboard in the ICU using the MIDI framework. Methods: We conducted a single-center, cross-sectional evaluation among ICU healthcare professionals during a dedicated survey period within a pilot introduction of a mechanical ventilation dashboard at Amsterdam UMC. Participants completed a structured questionnaire consisting of 24 MIDI items adapted to the ICU context rated on a 5-point Likert scale (completely disagree to completely agree), supplemented by open-ended questions on perceived barriers and facilitators to its use. Determinants were classified as facilitators when ≥80% of respondents selected “agree” or “completely agree” and as barriers when ≥20% selected “disagree” or “completely disagree”. Open-ended responses were analyzed using a general inductive thematic approach. Results: A total of 71 completed questionnaires were analyzed, including responses from nurses, physicians, intensivists, ventilation specialists, and researchers in mechanical ventilation. Six determinants met criteria for facilitators: outcome expectations; self-efficacy; procedural clarity; low complexity; correctness; and observability. Two determinants met criteria for barriers: relevance for client; and professional obligation.
Analysis of open-ended responses highlighted perceived barriers such as additional workload, the need for an extra device, overlap with existing systems, and limited role-specific relevance. Facilitators included improved situational overview, educational value, easier trend monitoring, and increased efficiency. Conclusions: This evaluation identified key determinants influencing adoption of a mechanical ventilation dashboard in ICU. While the dashboard was generally perceived as useful and easy to understand, adoption was shaped by determinants related to workflow integration, role-specific relevance, and professional responsibility. These findings suggest that successful introduction of digital clinical support tools in intensive care requires attention not only to technical design, but also to how such tools align with users’ roles, daily work processes, and shared clinical responsibilities. Systematic assessment of determinants provides actionable insight into adoption of digital decision-support tools in high-acuity care settings. Clinical Trial: Not applicable; this study was not a registered clinical trial.
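The facilitator/barrier classification rule stated in the Methods maps directly onto a small function: a determinant is a facilitator when at least 80% of respondents choose "agree" or "completely agree", and a barrier when at least 20% choose "disagree" or "completely disagree". A sketch of that rule (the response labels below are illustrative):

```python
def classify_determinant(responses):
    """Classify one MIDI determinant from 5-point Likert responses,
    using the study's thresholds: facilitator if >=80% agree or
    completely agree, barrier if >=20% disagree or completely
    disagree. A determinant may in principle meet neither criterion."""
    n = len(responses)
    agree = sum(r in ("agree", "completely agree") for r in responses)
    disagree = sum(r in ("disagree", "completely disagree") for r in responses)
    labels = []
    if agree / n >= 0.80:
        labels.append("facilitator")
    if disagree / n >= 0.20:
        labels.append("barrier")
    return labels or ["neither"]

assert classify_determinant(["agree"] * 8 + ["neutral"] * 2) == ["facilitator"]
assert classify_determinant(["disagree"] * 2 + ["neutral"] * 8) == ["barrier"]
```

Note the asymmetry in the thresholds: a determinant needs broad endorsement to count as a facilitator but only a minority of negative responses to count as a barrier.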
Background: Occupational stress is a pressing health issue for academic staff, particularly in health sciences faculties where the demands of teaching, research, clinical supervision, and administrative responsibilities are significant. Extended periods of job-related stress can result in detrimental psychological and physical effects. Despite this, non-pharmacological stress management techniques, such as hydrotherapy, are not commonly employed or extensively studied within South African higher education institutions. Objective: The purpose of this study is to evaluate the level of knowledge and awareness that academic professionals have regarding hydrotherapy as a technique for managing work-related stress. Furthermore, it seeks to explore the changes in specific physiological stress-related variables among the Health Sciences faculty at Durban University of Technology. Methods: The study will adopt a quantitative longitudinal study design with a pre/post evaluation structure. Health Sciences academic professionals who satisfy the study's inclusion criteria will be recruited through purposive sampling. Data will be gathered using structured questionnaires and physiological assessments conducted both before and after the hydrotherapy intervention. Results: The Durban University of Technology Institutional Research Ethics Committee has granted ethical approval for the study protocol. The Faculty of Health Sciences has also provided institutional permission. With institutional approval secured, the recruitment of participants and the preliminary testing of data collection tools are set to begin in March 2026. Data will be analysed using SPSS version 29, employing both descriptive statistics and inferential analyses to evaluate changes in physiological variables before and after the intervention. The findings will be displayed in tables and graphs. 
Conclusions: This protocol describes a study examining the use of hydrotherapy as an additional method for addressing work-related stress among academic professionals in the Health Sciences. The results are anticipated to enhance evidence-based strategies for occupational wellness and guide the incorporation of non-drug stress management techniques in higher education settings.
Background: FDA-cleared artificial intelligence (AI) triage tools for intracranial hemorrhage (ICH) are increasingly deployed in clinical radiology. In real-world practice, perceived utility may depend not only on diagnostic performance but also on workflow friction, false-alarm burden, and calibrated trust when AI outputs conflict with radiologist interpretation. Objective: To characterize radiologists’ perceptions, trust calibration, and self-reported vigilance behaviors when using an FDA-cleared ICH AI triage tool in a national teleradiology network and to evaluate differences by neuroradiology subspecialty training. Methods: We conducted an anonymous cross-sectional survey of radiologists in a national teleradiology practice who had access to an FDA-cleared ICH detection AI overlay during routine noncontrast head CT interpretation. Survey domains included perceived reliability and usefulness, false-alarm burden, workflow integration, medicolegal concerns, and items designed to probe self-reported vigilance behaviors consistent with automation complacency. Responses used a 5-point Likert scale (Strongly agree, Agree, Neutral, Disagree, Strongly disagree). Results are summarized as agreement proportions (“agree”/“strongly agree”). We evaluated subgroup differences between neuroradiologists and non-neuroradiologists using Fisher exact tests. To reduce risk of spurious findings from multiple comparisons, we prespecified a primary endpoint and treated other items as exploratory with false discovery rate (FDR) control using the Benjamini–Hochberg procedure. Optional free-text responses were analyzed qualitatively to identify recurring themes. Results: Sixty-five radiologists responded (23 neuroradiologists; 42 non-neuroradiologists). Only 18.5% (12/65) agreed that false-positive alerts were infrequent enough to be acceptable. 
Trust was highly conditional: 50.8% (33/65) trusted the AI when it agreed with their interpretation, whereas only 3.1% (2/65) trusted it when it conflicted. The primary endpoint—agreement that false-positive workload outweighed benefits—was endorsed by 33.9% (22/65) overall and was more common among neuroradiologists than non-neuroradiologists (52.2% vs 23.8%; unadjusted P=.029). However, after FDR correction across exploratory items, no subgroup differences remained statistically significant. Self-reported vigilance reduction on AI-negative outputs was uncommon (6.2% overall; 0% neuroradiologists; 9.5% non-neuroradiologists). Free-text feedback emphasized artifact-driven false positives, delayed or inconsistent AI availability, consult burden, and medicolegal concerns. Conclusions: In a national teleradiology environment, radiologists reported substantial false-alarm burden and highly conditional trust when using an FDA-cleared ICH AI triage tool. Self-reported vigilance reduction was uncommon but present in a minority of users. Human factors–oriented optimization—including specificity improvements, earlier availability, better localization, and workflow-aware triage routing—may improve acceptance and perceived utility.
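The false discovery rate control named in the Methods, the Benjamini–Hochberg step-up procedure, sorts the m p-values and rejects the hypotheses up to the largest rank k with p_(k) ≤ (k/m)·alpha. A minimal sketch shows how a nominally significant comparison can fail to survive correction when it sits among several larger exploratory p-values (the p-values below are illustrative, not the study's):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up FDR procedure: find the largest
    rank k with p_(k) <= (k/m) * alpha and reject hypotheses with
    ranks 1..k. Returns reject/keep flags in the input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# A nominally significant P=.029 among five comparisons is not
# rejected at FDR 0.05, because 0.029 > (1/5) * 0.05:
assert benjamini_hochberg([0.029, 0.2, 0.5, 0.6, 0.8]) == [False] * 5
```

This mirrors the reported finding that the unadjusted P=.029 subgroup difference did not remain statistically significant after FDR correction across the exploratory items.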
Background: The small intestine is central to nutrient digestion and absorption, while its epithelial barrier and resident gut microbiota maintain intestinal integrity and prevent passage of antigens, toxins and partially digested nutrients into the circulation. Evidence shows that lifestyle factors (such as sedentary behaviour, ageing, obesity) and diets high in refined carbohydrates and saturated fats disrupt the gut microbiota, impair intestinal barrier function and promote the phenomenon frequently termed “leaky gut”. In turn, enhanced intestinal permeability may allow lipopolysaccharide (LPS) and other luminal antigens to enter the bloodstream, trigger chronic immune activation and low-grade inflammation, raise insulin secretion and ultimately contribute to insulin resistance and elevated blood glucose. This process may precede the onset of prediabetes, the intermediate metabolic state before full-blown Type 2 Diabetes Mellitus (T2DM), and thus represents a potentially critical window for prevention. Objective: This systematic review protocol will synthesise the published evidence on the relationship between gut barrier dysfunction, microbiota dysbiosis and progression from normal glucose tolerance through prediabetes to T2DM. Methods: This protocol was developed following the PRISMA-P 2020 reporting guidelines. Literature searches will be conducted across Google Scholar, PubMed, Scopus, and ScienceDirect. Eligible studies will include published prospective observational, case-control, and cross-sectional research involving non-diabetic, prediabetic, and type 2 diabetic populations. The inclusion criteria will encompass prediabetic participants aged 18 years and older who were not previously diagnosed with any small bowel disorders. Patients diagnosed with gestational diabetes, type 1 diabetes, or conditions that disturb the intestinal barrier will be excluded from the study.
Eligible studies will compare groups such as T2DM versus normoglycemic individuals, prediabetes versus normoglycemic individuals, or T2DM versus prediabetes. Only studies that report the association between leaky gut biomarkers (such as IFABP and zonulin) and the onset of prediabetes or T2DM will be considered.
The extracted data will be independently reviewed by a second reviewer, and any discrepancies will be addressed and resolved with input from a third reviewer. The risk of bias will be assessed using the Downs and Black checklist. Meta-analysis will be conducted using Review Manager version 5.4 to generate forest plots; SPSS will be used to generate funnel plots; and the overall quality of evidence will be evaluated using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework. Results: This systematic review will utilize publicly available data collected following the publication of this protocol. The protocol aims to guide the identification and analysis of studies investigating the relationship between leaky gut and the onset of prediabetes. Conclusions: The findings derived from this review will also help inform future research to be conducted in Durban, South Africa.
Background: Ecological momentary assessment (EMA) enables real-time, repeated evaluation of participants' emotions, thoughts, and behavioral patterns in natural settings. It effectively mitigates the retrospective bias inherent in traditional surveys and facilitates a longitudinal understanding of health status. However, its feasibility, practicality, and methodological details for monitoring and promoting maternal health remain unclear. Objective: To conduct a scoping review of studies on the application of EMA in maternal health management, providing a reference for future research and further promotion of maternal and infant health. Methods: Using the Joanna Briggs Institute (JBI) scoping review guidelines as the methodological framework, we searched the Web of Science, PubMed, CINAHL, Embase, Cochrane Library, China National Knowledge Infrastructure (CNKI), China Biomedical Literature Database, Wanfang Database, and VIP Database. The search covered publications from the inception of each database to December 2025, and the included studies were subjected to a comprehensive analysis. Results: The search yielded 2,989 publications, of which 14 were ultimately included. The findings were summarized across three dimensions: study design characteristics (publication year, country, and study design features, such as sample size, study population, and outcome measure type); EMA data collection methods (EMA schedule characteristics, such as monitoring cycle and duration, and data sampling methods, such as fixed-time, random-time, or event-based sampling); and EMA response-related outcomes (participation rate and response rate). Conclusions: EMA effectively mitigates the recall bias inherent in traditional assessment methods, offering novel approaches to enhance the quality of maternal health management.
This enables longitudinal monitoring of maternal experiences in natural settings, facilitating the early identification of abnormal physiological, psychological, and behavioral issues during pregnancy and postpartum. This allows timely intervention to safeguard maternal and infant health. Future research should refine EMA study designs and implementation formats to fully leverage their potential in promoting maternal health and personalized interventions for maternal-infant wellness. Clinical Trial: OSF Registries 10.17605/OSF.IO/GMFKZ
Background: Digital interventions for childhood obesity prevention have potential to support healthy lifestyle behaviors, but real-world effectiveness is often limited by low engagement and poor alignment with children’s developmental needs and family contexts. Co-creation with end users and clinical stakeholders can generate actionable requirements to inform the design of age-tailored, acceptable, and scalable mobile health (mHealth) solutions. Objective: This study aimed to (1) elicit user requirements for a pediatric mHealth app to support healthy lifestyle behaviors relevant to overweight/obesity prevention and (2) examine how requirements differ across child age groups and stakeholder types (children/adolescents, parents, and health professionals). Methods: A total of 113 children and adolescents, 47 parents and 13 health experts participated in co-creation workshops as part of the BIO-STREAMS project. Children in each age group participated in two 90-minute workshops that were conducted between November 2024 and March 2025 across five European countries. Participants responded to questions regarding healthy lifestyle behaviors and were subsequently invited to articulate their vision for a potential health application. Two researchers analyzed the data using a thematic analysis approach. Results: Stakeholders described mHealth requirements that clustered into distinct but complementary domains. Children emphasized (1) practical health guidance (e.g., food and activity ideas), (2) personalization and goal support, (3) engaging and interactive features (e.g., gamification and feedback), and (4) accessible learning resources. There were clear age differences: younger children preferred concrete, routine-based guidance, while older adolescents more often referenced balanced lifestyle concepts, mindful decision-making, and mental well-being–related support.
Parents prioritized (1) guidance and coaching features, (2) tracking that is flexible and not overly burdensome, (3) usability and comfort considerations (including oversight preferences), and (4) credible information sources and functionality expectations for family use. Health professionals highlighted (1) clinically meaningful monitoring and communication, (2) stigma-sensitive and developmentally appropriate feedback, and (3) considerations for managing and governing digital health platforms used in pediatric obesity prevention. Conclusions: The presented co-creation with children, parents, and clinicians produced actionable requirements for designing an age-tailored pediatric mHealth intervention for obesity prevention and to support relevant healthy lifestyle behaviors. Findings support a multi-actor approach (child-, parent-, and health expert-relevant views), strong personalization, and engagement-focused interaction design, while addressing usability, burden, and appropriate oversight to facilitate adoption in real-world family and clinical contexts. Clinical Trial: The study was registered at ISRCTN (ISRCTN44876661, registered on 23/04/2025)
Background: Type 2 diabetes (T2D) and high blood pressure (HBP) are major public health challenges worldwide, leading to serious complications, disability, and mortality. In Tunisia, the contribution of civil society organizations (CSOs) to the prevention and management of these non-communicable diseases (NCDs) remains limited. Objective: This study aimed to assess the epidemiological situation of T2D and HBP in North-East Tunisia and to examine the added value of CSO involvement in research and advocacy. Methods: A community-based participatory research approach was implemented, coordinated by the Science Shop at the Institut Pasteur de Tunis, in partnership with the Regional Association of Diabetics in Zaghouan. Epidemiological data were collected from 420 volunteer participants to estimate the prevalence of T2D and HBP in northeastern Tunisia (Zaghouan region) and to identify associated risk factors. In parallel, members of civil society organizations (CSOs) actively contributed to identifying community priorities, awareness gaps, and barriers to effective disease management. Results: Findings revealed a concerning increase in the prevalence of T2D and HBP in the region, emphasizing the urgent need for targeted interventions. The engagement of CSOs strengthened the relevance and impact of research, improved community participation, and facilitated dialogue with policymakers. Conclusions: This study underscores the pivotal role of CSO–research partnerships in bridging science and society, promoting evidence-based health actions, and enhancing policy responses to NCDs in Tunisia.
Background: Rapid digital transformation is reshaping health care worldwide. To ensure that digital technologies improve care quality and support national priorities, health systems need systematic digital health strategic planning rather than technology‑first or vendor‑driven decisions. Saudi Arabia’s Vision 2030 calls for the localization of health innovation and digital capability. King Saud University Medical City (KSUMC) is a large academic medical centre seeking to institutionalize innovation and digital health capabilities. Objective: This study aimed to develop a strategic framework for a digital health innovation hub at KSUMC. The framework aligns with Vision 2030’s localization goals and draws on global digital health strategic planning guidance to support innovation, knowledge transfer and intellectual‑property (IP) commercialization. Methods: A qualitative case study was undertaken from April to June 2025 using semi‑structured interviews with 14 purposively sampled stakeholders from clinical, administrative and innovation roles at KSUMC. Data were coded thematically using an interpretivist approach informed by diffusion of innovation theory, the Context‑Actor‑Mechanism‑Outcome (CAMO) lens and systems thinking. Thematic findings were interpreted in light of global digital health strategic planning frameworks, including the World Health Organization (WHO) Global Digital Health Strategy and the Centers for Disease Control and Prevention (CDC) Global Digital Health Strategy. 
Results: Five interrelated themes influenced digital health innovation: (1) Leadership and culture: senior leadership supported innovation but bureaucratic culture slowed experimentation; (2) Resources and operations: high clinical workload, fragmented information systems and insufficient funding constrained digital health initiatives; (3) Knowledge exchange: informal networks existed, yet there were few structured mechanisms for knowledge transfer and IP management; (4) Incentives and capacity: staff were motivated by recognition and professional development but lacked protected time and incentives to engage in digital innovation; (5) External policy environment: Vision 2030 provided momentum for digital health, but reliance on external consultants risked undermining internal capability. These themes informed a Strategic Planning Framework that emphasizes leadership‑driven culture change, cross‑sector partnerships, systematic knowledge‑transfer mechanisms, ethical IP policy, and sustainable funding. Conclusions: Digital health transformation requires more than the acquisition of technology; it demands systematic strategic planning, continuous stakeholder engagement and alignment with national policy. The proposed framework for KSUMC prioritizes leadership, governance, capacity building, knowledge transfer and IP management. By integrating WHO guidance on national digital health strategies, such as multistakeholder leadership, adaptable infrastructure, and robust governance, with agile planning methods, the framework supports both periodic and continuous strategic planning. This case highlights the need for academic medical centres in emerging economies to adopt evidence‑based strategic planning to harness digital health opportunities and achieve sustainability.
Background: mWorks is a co-designed web-based self-management intervention developed to empower persons with common mental disorders on sick leave during their return-to-work process. However, a lack of knowledge regarding how the delivery and receipt of mWorks occur in practice impedes further progress. Process evaluations conducted according to the Medical Research Council framework provide a format for examining the contextual factors influencing implementation, how mWorks was delivered in practice, and how service users and professionals experienced and responded to the intervention. Objective: To evaluate the process of implementing mWorks, specifically focusing on assessing the intervention's delivery in relation to the context, implementation process, and impact mechanism. Methods: A single case study design was used. The case was bounded by a 10-week delivery period in a primary and specialist mental health service context. During this period, return-to-work professionals (n=2) and service users (n=6) collaborated to initiate mWorks usage. Both qualitative and quantitative methods were used to triangulate multiple data sources. Results: The pandemic and mental health problems posed contextual barriers, particularly during recruitment. However, the legitimacy of mWorks facilitated overall implementation. The delivery was performed according to plan with minimal adaptations. All users adhered to the intervention, and dialogue meetings were highly valued. mWorks was used flexibly according to users’ needs, both during sick leave and at work. The potential impacts included a transformative process for users, fostering acceptance, self-esteem, self-compassion, and a sense of control. It also had the potential to prevent mental ill-health, transform negatives into positives, facilitate disclosure of mental health problems, and support goal setting. 
The use of quantitative measures for empowerment, engagement, self-efficacy, depression stigma, and quality of life proved feasible and supported the assumptions and direction of results. Conclusions: The recruitment stage of the implementation program encountered significant contextual barriers. However, once the delivery stage began, the implementation of mWorks proved to be feasible. Despite the limited scope of this study with a small number of participants, the triangulation of data suggests that both users and professionals benefited from mWorks.
Background: Degenerative meniscus findings are common in middle-aged and older adults, and current guidelines favor nonoperative care. As patients increasingly turn to portal systems to view imaging results and communicate with their physician, patient-facing wording may shape downstream treatment preferences and expectations. Objective: To determine whether subtle differences in physician message framing about an identical degenerative meniscus tear influence: preferred management; expectations for improvement with conservative therapy; and satisfaction when a physician recommends a different plan. Methods: A prospective, randomized, cross-sectional 37-question survey was distributed to U.S. lay adults recruited via Amazon Mechanical Turk. Respondents were presented with a controlled vignette, putting them in the position of a 60-year-old patient with knee pain due to a degenerative meniscus tear. Participants were randomized in a 1:1:1 fashion into three physician portal-message framing groups: Neutral, Degenerative, Damage. Outcomes were preferred next step in treatment, expected improvement with physical therapy, and retained satisfaction in a follow-up scenario in which the treating physician disagreed with a respondent’s treatment preference. Results: Of the 266 completed responses, 195 were included for analysis (Neutral n=67; Degenerative n=63; Damage n=65). Treatment preferences differed significantly across groups (χ²(2) = 6.105, p = 0.047). The Damage group was more likely to prefer aggressive interventions (n/N=48/65, 73.8%) compared to the Neutral (n/N=36/67, 53.7%) and Degenerative groups (n/N=37/63, 58.7%). Damage framing significantly increased the odds of a respondent preferring invasive options (OR 2.20, 95% CI 1.15-4.23; p=0.012). Expectations for physical therapy success differed significantly (χ²(4)=12.27, p=0.015), with the Damage group being most pessimistic about conservative care versus the Neutral and Degenerative groups. 
Retained satisfaction under physician disagreement did not differ by framing group (χ²(6)=6.68, p=0.351), but did differ significantly by initial treatment preference (p=0.028), and was lowest among respondents preferring steroid injection. Conclusions: Patient-portal message framing about an identical meniscal MRI finding significantly shifted management preferences and confidence in conservative therapy. Avoiding pathologizing language may help support guideline-concordant care and reduce pessimism toward beneficial conservative therapy.
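The group comparisons above rest on a χ² test of independence and an odds ratio with a confidence interval on the log-odds scale. A minimal stdlib sketch of both computations for a 2×2 table, using hypothetical counts rather than the study's data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Hypothetical counts: invasive vs conservative preference by framing group.
chi2 = chi2_2x2(40, 25, 30, 40)
or_, low, high = odds_ratio_ci(40, 25, 30, 40)
```

In practice libraries such as scipy.stats.chi2_contingency handle this, with expected-count checks and p-values; the formulas above are the underlying arithmetic.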
Background: Telehealth and artificial intelligence are increasingly used in specialized palliative outpatient care, offering potential benefits but facing challenges, particularly regarding user acceptance. To date, there is a lack of knowledge about the extent to which digital health applications may be transferable between different areas of palliative care. Objective: This study evaluates the transferability of concrete needs, expectations, and concerns regarding telehealth and artificial intelligence from specialized outpatient palliative care for children to specialized outpatient palliative care for adults, using the example of the PalliDoc Mobile App. Methods: Two specialized outpatient palliative care teams for adults using PalliDoc (a pediatric-origin mobile app) were surveyed employing a sequential mixed-methods approach to conduct the needs assessment: a focus group study with quantitative needs prioritization, followed by a questionnaire survey on user acceptance. A total of 25 members from both teams, representing urban and rural care areas in Germany, participated in the focus groups; 17 responded to the questionnaire. Results: A total of 13 needs were identified within the examined care teams for adults, with functions focusing on voice input and output as well as organizational tasks being the highest priority. Unlike in pediatrics, video contacts, telemetry and electronic patient-reported outcome measures are neither currently used nor intended to be used in the future. The identified concerns predominantly addressed the potential risk of artificial intelligence–assisted documentation altering or distorting healthcare professionals’ perception of patient-related information. Conclusions: Cross-setting telehealth applications may work but are not a “plug-and-play solution”. Needs and concerns in each setting should be addressed to guarantee customized services. 
Clinical Trial: This study is registered in the German Register of Clinical Trials under the ID DRKS00036054 (https://www.drks.de/search/de/trial/DRKS00036054/details).
Background: Ask any educator, and they will respond that engagement is an important factor in their teaching. However, engagement is a complex, multidimensional construct comprising behavioural, cognitive, emotional, and agentic dimensions. Despite growing interest in this area, the conceptualisation and measurement of engagement in medical education remain inconsistent. Objective: This systematic review aims to examine how engagement is defined, conceptualised, and measured in studies involving medical students. Methods: A systematic literature search was conducted in February 2025 across five databases for peer-reviewed studies published within the last decade. Studies were included if they focused on medical students, collected original data, and measured engagement within the context of a medical curriculum. Data extraction and screening were performed independently by two reviewers following PRISMA guidelines. Studies were analysed for their conceptual framework, dimensions of engagement measured, data collection methods, and study design. Results: A total of 26 studies that met the eligibility criteria were included in this systematic review. Most studies measured behavioural (n=21), cognitive (n=19), and emotional engagement (n=17), while agentic engagement was least frequently measured (n=4). Most studies employed a quantitative approach, using survey instruments (n=14) and engagement metrics (n=5) to measure engagement, while a small number of studies adopted a qualitative approach, including interviews (n=4) and observations (n=4) to measure engagement. Engagement was mainly measured as a multidimensional construct, but some studies treated it as a unidimensional construct. Conclusions: Engagement remains inconsistently and often poorly defined, as evidenced by the exclusion of more than half of initially screened studies for lacking rigorous measurement of engagement. 
The rise of technology-driven interventions has led to an increasing interest in ensuring that students are engaged in learning to achieve the desired learning outcomes successfully. Future research should systematically incorporate behavioural, cognitive, emotional, and agentic engagement dimensions to advance understanding and enhance educational practices. Clinical Trial: Not applicable
Background: Hospital admission is associated with increased sedentary behavior and low levels of physical activity. Hospitals have developed several strategies and interventions to address this unwanted inactivity and increase patient movement during admission. Self-monitoring of physical activity is a promising approach to support activity during hospital stays. Objective: This study investigated whether providing patients with real-time physical activity feedback, compared with no real-time feedback, supported patients in maintaining activity levels in the cardiology ward. Methods: A Hybrid Type 2 interrupted time series design was applied. In Phase 1 (24 weeks), patients wore accelerometers (PAM AM400) with data visible only to healthcare professionals. In Phase 2 (24 weeks), self-monitoring was introduced using a ward-based screen that provided patients real-time feedback on daily physical activity. Implementation outcomes were evaluated within the RE-AIM framework, with “Maintenance,” defined as daily physical activity trends over time, serving as the primary outcome. The other RE-AIM dimensions (Reach, Effectiveness, Adoption, and Implementation) were assessed as secondary outcomes. Results: A total of 159 patients were included (75 in Phase 1, 84 in Phase 2). Daily physical activity levels were expressed as active minutes per day. No significant immediate change in daily activity occurred at the start of Phase 2 versus the end of Phase 1 (β = –0.127, p = 0.811). In Phase 1, physical activity declined statistically significantly over time (β = –0.002, p < 0.001; ~6% decrease per month). In Phase 2, following introduction of the self-monitoring intervention, this decline was no longer observed, and activity levels were maintained. A significant phase interaction (β = 0.002, p = 0.027) confirmed stabilization of physical activity levels in Phase 2. Secondary RE-AIM outcomes did not differ between phases. 
Conclusions: The decline observed when only healthcare professionals accessed the data was no longer present once patients could monitor their own physical activity. Although seasonal influences cannot be excluded, these findings suggest that patient self-monitoring may support the maintenance of physical activity during hospital stays. Sustainability is complex, and determining the effect of patient self-monitoring alone remains challenging. Larger studies are needed to confirm these results. Clinical Trial: Trial registration was not required.
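The interrupted time series analysis above compares activity trends before and after the introduction of self-monitoring. The per-phase trend is just an ordinary least-squares slope; the sketch below uses noise-free synthetic data (not the study's measurements) to show a declining Phase 1 and a flat Phase 2:

```python
def ols_slope(ts, ys):
    """Ordinary least-squares slope of y on t."""
    tm = sum(ts) / len(ts)
    ym = sum(ys) / len(ys)
    num = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
    den = sum((t - tm) ** 2 for t in ts)
    return num / den

days = list(range(10))
phase1 = [10 - 0.2 * t for t in days]  # synthetic decline in active min/day
phase2 = [8.0 for _ in days]           # synthetic plateau after the intervention

slope1 = ols_slope(days, phase1)  # negative: activity falling over time
slope2 = ols_slope(days, phase2)  # ~0: activity maintained
```

A full segmented regression additionally models the level change and the slope change at the phase boundary; the interaction term reported in the Results corresponds to that slope change.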
Background: The increasing adoption of Artificial Intelligence (AI) in healthcare, particularly within Clinical Decision Support Systems (CDSSs), is transforming clinical practice and decision-making. Although AI-CDSSs hold the potential to improve diagnostic accuracy, operational efficiency, and patient outcomes, their implementation also creates ethical, technical, and regulatory concerns, affecting healthcare professionals’ willingness to adopt these systems. Objective: Building on a value-based perspective, the study integrates the Unified Theory of Acceptance and Use of Technology (UTAUT) framework as determinants of perceived benefits and a risk-based perception model as determinants of perceived risks to develop a unified model exploring clinicians’ behavioural intention to adopt AI-enabled CDSSs. Methods: A self-administered cross-sectional survey was distributed to licensed healthcare professionals to examine how validated factors influence perceptions of risks and benefits. Responses were collected from 215 clinicians across Italy and the United Kingdom. Recruitment was undertaken using email invitations, attendance at academic conferences, and direct approaches within healthcare settings. Results: Perceived Benefits were found to be the strongest positive predictor of clinicians’ intentions to use AI-enabled CDSSs (β=.45, p<.001), whereas perceived risks had a significant negative effect (β=-.18, p=.002). Performance Expectancy and Facilitating Conditions significantly increased the adoption intentions, whereas Effort Expectancy and Social Influence were not significant. Among the risk antecedents, Perceived Performance Anxiety, Communication Barriers, and Liability Concerns were significant predictors of Perceived Risks. The model explained 46% of the variance in the intention to use AI-enabled CDSSs. 
Conclusions: The findings offer theoretical and practical insights into human factors influencing AI adoption in clinical practice, underscoring the importance of value alignment, professional accountability and institutional readiness, and highlighting the need to foster clinician trust in AI tools beyond the boundaries of technical performance.
Background: The COVID-19 pandemic significantly increased adoption of virtual care, including patient-to-provider secure messaging. However, this surge has heightened physician workload and burnout and has raised concerns about message appropriateness and liability among physicians. Objective: This study characterizes secure messaging use in Canadian hospital-based specialty care and explores the experiences of healthcare providers, administrative staff, and patients. Methods: We employed a convergent mixed-methods design, analyzing aggregated electronic health record (EHR) usage data and qualitative interview data. The study was conducted at Women’s College Hospital in Toronto, Canada, across four high-messaging specialty clinics: mental health, rheumatology, dermatology, and surgery. Quantitative data (October 2019 to October 2022) detailed message volumes, response patterns, and timing. Semi-structured interviews explored messaging workflows, barriers, and facilitators. Data were analyzed separately, then merged to identify areas of convergence and divergence. Results: Message volumes surged post-pandemic, particularly in mental health. The monthly message rate per patient varied, with higher rates in mental health and rheumatology. Physicians reported negative experiences due to increased workload, lack of compensation, and inadequate integration into clinical workflows. High patient-to-physician ratios and limited nursing support for message triage were associated with a poor messaging experience. Patients and administrative staff valued messaging for its convenience, accessibility, and efficiency. A key finding was the poor engagement of all user groups in decisions regarding messaging implementation. Conclusions: The study highlights a disconnect between the high perceived value of secure messaging for patients and administrative staff and the negative experiences of physicians. 
Successful implementation requires thoughtful integration into care models, clear guidelines for patient use, and proper triage and "channel management" to guide patients to appropriate visit modalities. Future research should explore triaging algorithms as part of a digital front door, specialty-specific variations and the crucial role of nursing staff in message management.
Background: Comparative genomics is essential for understanding evolutionary relationships, yet visualizing and analyzing circular genomes like plasmids and genomes of mitochondria or chloroplasts remains challenging. Current software often relies on fragmented, single-algorithm approaches that struggle to efficiently capture the complex architecture of non-coding regions and structural rearrangements. Objective: To address these limitations, we developed the Circular Genome Comparison Tool (CGCT), a hybrid platform designed to integrate global and local alignment strategies. This tool aims to provide a robust, interactive visualization of circular genomes, resolving both large-scale synteny and fine-scale nucleotide divergence in coding and non-coding regions. Methods: CGCT is implemented as a stand-alone Python-based desktop application that requires no external runtimes or internet connection. It employs a novel hybrid pipeline combining an improved progressiveMauve for global synteny, SibeliaZ for local topological adjacency, and BLASTn for sequence sensitivity, all accessed through an interactive visual interface for dynamic analysis and high-resolution export. Results: Validation on mitochondrial, plasmid, and chloroplast datasets showed CGCT effectively "sutures" circular topologies, and reveals hidden "pseudogene-gene graveyards" and ORFs not properly recognized by BLAST+. The hybrid approach resolved complex features like the mitochondrial D-loop and deep evolutionary homology in plant chloroplasts where single-algorithm methods were frequently insufficient. Conclusions: CGCT bridges the gap between global structure and local sensitivity, offering a comprehensive solution for circular genome analysis. By layering multi-algorithmic outputs into a single topology-aware framework, it enables researchers to reconstruct accurate evolutionary narratives and discover novel features without requiring advanced bioinformatics expertise.
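A basic trick behind handling circular topologies is that any rotation of a circular sequence appears as a contiguous substring of the sequence concatenated with itself. The sketch below illustrates that general idea (it is not CGCT's actual implementation):

```python
def circular_contains(genome: str, query: str) -> bool:
    """True if query matches the circular genome at any rotation (forward strand only)."""
    if len(query) > len(genome):
        return False
    # Doubling the sequence exposes every rotation as a linear substring.
    return query in genome + genome

seq = "ATGCGT"
# "CGTATG" is seq rotated by three bases; a search on seq alone would miss it.
found = circular_contains(seq, "CGTATG")
```

Real aligners must additionally handle the reverse complement and approximate matches, but the doubling idea is why tools for circular replicons can avoid arbitrary linearization breakpoints.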
Background: Myopia is a growing global public health concern, with particularly high prevalence among school-aged children in East and Southeast Asia and increasing risk of sight-threatening complications in high myopia. Early identification of premyopia is critical for timely intervention, yet current screening methods rely on specialized equipment or static imaging and fail to capture dynamic near-work behaviors, limiting accessibility and scalability. Therefore, an accessible and behavior-aware screening approach is urgently needed. Objective: To validate a smartphone-based machine learning (ML) method for home myopia screening in school-aged children, focusing on translational utility in resource-limited settings and premyopia detection, addressing gaps in static tools. Methods: A total of 150 school-aged children (6–18 years) were enrolled for ML model training/validation, with 54 additional eyes for preliminary external testing. Sample size was justified via power analysis. Smartphone-acquired features included age, sex, pupil distance, eye-screen distance, and cohesion angle. Pixel-to-distance calibration and measurement repeatability were validated. Stratified tenfold repeated cross-validation and bootstrapping assessed model stability. ML models predicted spherical equivalent (SE) and classified myopia (SE≤-0.50 D) vs. premyopia (SE: -0.50 D to +0.75 D); SHAP quantified feature importance. Results: Participants (mean age 9.24 ± 2.23 years) had a 61.3% myopia rate. Eye-screen distance was the top feature (importance=1.00). Random forest performed best: SE prediction (test set: R²=0.523, 95% CI 0.237–0.802; MAE=0.686 D, 95% CI 0.480–0.890) and myopia classification (test set: AUC=0.855, 95% CI 0.716–0.976; accuracy=0.779). Bootstrapped CV <10% confirmed stability. Intra-session ICC for eye-screen distance and cohesion angle was 0.91 and 0.89, respectively, indicating excellent repeatability. 
Conclusions: This smartphone-based ML method reliably screens for myopia/premyopia at home, with strong translational potential for national myopia control programs, especially in resource-limited regions. Multicenter longitudinal studies will enhance generalizability and clinical translation.
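The reported AUC of 0.855 summarizes how well the classifier's scores rank myopic above non-myopic eyes: AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal pairwise-comparison sketch on toy scores (not the study's model outputs):

```python
def auc(pos_scores, neg_scores):
    """AUC via pairwise comparisons: P(score_pos > score_neg), ties count half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy predicted risks for myopic (positive) vs non-myopic (negative) eyes.
score = auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.5, 0.3, 0.2])
```

Library routines such as scikit-learn's roc_auc_score compute the same quantity more efficiently via ranking rather than explicit pairwise comparison.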
Background: Quality of life (QoL) plays a crucial role in dementia care, yet QoL and its dynamic, context-dependent nature can be difficult to capture in people living with dementia due to challenges in memory and communication and limitations of self-reported QoL instruments. Observational tools such as the Maastricht Electronic Daily Life Observation (MEDLO) provide narrative descriptions of the daily life of people living with dementia in nursing homes. However, the MEDLO tool was not developed to assess QoL specifically, and it remains unclear to what extent its narrative descriptions reflect aspects of QoL. Analysing these narrative descriptions is labour-intensive and time-consuming. Recent advances in natural language processing (NLP), including Large Language Models, offer potential to analyse these narrative descriptions at scale. Objective: The study aims to gain insight into the QoL in people living with dementia residing in nursing homes in the Netherlands, using NLP to interpret narratives of daily life in existing MEDLO data. Methods: This study conducted a secondary analysis of existing MEDLO observational data from 151 people living with dementia residing in Dutch long-term care. Narrative data had been documented by trained observers, describing activities, interactions, settings and emotional expressions. For analysis, a local secure pipeline was developed in which GPT-4o-mini was deployed for NLP tasks. The pipeline comprised three analytical steps: (1) N-gram frequency analysis to identify common language patterns, (2) sentiment analysis of positive and negative expressions per QoL domains, and (3) topic modelling to group semantically related terms and map them to QoL domains. Outputs were iteratively refined through prompt engineering and validated through expert review for coherence and contextual relevance. Results: A total of 5,622 narratives (50,106 words) from 151 observed people living with dementia were analysed. 
The narratives were short, averaging 8.5 words per narrative. N-gram frequency analysis identified frequent documentation of passive activity (sits at the table) in limited indoor settings (living room). Emotional well-being was often described in positive terms (smiles, laughs), whereas explicitly negative expressions (cries, distress) occurred less frequently. Weighted sentiment analysis showed that, although fewer in number, negative expressions carried a stronger intensity, resulting in an overall predominance of negative sentiment across all QoL domains. Topic modelling identified eight coherent clusters, most of which mapped onto multiple QoL domains, underscoring QoL’s multidimensionality. Conclusions: NLP identified predominantly passive activities in indoor settings with little variation, yet people living with dementia were often described with positive affect, underscoring both the complexity of QoL in dementia and the influence of documentation practices. In practice, NLP could help translate everyday care documentation into actionable information that guides more responsive, person-centred dementia care.
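The first analytical step, n-gram frequency analysis, reduces to tokenizing each narrative and counting adjacent word pairs. A stdlib-only sketch on invented narratives echoing the kinds of phrases reported (not the actual MEDLO data):

```python
from collections import Counter

def bigram_counts(narratives):
    """Count adjacent word pairs (bigrams) across a list of short narratives."""
    counts = Counter()
    for text in narratives:
        tokens = text.lower().split()
        counts.update(zip(tokens, tokens[1:]))
    return counts

narratives = [
    "sits at the table",
    "sits at the table in the living room",
    "smiles at the nurse",
]
top_bigram, freq = bigram_counts(narratives).most_common(1)[0]
```

A production pipeline would add stop-word handling and lemmatization before counting; the Counter-over-zipped-tokens pattern is the core of the frequency step.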
Background: As psychological practice becomes increasingly digitalized, the demand for competencies in digital psychology is growing. Although competency frameworks for digital clinical practice exist, validated instruments to assess these competencies remain scarce. In Sweden, psychology master’s students are now being offered courses in digital clinical psychology, increasing the need for instruments to measure intended improvements in knowledge and abilities. Using artificial intelligence (AI) to assist translation procedures can facilitate the adaptation of existing instruments to new national and cultural contexts. Objective: To test an AI-assisted procedure for the translation and contextual adaptation of the Digital Competencies for Applied Psychological Practitioner (DCAPP) scale to Swedish. To examine the psychometric properties of the translated version on a sample of psychology master’s students in Sweden, including pilot testing of the instrument's responsiveness to change in knowledge and abilities among students attending a course in digital clinical psychology. Methods: An AI-assisted adaptation procedure, using ChatGPT and DeepL, was used to translate the DCAPP from English to Swedish. The Swedish version was distributed to psychology master’s students during their eighth semester, including those attending an elective course in digital clinical psychology. Twenty-four students completed the baseline measurement. Nine out of 14 students who attended the course also provided data at follow-up. Item descriptives, internal consistency and responsiveness to change were calculated for the scale. Results: The AI-assisted translation procedure resulted in a translated version of the scale with both high quality and semantic similarity ratings. The Swedish DCAPP demonstrated excellent internal consistency for total score (α = .96), and also for knowledge (α = .93) and ability (α = .96) subscales. 
It demonstrated acceptable item distributions with item-total correlations above .30 (range: 0.53-0.87), and mean inter-item correlations for the subscales were acceptable but indicated potential item redundancy (Knowledge r=.48; Abilities r=.61). Whilst skewness and kurtosis values were mostly acceptable, high floor effects were observed in both subscales. A statistically significant increase in students’ competency ratings was observed at post-test (P<.001), suggesting good sensitivity to change. Conclusions: Using an AI-assisted adaptation procedure to support translation is feasible. The Swedish DCAPP showed promising psychometric properties and preliminary evidence of responsiveness to change. Floor effects may have been due to students' limited digital competencies. Although initial results are promising, further research with larger samples is needed before the Swedish DCAPP’s psychometric validity can be confirmed.
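The internal-consistency figures above are Cronbach's alpha: k/(k−1) × (1 − Σ item variances / total-score variance). A stdlib sketch on a tiny made-up response matrix (rows are respondents, columns are items; not the study's data):

```python
from statistics import variance  # sample variance (denominator n-1)

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents x items score matrix."""
    k = len(rows[0])
    items = list(zip(*rows))                      # transpose to item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Perfectly covarying toy items yield alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]])
```

Very high alphas, like the .96 reported, can themselves signal item redundancy, which is consistent with the mean inter-item correlations noted in the Results.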
Background: Smartwatches can be of added value in mental healthcare, by giving insights into activity and sleep of patients, which are fundamental aspects of daily functioning that are strongly linked to mental health. However, their implementation in mental healthcare practice remains limited. Professionals can feel resistance towards digital mental health interventions if they feel their use is not aligned with therapeutic values, and report a need for guidance on how to use technologies in ways that do align with such values. Compassion, a core value in mental healthcare, may provide a meaningful frame for implementation. Therefore, we previously co-designed compassion-focused implementation materials: a card set offering practical suggestions of how smartwatch data can support group treatments in ways that counter the self-optimization logic of commercial devices and instead align with compassion. Objective: The current study evaluated the compassion-focused card set in practice, to explore whether the introduction of the card set influenced the use of smartwatches, experienced compassion when using the smartwatches, the therapeutic alliance, and the acceptance of smartwatches among social psychiatric nurses. Methods: The card set was evaluated in a mixed-methods replicated single-case design with five social psychiatric nurses from an acute mental healthcare team. Data collection included pre- and post-questionnaires, repeated measures, a focus group, and interviews. Results: Quantitative results showed no consistent significant improvements in compassion, therapeutic alliance, or the acceptance of smartwatches. However, smartwatch use started or increased temporarily after the introduction of the card set. Qualitative findings indicated that the card set was experienced as flexible and easy to use, supporting session structure and enabling more in-depth, compassionate conversations. 
At the same time, barriers to sustained smartwatch integration included low patient uptake, challenges in mixed groups in which only some patients wore the smartwatch, and varying digital affinity among professionals. Conclusions: These findings suggest that compassion-focused materials may trigger initial adoption and help reframe smartwatch use in line with therapeutic values. Broader implementation strategies, including further training and tailoring to patient readiness, are required for sustainable integration.
Background: Traditional Problem-Based Learning (PBL) in pediatric nursing education often uses static cases and lacks personalized, real-time feedback. The integration of generative AI like ChatGPT could address these limitations, yet its systematic application in nursing internships remains understudied. Objective: To explore the effectiveness and feasibility of a ChatGPT-assisted Problem-Based Learning (PBL) model in pediatric nursing undergraduate internship education, providing empirical evidence for the integration of artificial intelligence (AI) into nursing education. Methods: A single-center, assessor-blinded randomized controlled pilot study was conducted. Eighty-four interns were randomly assigned to the ChatGPT-PBL group (n=42) or the traditional PBL group (n=42) at a 1:1 ratio. Building on traditional PBL, the experimental group integrated ChatGPT-4 to construct an "instructor-student dual-layer" supported PBL teaching framework, including dynamic generation of personalized clinical cases, provision of real-time operational feedback, and decision-making simulation training. The traditional PBL group received standardized traditional PBL teaching. The intervention lasted for 4 weeks. The primary outcome measures included theoretical assessment scores, Objective Structured Clinical Examination (OSCE) scores, Chinese Version of Critical Thinking Disposition Inventory (CTDI-CV) scores, Holistic Clinical Assessment Tool for Nursing Undergraduates (HCAT) scores, and teaching satisfaction. Results: Post-intervention, the theoretical score of the ChatGPT-PBL group was significantly higher than that of the traditional PBL group (82.76±5.02 vs 71.88±5.88, P<0.001). The ChatGPT-PBL group also showed significant advantages over the traditional PBL group in OSCE total score (43.24±2.75 vs 36.99±3.71, P<0.001), CTDI-CV total score (60.14±5.21 vs 49.87±5.74, P<0.001), and HCAT total score (51.14±3.46 vs 41.88±4.71, P<0.001).
The overall satisfaction rates of the ChatGPT-PBL group with instructors, teaching plans, and teaching content were 90.5%-95.2%, significantly higher than those of the traditional PBL group (64.3%-71.4%, P<0.05). Conclusions: The ChatGPT-assisted PBL teaching model significantly improves the theoretical knowledge level, specialized operational skills, critical thinking ability, and clinical nursing competence of pediatric nursing undergraduate interns, with higher teaching satisfaction. It provides a replicable practical paradigm for the in-depth integration of AI and pediatric nursing education, and holds important clinical application and promotion value. Clinical Trial: The study protocol was registered in the Chinese Clinical Trial Registry (ChiCTR2500114150).
Background: Digital technology is increasingly being used to deliver interventions and initiatives to support the wellbeing of older adults. However, few studies have conducted needs assessments to identify the future wellbeing service requirements of an older adult population and their preferred modes of delivery, whether via digital technology or in-person, or a combination of both (ie, a hybrid model). Objective: This study aims to investigate the requirements of a rural region in New Zealand to inform planning to meet the future wellbeing needs of its older adult population over the next 30 years. Methods: In total, two focus group discussions and 10 interviews were held with participants using a combination of phone and video. A total of 33 adults aged ≥57 years participated. The participants were asked how they saw the future wellbeing needs of the older adult population evolving, the role of digital technology and/or in-person interactions in delivering wellbeing services, and perceived barriers to, and enablers of, digital technology for providing services. Focus group and interview transcripts were thematically analysed. Results: A total of 4 key wellbeing themes were identified across both focus group discussions and interviews: “skills”, “services”, “spaces” and “social connection.” Each theme reflects the older adults’ interview responses in relation to their demographic details and level of technology confidence. Conclusions: Results indicated that, within this rural regional population, older adults had limited understanding of, and low confidence in using, digital technology. Although 57% of participants initially self-reported being very or somewhat confident using technology, most were unable to successfully engage in online focus groups.
Meanwhile, digital technology is developing at a rapid pace, and as a result, we need to consider how to plan for the transition and bridge the gap identified between the current use of digital technology and its potential future use if technology is to support the older adults of the future. The findings indicate that older adults prefer to engage in-person, while trust is a barrier to digital technology use for some participants. The future offers many opportunities to support the wellbeing of individuals and communities through the application of the proposed 4 Ss Framework. Clinical Trial: N/A.
Background: Following the COVID-19 pandemic, telehealth has emerged as a potential tool to improve access to HIV prevention services like Pre-Exposure Prophylaxis (PrEP). However, data on its acceptance among PrEP users in Italy remain limited. Objective: The aim of this study was to assess attitudes toward telemedicine among PrEP users in a monocentric Italian cohort. Methods: A cross-sectional survey was conducted at a Padua University Hospital PrEP clinic from April to October 2024, consecutively recruiting 450 attendees. Participants completed an adapted, validated questionnaire evaluating willingness, perceived benefits, and concerns regarding telehealth. Associations with demographic and clinical variables were analyzed using multivariate linear regression and clustering techniques. Results: The cohort was predominantly composed of men who have sex with men (MSM) (90.4%), was largely Italian (92.2%), and 54.7% of participants were under 40 years of age. Most participants (62.4%) reported using on-demand PrEP. Positive attitudes toward telemedicine were significantly associated with higher educational attainment, having a partner living with HIV, and a history of sexually transmitted infections. In contrast, older age and lack of access to appropriate communication tools were associated with lower perceived benefits and greater concerns regarding telemedicine. No significant associations were observed with distance from the hospital or nationality. Conclusions: Telehealth for PrEP delivery was widely accepted in this cohort, particularly among younger, digitally equipped MSM. The findings suggest TelePrEP could be a useful complementary tool to traditional clinic visits. However, acceptability must be further explored in more diverse and vulnerable populations to ensure equitable service delivery.
Background: While Evidence-Based Medicine (EBM) is a fundamental pillar of modern healthcare, its implementation in general practice is often hindered by time constraints, resource deficits, and the inherent complexity of primary care. This challenge is further exacerbated by a lack of consensus on EBM instruction, highlighting a critical need for standardized educational frameworks. Objective: To systematically synthesize intervention studies evaluating the effectiveness of EBM training, including EBM skills, and the impact of EBM on reactions, behavioral changes, attitudes, and practices among general practitioners and residents in family medicine. Methods: We conducted a systematic synthesis of interventional studies that used the Fresno test to assess EBM skills among residents or general practitioners after educational interventions (lectures, workshops, journal clubs, or e-learning programs). A comprehensive search was performed across the Cochrane Library, Embase, and Medline databases for records published between January 1980 and July 2025. Study quality was assessed using the Modified Medical Education Research Study Quality Instrument (MMERSQI), and risk of bias was evaluated using RoB 2 for randomized studies and ROBINS-I v2 for non-randomized studies. Owing to study heterogeneity, results were synthesized qualitatively. Results: Among the 200 records screened, eight studies involving 431 participants (residents and general practitioners) met the inclusion criteria. Study designs included one randomized controlled trial, six before–after studies, and one cross-sectional study. Mean methodological quality (MMERSQI) was 65.3 (SD 7.2). One study had a low risk of bias, five had a moderate risk, and two had a high risk of bias, mainly due to confounding factors and selection into the analysis. Six studies reported significant improvement in Fresno test scores after training, with mean score increases ranging from 4% to 60% (p<0.05), and two found no significant change. The greatest benefits were achieved after interactive or clinically integrated sessions combining lectures, workshops, or journal clubs. Participants reported higher confidence in applying EBM (+3.2 points on the Likert scale) and greater engagement with research (+2.5 hours of reading and 3.5 additional articles per week). Conclusions: EBM training for residents and general practitioners improves both knowledge and practical application of evidence-based skills, particularly when it is interactive or clinically integrated. Evidence remains limited regarding long-term retention and patient-related outcomes.
Background: The current global breastfeeding landscape presents both progress and challenges. The rise of artificial intelligence (AI) has emerged as a promising new strategy to enhance breastfeeding practices. Objective: To evaluate the impact of AI-driven tools on breastfeeding practices and outcomes. Methods: We searched PubMed, Web of Science, Cochrane Library, Embase, and CINAHL from inception to October 2025 for randomized controlled trials (RCTs) and quasi-experimental studies. The risk of bias in individual studies was assessed using the Cochrane risk of bias tool for randomized controlled trials (RoB 2) and the risk of bias in non-randomized studies of interventions tool (ROBINS-I). Data were extracted independently by two reviewers and combined using Review Manager 5.4 and R-4.5.2 to obtain pooled results via random-effects models, with subgroup analyses based on intervention type, timing of implementation, population characteristics, and country income level. Results: This review included 39 studies with 10735 participants from 15 countries. AI-driven tools increased exclusive breastfeeding (EBF) rates (at <3 months: relative risk [RR] 1.21, 95% CI 1.13-1.29; P<.001, I²=56%; at 3–6 months: RR 1.54; 95% CI 1.29-1.85; P<.001, I2=69%; at ≥6 months: RR 1.47, 95% CI 1.22-1.77, P<.001, I2=78%), breastfeeding self-efficacy (BSE) (standardized mean difference [SMD] 0.41, 95% CI: 0.04-0.78; P=.03, I2=93%), and breastfeeding knowledge (SMD 1.69; 95% CI: 0.54-2.84, P=.004, I2=98%). Conclusions: AI-driven tools effectively increase exclusive breastfeeding rates, breastfeeding self-efficacy, and breastfeeding knowledge. Future studies are needed to provide stronger evidence about clinical care interventions. Clinical Trial: PROSPERO CRD420251233352; https://www.crd.york.ac.uk/PROSPERO/view/CRD420251233352
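The pooled relative risks above come from random-effects models. As a minimal illustration of the underlying pooling arithmetic, the sketch below implements a DerSimonian-Laird random-effects pooling of log relative risks in pure Python; the study-level inputs are hypothetical, not the review's data, and the review's actual analysis used Review Manager 5.4 and R.

```python
import math

def pooled_rr(log_rrs, variances):
    """DerSimonian-Laird random-effects pooling of study log relative risks."""
    k = len(log_rrs)
    # fixed-effect (inverse-variance) weights, used to estimate heterogeneity
    w = [1 / v for v in variances]
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_rrs)) / sw
    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_rrs))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # random-effects weights incorporate tau^2
    w_re = [1 / (v + tau2) for v in variances]
    y_re = sum(wi * yi for wi, yi in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    # back-transform from the log scale to a relative risk with a 95% CI
    rr = math.exp(y_re)
    ci = (math.exp(y_re - 1.96 * se), math.exp(y_re + 1.96 * se))
    return rr, ci, tau2

# Hypothetical study-level log RRs and within-study variances (illustrative only)
rr, ci, tau2 = pooled_rr([0.18, 0.25, 0.10, 0.30], [0.01, 0.02, 0.015, 0.03])
```

When all study effects favor the intervention, the pooled RR lands above 1 with a CI that reflects both within- and between-study variance; setting tau2 to zero would recover the fixed-effect estimate.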
Background: Parkinson's clinical trials depend on patient-reported outcomes, often overlooking the vital role of carers in collaboratively tracking symptom progression. This is a potential limitation for decentralised clinical trials aimed at measuring real-world, free-living symptoms with sensors, such as wearables and cameras in the home. Objective: The primary objective of our study was to inform the design of a multimodal sensor platform for decentralised clinical trials. Methods: A qualitative study was conducted with an inductive approach using semistructured interviews with a cohort of people with Parkinson's. Results: This study of 18 participants (14 people diagnosed with Parkinson's, 4 spouses/informal carers) found that carers, household members, and peers take a central role in helping people with Parkinson's make sense of and manage their symptoms. Our participants relied on others to help with completing tasks and to understand their symptoms through comparison to others, using their Carer-as-Sensor. While our participants mostly viewed their relationships with others positively, this reliance could also have negative impacts on the individual. Participants could prioritize household needs over their own health by not taking medication or risking falls, or even avoid being around others so that their Parkinson's was not on display, in order to reduce carer burden. Conclusions: Our results argue that combining an 'outsider' and 'insider' approach to reporting symptoms can identify symptoms that are not noticed by people with Parkinson's, or are withheld from carers. These form household-centred recommendations for the design of tracking and annotation strategies in the context of decentralised clinical trials and new innovations in AI to support the capture of nuanced and subtle changes in symptoms.
Background: Pediatric-onset multiple sclerosis (POMS) is a chronic, progressive neurologic condition requiring lifelong management and coordinated transition from pediatric to adult care. Evidence-based guidelines identify transition readiness assessment as a core component of successful transition; however, most POMS clinics do not formally assess readiness, and existing tools do not address POMS-specific challenges, such as fluctuating disability, complex treatment regimens, and cognitive impairment. This gap underscores the need for a transition readiness measure tailored to POMS. Objective: To describe a stakeholder-engaged, implementation science–guided protocol for adapting the Transition Readiness Assessment Questionnaire (TRAQ) to reflect the unique developmental and clinical needs of youth with POMS. Methods: Using adaptation and participatory research as our guiding implementation strategies, surveys will be administered to patients, caregivers, and clinicians to identify barriers and facilitators to transition to adult care and define essential self-management competencies. Survey content will be informed by constructs from the Dynamic Adaptation Process framework and existing TRAQ domains. Identified competencies will be refined using a Delphi consensus process. A multidisciplinary focus group of 8–10 collaborators will review the adapted measure to assess clarity, relevance, and perceived clinical utility. Results: This project will generate a consensus-driven set of POMS-specific transition competencies and systematically adapt the TRAQ to the POMS population. Conclusions: This protocol outlines a rigorous, easily replicable approach to adapting a validated transition readiness measure to POMS. The adapted TRAQ will support evidence-based transition planning and inform future psychometric testing and implementation research to improve the care of POMS patients as they age.
Background: Gestational diabetes is a disorder characterised by hyperglycaemia first recognised during pregnancy, caused by insulin resistance arising from the hormones produced by the placenta. Few studies have systematically examined the metabolic pathways responsible for gestational diabetes from diagnosis to the postnatal period, including the metabolic changes in the placenta, through metabolomics studies. Objective: This study aimed to evaluate the metabolites identified through metabolomics, as well as their associated pathways, responsible for hyperglycaemia in the gestational period across pregnancy and postpartum, compared to those without diabetes during pregnancy. Methods: Anthropometric data are collected at the first visit. Samples are collected at three points: serum or plasma samples at the time of diagnosis of gestational diabetes (24-28 weeks of gestation) or at the first visit after diagnosis, placental and cord blood samples during delivery, and serum samples between 4 and 12 weeks postpartum.
Macroscopic and microscopic features of the placenta are noted. The metabolic pathways between GDM and non-GDM mothers across pregnancy, starting from the diagnosis of gestational diabetes, and the changes in pathways in the placenta, cord blood, and postnatal blood will be compared. Results: The study was funded by an institutionally funded research grant in January 2025. Recruitment began in June 2025 and is expected to be completed by June 2026. We plan to recruit 40 patients with GDM and age- and BMI-matched normoglycaemic controls. Conclusions: The findings from this study will provide insight into the various metabolites or biomarkers and their metabolic pathways involved in the pathogenesis of gestational diabetes across the life course of mothers, compared with those of normoglycaemic mothers, and offer potential insight into the role of the placenta in gestational diabetes.
Background: Stroke represents a leading cause of global disability and mortality. In acute stroke patients, tracheotomy is often required for survival during the critical phase; however, weaning from the tracheostomy tube remains a major challenge in the recovery period. Prolonged dependence on the tube considerably impairs patients' quality of life. Previous research indicates that multiple environmental factors—including oxygen concentration, air humidity, and ultraviolet radiation—can influence cardiopulmonary function and airway adaptability. Moreover, high-altitude environments are known to alter hemoglobin oxygen-carrying capacity and induce adaptive genetic polymorphisms. Based on these observations, we hypothesize that residents living at different altitudes may demonstrate varying success rates of tracheostomy tube removal. Objective: This study aims to investigate whether altitude of residence affects extubation success rates by modulating adaptive mechanisms related to hemoglobin oxygen affinity and genetic factors (EPAS1/EGLN1). We further aim to develop a predictive model integrating environmental factors, genetic polymorphisms, and clinical data for estimating extubation success. Methods: The "Extubation Success After Tracheotomy in Stroke Patients at Different Altitudes" (ESTATE) study is a prospective, multi-center cohort study (August 2025–December 2028). It aims to enroll 900 tracheotomized stroke patients from Chinese regions stratified by altitude. After screening against strict criteria, participants will receive baseline assessments (demographics, clinical scales, hematological tests). All will undergo standardized rehabilitation, with outcomes—including extubation status and quality of life—assessed at discharge and at 1, 3, 6, 9, and 12 months post-discharge. Results: Initiated in August 2025, this study has enrolled 25 participants to date.
Recruitment will continue through 2027, with final follow-up and data analysis to be completed in 2028. The main findings are expected to be submitted for publication in 2029. Conclusions: Our research team aims to conduct an in-depth investigation into the association between successful extubation and the biological and genetic adaptations resulting from long-term residence at different altitudes in tracheotomized stroke patients. This study seeks to elucidate the underlying molecular mechanisms of the exposure-response relationship, with the ultimate objective of providing novel therapeutic strategies and a solid theoretical basis for clinical practice. Clinical Trial: ClinicalTrials.gov (United States) NCT07014501; https://clinicaltrials.gov/ct2/show/NCT07014501
Background: Brachial plexus birth injury (BPBI) occurs in approximately one in 1,000 live births, resulting in long-term limitations in upper extremity function, including shoulder contracture. Early intervention with passive range of motion (PROM) performed by caregivers multiple times per day is commonly recommended to prevent the development of shoulder contracture. Research shows that common barriers to adherence to this daily PROM recommendation include caregiver lack of confidence and fear of hurting their child.
Objectives: 1) determine whether caregivers who receive a Coaching training protocol for performing PROM demonstrate improved efficacy in performing PROM compared to caregivers who receive standard training; and 2) determine whether caregivers who receive a Coaching training protocol for performing PROM demonstrate improved self-confidence in performing PROM compared to caregivers who receive standard training.
Methods: This prospective, multi-site randomized clinical trial will evaluate the efficacy of a caregiver training protocol that uses principles of coaching and guided discovery to enhance confidence and problem-solving needed to overcome barriers to adherence. Caregivers of infants with BPBI will be randomized to receive either standard PROM training or the Coaching-based protocol. Caregiver efficacy, self-reported self-confidence, self-reported frequency of performing PROM, and facilitators and barriers to adherence will be compared between the two groups. Findings will be used to determine whether the Coaching protocol is superior for facilitating caregiver efficacy and confidence and subsequently supports daily PROM adherence.
Conclusion: If effective, this protocol will be integrated into a larger non-inferiority trial to assess the minimum daily frequency of PROM needed to decrease the risk of shoulder contracture. This study addresses a critical gap in evidence-based standards for early intervention for infants with BPBI and aims to improve long-term functional outcomes for affected infants and their families.
Background: Personal Data Spaces (PDS) are increasingly promoted as digital infrastructures that enable citizen participation in health data governance by strengthening transparency and individual control over personal health data. Despite growing policy and technological attention, empirical evidence remains limited on whether citizens view PDS as acceptable and desirable governance instruments, how they evaluate different types of data and purposes of data use, and which factors shape public support. Objective: The objective of this study was to examine how citizens evaluate We Are, a proposed citizen-centered Personal Data Space model in Flanders, Belgium, and to assess overall support, reasons for endorsement, preferences for control versus transparency, acceptability of storing different types of health data, and acceptance of different purposes of data use. Methods: We conducted an online survey among adults aged 18-79 years in Flanders, Belgium (N=1,041). The sample was quota-based and representative of gender, age, education, province, and urbanization level. Participants evaluated the We Are model after reading a description. Measures included overall evaluation of the model, reasons for support, preferences for transparency and control, willingness to store medical versus lifestyle data, and willingness to share data across vignette-based scenarios varying purpose of use and recipient type. Data were analyzed using t-tests, linear regression, and mixed models with repeated measures. Results: Overall evaluations of We Are were moderately positive (mean 2.51 on a 1-4 scale) and did not differ significantly from the scale midpoint (t(1040)=0.70, P=.24). Sociodemographic characteristics explained little variance in support, whereas understanding of the We Are model and psychographic factors substantially increased explained variance (R² increased from .03 to .24).
Higher trust in technology was positively associated with support, while stronger privacy attitudes and privacy-related fears were negatively associated. Respondents valued control more strongly than transparency for both general personal data (t(1040)=-10.37, P<.001) and health data (t(1040)=-12.47, P<.001). Medical data were considered more acceptable to store than lifestyle data (Δ=0.38, P<.001). Both personal and public benefits motivated support, but commercial data use reduced willingness to share, particularly when framed around individual gain rather than collective benefit. Conclusions: Citizens view PDS as potentially valuable instruments for health data governance, but their support is conditional and shaped by understanding and psychographic factors rather than by sociodemographic factors. PDS can contribute to meaningful citizen participation only when technological features are embedded in governance arrangements that provide real agency, credible safeguards, and demonstrable public value.
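The control-versus-transparency comparisons above are paired t-tests on the same respondents. A minimal pure-Python sketch of that computation follows, using simulated ratings rather than the survey's data; the sign of the statistic depends on the coding of the difference (the paper reports negative t values under the opposite convention).

```python
import math
import random

# Hypothetical 1-4 preference ratings for "control" vs "transparency";
# illustrative data only, not the We Are survey responses.
random.seed(1)
n = 1041
control = [random.gauss(3.2, 0.6) for _ in range(n)]
transparency = [random.gauss(2.9, 0.6) for _ in range(n)]

# paired t-test on difference scores: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1
d = [c - t for c, t in zip(control, transparency)]
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))
df = n - 1
```

With 1,041 paired observations, even a modest mean difference on a 1-4 scale yields a large t statistic, which matches the strongly significant control-versus-transparency contrasts reported.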
The European Health Data Space represents a landmark regulatory success in enabling the secondary use of health data for research, innovation, and policy within a trusted and interoperable framework. This Viewpoint discusses how strategic alliances—such as UNINOVIS—and translational research ecosystems, with IBIMA as a driving hub, operationalize this regulation by aligning governance, infrastructure, and applied data science. Together, they illustrate how European health data policy can be translated into real-world evidence generation and sustained clinical and societal impact.
Background: Musculoskeletal conditions are a leading global cause of disability, yet the factors influencing long-term musculoskeletal health, particularly following trauma, remain incompletely understood. Machine learning could be applied to identify previously unknown patterns in large-scale multimodal datasets. Objective: To test the ability of a new sparse Group Factor Analysis method to uncover hidden patterns in large-scale multimodal datasets and generate testable, clinically relevant hypotheses. Methods: This study applies sparse Group Factor Analysis, a hierarchical unsupervised machine learning method, to the ADVANCE cohort—a longitudinal dataset of 1445 UK Afghanistan War servicemen—to identify latent structures in multimodal clinical data. Study 1 validated the approach by rediscovering known group-level patterns between combat-injured and non-injured participants, including poorer outcomes in pain, mobility, and bone health among those with lower limb loss. Study 2 explored the injured, non-amputee subgroup without prespecified labels to identify new hypothesis-generating clusters that could subsequently be tested using standard hypothesis-testing methods. Results: A subgroup of 125 individuals with worse musculoskeletal outcomes was uncovered. This group had greater body mass, higher injury severity, and a higher prevalence of head injury. These findings led to a novel hypothesis: that head injury, including potential traumatic brain injury, is associated with long-term musculoskeletal deterioration. This hypothesis is supported by literature in both athletic and military populations and will be tested in follow-up analyses. Conclusions: Our findings demonstrate how sparse Group Factor Analysis, combined with clinical insight, can uncover hidden patterns in large-scale datasets and generate testable, clinically relevant hypotheses that inform prevention, treatment, and rehabilitation strategies.
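Sparse Group Factor Analysis learns modality-wise sparse loadings across blocks of variables; as a much-simplified stand-in for the shared-latent-structure idea it exploits, the pure-Python sketch below builds two hypothetical "modalities" driven by one latent factor, forms their joint correlation matrix, and extracts the dominant shared factor by power iteration. This is plain principal-axis extraction on toy data, not the authors' sGFA method or the ADVANCE cohort.

```python
import math
import random

# Toy multimodal dataset: two variable blocks driven by one shared latent factor.
random.seed(0)
n = 200
latent = [random.gauss(0, 1) for _ in range(n)]

def block(loadings, noise):
    """One modality: each variable = loading * latent + Gaussian noise."""
    return [[l * z + random.gauss(0, noise) for l in loadings] for z in latent]

# modality A (e.g. pain/mobility scores) and modality B (e.g. bone-health measures)
X = [ra + rb for ra, rb in zip(block([0.9, 0.8, 0.7], 0.5),
                               block([0.6, 0.5], 0.5))]
p = len(X[0])

# column-standardize, then compute the p x p correlation matrix
means = [sum(row[j] for row in X) / n for j in range(p)]
sds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in X) / n) for j in range(p)]
Z = [[(row[j] - means[j]) / sds[j] for j in range(p)] for row in X]
R = [[sum(Z[i][a] * Z[i][b] for i in range(n)) / n for b in range(p)]
     for a in range(p)]

# power iteration: the leading eigenvector gives the dominant shared-factor loadings
v = [1.0] * p
for _ in range(100):
    w = [sum(R[a][b] * v[b] for b in range(p)) for a in range(p)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
```

Because every variable loads positively on the same latent factor, the recovered loadings share one sign across both blocks; sGFA additionally learns which factors are active in which modality, which this sketch does not attempt.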
Background: Chronic kidney disease (CKD) requires sustained self-management involving complex medication regimens, dietary restrictions, and symptom monitoring. These demands pose substantial challenges to medication adherence and daily disease management. Digital therapeutics (DTx) have the potential to support CKD self-management; however, CKD-specific design requirements informed by both patient and clinician perspectives remain insufficiently explored. Objective: This study aimed to identify key design requirements for CKD-specific digital therapeutics by integrating patient-reported self-management challenges with nephrologist perspectives on clinical needs and implementation considerations. Methods: A convergent mixed-methods study was conducted at a tertiary academic hospital. Quantitative data were collected through a structured survey of 60 adults with non–dialysis-dependent CKD to assess medication adherence challenges, digital health needs, and age-related differences. Qualitative data were obtained through focus group interviews with 19 nephrologists and analyzed using thematic analysis. Quantitative and qualitative findings were integrated to identify convergent priorities and design implications for CKD-specific DTx. Results: None of the patients reported prior experience with CKD-specific digital health applications, although 70% perceived a need for such tools. Younger patients (<60 years) expressed significantly greater interest in digital therapeutics than older patients (83.9% vs 55.2%, P=.015). Common patient-reported challenges included managing multiple medications (36.7%), irregular medication schedules (30.0%), and difficulty understanding medication timing relative to meals (28.3%). Nephrologists emphasized the importance of personalized medication reminders, comprehensive medication information (including adverse effects and nephrotoxic risks), symptom-monitoring systems, and features supporting dietary and lifestyle management. 
Integration findings highlighted the need for user-friendly, age-sensitive interfaces, data security, and clinically actionable feedback mechanisms. Conclusions: By integrating patient and nephrologist perspectives, this mixed-methods study identifies key design considerations for CKD-specific digital therapeutics. These findings provide formative, design-informed evidence to guide the early development of patient-centered and clinically relevant digital therapeutics for CKD.
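The age-group comparison reported above (83.9% vs 55.2%, P=.015) is a standard 2x2 chi-square test. A minimal Python sketch follows, with cell counts reconstructed from the reported percentages (31 younger and 29 older patients) — an assumption, since the abstract reports only proportions:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test for a 2x2 table (1 df, no continuity
    correction). The p-value uses the chi-square survival function,
    which for 1 df equals erfc(sqrt(x / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Reconstructed counts (assumption): interested vs not interested,
# younger (<60 y) on the first row, older on the second.
chi2, p = chi2_2x2(a=26, b=5, c=16, d=13)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

With these assumed counts the test lands near the reported P=.015; the published analysis may differ (e.g., if a continuity correction was applied).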
Background: Thailand's accelerated population aging transformation, with 28% of citizens projected to reach 60+ years by 2030, requires innovative digital health solutions addressing family-centered care systems. Interconnected sensor networks, machine learning systems, and cloud-based analytics infrastructure present opportunities for revolutionizing elderly care provision, yet adoption patterns and implementation viability in Thai contexts remain underexplored. Objective: The objective of our study was to assess the viability and adoption patterns of interconnected sensors and machine learning technologies in Thai elderly care facilities, examining therapeutic effectiveness, user acceptance factors, and geographic preference variations. Methods: An integrated quantitative-qualitative methodology combining the Gerontechnology Adoption Framework (GTAF) and Service Exchange Value Creation Logic (SEVCL) was employed. Technology specialist assessments (n=12) and consumer evaluations across Bangkok and Chiang Mai (n=120) were conducted. Technology assessment followed digital health evaluation protocols incorporating user experience testing, data protection impact analysis, and healthcare workflow integration assessment. Quantitative examination included descriptive analytics, predictive modeling, and multi-criteria evaluation techniques, while qualitative information underwent systematic thematic examination. Results: Sensor-based fall prevention systems achieved superior therapeutic effectiveness scores (M=4.5/5.0, SD=0.3) with 89% adoption success metrics and favorable deployment complexity (M=2.8/5.0), demonstrating potential 25-30% emergency response cost reductions. Machine learning-powered early alert systems showed greatest clinical impact capability (M=4.7/5.0) with 30-35% hospitalization reduction potential and 76% user adoption despite deployment complexity (M=4.2/5.0). 
Digital health acceptance varied significantly by digital literacy level, with high-digital-confidence participants showing 2.3x higher acceptance rates (p<0.001). Therapeutic gardens emerged as the optimal sustainable intervention (M=4.8/5.0 benefit rating), correlating with a 17% psychotropic medication reduction (r=0.78, p<0.001). Geographic analysis revealed a preference in Bangkok for medical IoT technologies, in contrast to Chiang Mai's emphasis on environmental digital solutions. Conclusions: Integrated smart technology implementation demonstrates simultaneous clinical outcome improvement and operational efficiency enhancement when properly configured for older adult populations. Success factors including phased IoT deployment, comprehensive digital health training, and human-technology balance respecting cultural values provide a systematic implementation framework for digital health transformation in elderly care settings across developing nations. Clinical Trial: -none-
Background: Community-based interventions represent a strategic approach to integrating the salutogenic model, involving multi-sectoral stakeholders to improve the health of specific communities. In the context of the EU’s ageing population, eHealth technologies provide valuable solutions by improving older individuals' health and well-being through better access to knowledge, strengthening environmental relationships, and supporting the sustainability of health systems. This manuscript explores a community-based health promotion intervention focused on eHealth apps tested across six European regions within the GATEKEEPER project. Objective: To explore community-based health promotion interventions focused on eHealth apps across six European regions within the GATEKEEPER project. Methods: Observational studies were conducted in the European regions of Basque Country (Spain), Aragon (Spain), Saxony (Germany), Puglia (Italy), Lodz (Poland), and Central Greece and Attica (Greece). Qualitative techniques were used to evaluate the implementation process, effectiveness of the community-based intervention, and user experience with the offered technologies. Results: Several factors influenced the success of the interventions, including the customisation and adaptation of applications to users' specific needs, the provision of incentives to promote engagement, and the support from health and community professionals. Customising apps to be user-friendly and culturally relevant ensures accessibility for diverse populations, while adaptation addresses varying levels of health literacy and digital skills. Continuous support from professionals fosters trust, reduces barriers to adoption, and promotes sustained engagement. Conclusions: This study provides insights into factors influencing adherence to digital health interventions. 
Understanding adherence, intention to use, and dropout rates is essential for identifying the factors contributing to the limited effectiveness of digitally-enabled real-world interventions. The findings stress the importance of co-designing interventions, ensuring user involvement from the beginning, which improves the alignment of technology with users' needs and increases engagement and effectiveness.
Objective: To evaluate the efficacy of digital exercise therapy for pain relief in osteoarthritis (OA) patients. Methods: We conducted a systematic search of multiple databases for randomized controlled trials. Pain intensity was analyzed as the standardized mean difference (SMD) using a fixed-effects model in Stata. Methodological quality was assessed with the Cochrane RoB 2 tool. Results: Six trials (587 participants) were included. Digital exercise therapy significantly reduced pain (SMD = -0.28, 95% CI: -0.44 to -0.11; P = 0.001) with low heterogeneity (I² = 22.4%). Sensitivity analyses supported robustness. Conclusions: Digital exercise therapy significantly alleviates pain in OA. Despite limitations inherent to behavioral trials, it represents a viable and accessible treatment. Further large-scale, long-term trials are needed. Clinical Trial: PROSPERO (CRD420251082911).
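A fixed-effects pooled SMD of the kind reported here is an inverse-variance weighted average of the per-trial effect sizes. The sketch below uses hypothetical per-trial values (not the six included trials, whose individual estimates are not given in the abstract):

```python
import math

def fixed_effect_pool(smds, variances):
    """Inverse-variance fixed-effect pooling of standardized mean
    differences: each study is weighted by 1/variance, and the pooled
    standard error is sqrt(1 / sum of weights)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-trial SMDs and sampling variances (illustrative
# values only -- not taken from the review).
smds = [-0.35, -0.20, -0.30, -0.15, -0.40, -0.25]
variances = [0.04, 0.05, 0.03, 0.06, 0.05, 0.04]

pooled, (lo, hi) = fixed_effect_pool(smds, variances)
print(f"pooled SMD = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The fixed-effects choice matches the low heterogeneity reported (I² = 22.4%); with substantial heterogeneity, a random-effects model would be the usual alternative.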
Background: Acute appendicitis is a common disease process typically requiring surgery, yet the workflow linking diagnostic imaging to surgical consultation varies substantially across emergency departments. Delays between imaging completion and consult acquisition may prolong care and contribute to avoidable clinical and operational inefficiencies. Objective: This study quantified real-world consultation delays for patients with radiology-confirmed appendicitis and evaluated the cumulative time impact on emergency department workflow. Methods: We performed a retrospective observational study of emergency department encounters from January 1, 2020, through December 31, 2025, in which abdominal imaging was obtained to evaluate possible appendicitis. All radiology impressions were manually adjudicated and classified as positive, indeterminate, or negative. The primary timing measure was the interval from imaging completion to surgical consultation order. Mann–Whitney U tests compared delays across imaging modalities and age groups. Logistic regression assessed predictors of prolonged delay (>30 minutes). Results: Among 1,422 encounters, 566 were classified as radiology-positive appendicitis. Surgical teams evaluated 565 of these patients (99.8 percent), demonstrating that positive radiology findings nearly always resulted in surgical involvement regardless of documentation of a formal consult order. Among 524 radiology-positive encounters with complete timestamps in a predefined plausible window (−60 to +360 minutes), the median time from imaging completion to consultation was 30.8 minutes (IQR 17.8–48.5). Delays were longer for CT than ultrasound (median 34.9 vs 21.2 minutes; p < 0.0001). CT was associated with prolonged delay (OR 2.29; 95% CI 1.08–4.86), while age group was not. Across a typical year, cumulative waiting time totaled approximately 58 patient-hours. 
Conclusions: Radiology-confirmed appendicitis reliably triggered surgical evaluation, yet meaningful delays remained. Standardizing and automating consult activation for clear radiologic diagnoses may reduce avoidable workflow variation and improve the timeliness of surgical care.
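The reported association (OR 2.29, 95% CI 1.08–4.86 for CT) comes from logistic regression; for a single binary predictor, the unadjusted odds ratio reduces to a 2x2 table calculation with a Wald interval on the log-odds scale. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def odds_ratio_wald(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI
    computed on the log-odds scale.

    Table layout:                 delayed >30 min   not delayed
        CT (exposure)                    a               b
        ultrasound (comparator)          c               d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts, chosen only to illustrate the calculation.
or_, (lo, hi) = odds_ratio_wald(a=200, b=150, c=50, d=85)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The study's multivariable model would additionally adjust for covariates such as age group, so its estimate would not in general equal this unadjusted ratio.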
Background: Digital multidomain interventions hold promise for dementia risk reduction; however, populations at higher dementia risk, including those experiencing socioeconomic and educational disadvantage, remain underrepresented in trials, and engagement with digital interventions often declines over time. Co-production and blended models that combine digital tools with human support may improve reach, acceptability, usability, and sustained engagement. Designing interventions that are usable and acceptable for individuals facing structural, educational, or digital barriers (underserved groups) is therefore likely to produce solutions that are both accessible and scalable for the wider older adult population. Objective: To describe the co-production process used to develop ENHANCE—a coach-supported digital intervention targeting ten modifiable dementia risk factors in older adults from underserved groups—and report key outputs and lessons learned for equitable digital prevention design. Methods: We co-produced ENHANCE between July 2023 and February 2025 using a multi-stage development process guided by the Medical Research Council framework for complex interventions and the Double Diamond design model. The Person-Based Approach informed user-centred guiding principles (key design objectives), while behaviour change content was operationalised using behavioural change theories. Co-production followed four phases. The Discovery phase explored barriers to engagement with existing digital materials and identified candidate components for each dementia risk-factor module. The Define phase translated these insights into guiding principles and blueprints of each risk-factor module integrated with behavioural change components. The Design phase involved iterative co-production and usability testing of prototypes. The Delivery phase evaluated a high-fidelity prototype through a one-week usability study with coaching support. 
Contributors included 162 research participants recruited from underserved community settings, 33 patient and public involvement contributors, and 4 human–computer interaction experts. Throughout development, co-production focused on reducing literacy, digital confidence, and cultural barriers to maximise usability across diverse older adult populations. Results: Co-production produced (1) evidence-informed module strategies for targeted dementia risk factors; (2) a set of guiding principles to ensure low-literacy, culturally relevant, and accessible content, supporting both equity of access and wider population usability; (3) a meadow-themed app integrating tailored check-ins, educational videos, cognitive training games, and in-app messaging; and (4) a structured coaching model, including onboarding, brief follow-up, and accompanying coaching manuals. Iterative testing and refinement improved navigation, simplified language, reduced text burden, and ensured the use of familiar and accessible game formats, resulting in a feasibility-ready prototype. Conclusions: ENHANCE is a co-produced, coach-supported digital intervention designed to be accessible for underserved older adults at increased dementia risk, with design features intended to support accessibility, engagement, and scalability across the wider ageing population. The development process illustrates how integrating co-production with behavioural science and usability methods can support principled intervention design for equitable digital dementia prevention. Clinical Trial: ISRCTN17060879
Background: Medical interview training is a cornerstone of clinical education but faces resource limitations in both implementation and evaluation. While Generative Artificial Intelligence (GAI) offers a potential solution for assessment, it remains unclear whether reasoning models improve evaluation validity, particularly within the linguistic context of the Japanese language. Objective: To evaluate the validity of state-of-the-art GAI models in Japanese medical interview training, we assessed scoring patterns and agreement with human clinical educators. Methods: This preliminary comparative study was conducted at a medical university in Japan using text data derived from medical interview training, including both chatbot-based and traditional styles. Postgraduate year 1 and 2 residents were involved. Two blinded human clinical educators independently evaluated the transcripts, reaching a consensus score through discussion. The consensus score was the reference standard. Two GAI models, GPT-5.2 Thinking and Gemini 3.0 Pro, independently evaluated the same transcripts. All evaluations used a standardized 6-domain Objective Structured Clinical Examination rubric (patient care, history taking, physical examination, accuracy and organization of clinical information, clinical reasoning, and management) scored on a 1–6 Likert scale, where 1 is inferior and 6 is excellent. We compared mean evaluation scores using the Wilcoxon signed-rank test and assessed inter-rater reliability using Intraclass Correlation Coefficients (ICCs) between the GAI models and the clinical educators. Results: Clinical educators and both GAI models rated the entire dataset of 40 transcripts by 20 included residents. Clinical educators assigned the highest overall mean scores (5.18, 95% CI 5.06-5.30). 
Compared to clinical educators, both GAI models demonstrated significant score deflation: GPT-5.2 Thinking assigned the lowest overall score (3.68, 95% CI 3.62-3.72; P<.001), followed by Gemini 3.0 Pro (4.09, 95% CI 3.97-4.21; P<.001). This discrepancy was most pronounced in the management domain, where GPT-5.2 Thinking assigned 2.93 (95% CI 2.79-3.06) compared to the clinical educators' 5.20 (95% CI 4.91-5.49). Agreement between the GAI models and human raters was poor across all domains, with overall ICCs of 0.04 (95% CI 0.00-0.09) for GPT-5.2 Thinking and 0.22 (95% CI 0.10-0.35) for Gemini 3.0 Pro. Conclusions: Unlike previous iterations of GAI, which tended to overestimate student performance, GPT-5.2 Thinking and Gemini 3.0 Pro graded more strictly than human experts. Due to significant score discrepancies and poor inter-rater agreement, these models currently lack the validity to serve as standalone summative evaluators for Japanese Objective Structured Clinical Examinations, although their rigorous detection of deficiencies may offer value for formative feedback. Clinical Trial: UMIN-CTR UMIN000053747; https://center6.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000061336.
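The agreement statistic reported above is an intraclass correlation coefficient. A common choice for two raters scoring the same subjects is ICC(2,1) (two-way random effects, absolute agreement, single measurement, per Shrout and Fleiss); whether the study used this exact form is an assumption. A pure-Python sketch with made-up human-vs-model scores:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    rater. `ratings` holds one score list per subject, one column per
    rater (e.g. [human consensus, model score])."""
    n = len(ratings)       # subjects
    k = len(ratings[0])    # raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up 1-6 scores: the model (column 2) sits systematically below
# the human consensus (column 1), so absolute agreement is poor even
# though the rank ordering is broadly similar.
scores = [[5, 4], [6, 4], [5, 3], [6, 5], [5, 4], [6, 4], [4, 3], [5, 3]]
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```

Because ICC(2,1) measures absolute agreement, a consistent downward shift like the score deflation described in the abstract drives the coefficient toward zero even when ratings are correlated.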
Background: Cardiac rehabilitation (CR) improves patient quality of life, morbidity, and mortality, although these benefits have been demonstrated predominantly in traditional, center-based CR programs. Unfortunately, CR is underused by patients. Digital health interventions offer a solution to increase participation in CR. However, patients’ interest in virtual CR, especially among those in the inpatient setting, has not been fully explored. Objective: The objective of this prospective cross-sectional study was to explore inpatient interest in virtual cardiac rehabilitation among adult patients who were hospitalized with a cardiac rehabilitation-qualifying diagnosis. Methods: A Qualtrics survey comprising multiple-choice questions was administered to cardiac inpatients at the progressive cardiac care unit at Johns Hopkins Hospital from January 2020 to March 2024. Sociodemographic and clinical characteristics were retrieved from the electronic medical record. The study included English-speaking patients over 18 years of age with a diagnosis eligible for CR. Results: A total of 150 patients were included (age 64 ± 13 years, 38% women, and 57% White). With respect to sociodemographic characteristics, 26% of the patients had a high school education or less, 47% were married, 26% were employed full-time, and 63% had private insurance. Participants with greater than high school education were more likely to perceive smartphones as beneficial for leading a healthier lifestyle (48.1% vs. 24.3%, p=0.01) and learning about illnesses (85.7% vs. 54.1%, p<0.001) than participants with a high school education or less. Participants across all sociodemographic factors expressed interest in virtual CR (overall 71.3%), with non-White participants being more interested than White participants (84.6% vs. 61.2%, p=0.002). Conclusions: The majority of cardiac inpatients expressed interest in home-based/virtual CR to alleviate barriers to in-person CR participation. 
Future work should emphasize digital equity and user support to optimize the widespread adoption of virtual CR.
Background: Mental health difficulties affect nearly one billion people globally. Many of these emerge during youth, making early intervention crucial. Vietnam and Cambodia both have young populations, recent histories of conflict and ongoing vulnerabilities, including poverty and urban-rural inequality. Although many children and young people (CYP) experience common mental disorders, access to care is limited by stigma, low mental health literacy and reliance on medicalised, urban-centred services. Building the capacity of community-based stakeholders to deliver mental health interventions offers a promising strategy for health systems strengthening in low- and middle-income countries (LMICs). The Mental health capacity Building and stRengthening In Global HealTh systems (M-BRIGHT) study seeks to build capacity for delivery of a youth mental health intervention in Vietnam and Cambodia. Objective: This protocol outlines the intervention phase (phase 3) of the study. This cluster-randomised controlled feasibility trial aims to assess the feasibility, acceptability and potential effects of a co-adapted school-based mental health literacy intervention delivered to adolescents in Cambodia and Vietnam. Methods: Seven secondary schools in each country (five intervention, two control) will participate. We aim to recruit ≥175 adolescents in grades 10-11 (aged around 14-18 years at recruitment) per arm in each country (≥700 adolescents overall), along with one parent/guardian per adolescent and ten trained intervention providers per country. The intervention will be delivered over one school year by providers trained in an earlier phase of the study. The intervention combines indoor psychoeducation sessions and peer-led outdoor activities. A mixed methods approach will be used to assess its feasibility, acceptability, fidelity and potential effects. 
Quantitative measures will be collected through questionnaires at baseline, endline, and three-month follow-up, including mental health literacy, mental health, wellbeing, and parent-reported behaviour. Qualitative interviews and focus groups with adolescents, parents/guardians and intervention providers will explore intervention acceptability. Feasibility criteria include recruitment ≥85%, retention ≥70%, an average of 70% attendance at sessions, and ≥70% of sessions delivered as planned. Results: Recruitment took place from September-December (Vietnam) and in November (Cambodia) 2025. Baseline data collection took place in October (Vietnam) and November (Cambodia) 2025; 746 participants were enrolled at baseline across all sites and arms. The intervention will run until May 2026 (Vietnam) and August 2026 (Cambodia), with final follow-up outcome measures expected to be collected by September 2026 (Vietnam) and December 2026 (Cambodia). Conclusions: This study will assess whether a co-adapted, school-based mental health literacy intervention is feasible and acceptable in Cambodia and Vietnam and will explore its potential effects. Findings may inform a future clinical trial and contribute to the evidence base for youth mental health systems strengthening in LMICs. Clinical Trial: ISRCTN, ISRCTN66038422; https://www.isrctn.com/ISRCTN66038422
Background: Childhood mental health conditions remain a major public health concern, particularly in low-resource environments such as rural districts in South Africa. Disorders such as anxiety, depression, Attention-Deficit/Hyperactivity Disorder (ADHD), and autism spectrum disorders are frequently undetected or diagnosed at advanced stages, leading to ineffective management and long-term negative consequences for children’s development and well-being. Objective: This study aims to investigate the factors contributing to the late detection and management of childhood mental health disorders in hospitals within the Umzimvubu Local Municipality, Alfred Nzo District, Eastern Cape Province. Methods: A quantitative, cross-sectional descriptive survey design will be used. All hospitals in Umzimvubu will be included, and a simple random sampling method will be applied manually to select health professionals meeting the study’s inclusion criteria. Online structured questionnaires will be used to collect data. Results: The study protocol has been approved by the University of Venda Higher Degrees Committee and the Research Ethics Committee. Permission from the Eastern Cape Department of Health has been obtained, and site approvals from Alfred Nzo District manager and hospital CEOs are pending. Pretesting and data collection are scheduled to occur in January 2026. Data analysis will be conducted using SPSS version 29. Descriptive statistics and logistic regression will be used to identify factors associated with late detection and management of childhood mental disorders. Results will be presented in tables and graphs. Conclusions: This protocol outlines a study aimed at identifying factors contributing to the late detection and management of childhood mental disorders in hospitals within Umzimvubu Local Municipality. 
The findings are expected to inform strategies for improving early diagnosis and management, guiding policy development, and strengthening mental health services in rural South Africa.
Background: Post-thoracotomy pain remains a major clinical challenge, with substantial impact on pulmonary function, postoperative recovery, and patient quality of life. Thoracic epidural analgesia is widely regarded as the standard of care; however, it is associated with potential complications, including hypotension, urinary retention, and inadequate analgesia in a subset of patients. Intercostal cryoanalgesia, a peripheral nerve block technique that induces temporary axonal degeneration through controlled freezing, has emerged as a potential alternative for prolonged postoperative pain control. Objective: The primary objective of this study is to compare postoperative hospital length of stay between intercostal cryoanalgesia and thoracic epidural analgesia. Secondary objectives include the evaluation of postoperative pain intensity, opioid consumption, adverse effects, postoperative complications, quality of life, quality of recovery, and patient satisfaction. Methods: This is a single-center, prospective, randomized, parallel-group clinical trial comparing intercostal cryoanalgesia with thoracic epidural analgesia for postoperative pain control in patients undergoing thoracic surgery. Fifty adult patients (≥18 years) are randomized 1:1 to either epidural or cryoanalgesia groups. All perioperative and postoperative care is provided by the attending clinical teams according to routine institutional practice, with no influence from the research team beyond randomized allocation. The primary endpoint is postoperative hospital length of stay. Secondary outcomes include pain intensity (visual analogue scale), opioid consumption, incidence of adverse effects and complications, quality of life (WHOQOL-BREF), and quality of recovery (QoR-15). Data are collected up to 1 year postoperatively. Results: Approval from the Human Research Ethics Committee was obtained in November 2024, and participant recruitment began in July 2025. 
Data collection is scheduled to commence in September 2026 and to be completed by August 28, 2027. Data analysis will begin in September 2027, with results anticipated in the first quarter of 2028. Conclusions: This study protocol outlines a randomized clinical trial designed to assess clinical outcomes associated with intercostal cryoanalgesia compared with thoracic epidural analgesia following thoracic surgery. The findings are expected to contribute to the evidence base on postoperative pain management and inform the design of future comparative and implementation studies in this field. Clinical Trial: Brazilian Registry of Clinical Trials (ReBEC): identifier RBR-78zfpxd.
Background: As the global population of older adults living with HIV continues to increase, especially in Sub-Saharan Africa (SSA), Canada, the United States, and France, there is a pressing need to comprehend how health systems are addressing the dual challenges of HIV and non-communicable diseases (NCDs) within this demographic. Objective: This scoping review seeks to identify, map, and describe the current evidence regarding tailored and specialized care models for elderly individuals living with both HIV and NCDs in these regions. Methods: Following Arksey and O’Malley’s methodological framework, the research team will systematically search peer-reviewed literature in databases such as PubMed, Embase, Web of Science, CINAHL, MEDLINE, Scopus, and Global Health, as well as grey literature in sources such as EMBASE Conference Abstracts and the Conference Proceedings Citation Index (Science and Social Science & Humanities), to encompass the range of available care models, including the Chronic Care Model, integrated and collaborative service delivery, geriatric HIV models, and multidisciplinary approaches. The selection process will involve two stages: two independent reviewers will first screen titles and abstracts for eligibility and then conduct a full-text review of the selected articles. A specially designed tool will be used for data extraction, focusing on minimising bias and accurately capturing study details. The final selection of studies will be analysed using a standardised tool to comprehensively assess all bibliographic information and study characteristics. Results: The planned study dates for the review are August to October 2025. No ethical approval is required, as the review will draw on publicly available publications and materials. The study’s conclusions will be subject to peer review and published in a scientific journal, with the abstract shared at local and international conferences. 
Conclusions: Key findings will be disseminated to health ministries, community-based organisations, and policymakers to inform policy decisions regarding the implementation of tailored, specialised care for elderly populations living with both HIV and comorbid NCDs.
Background: Fatigue is a common debilitating symptom of breast cancer, and its treatment may result in significant symptom burden and affect adherence to treatment. Graded Exercise Therapy (GET) and Cognitive Behavioral Therapy (CBT) have separately been shown in previous studies to be beneficial for the management of cancer-related fatigue (CRF). Objective: This study aims to assess the feasibility, acceptability, and potential efficacy of combining GET and CBT for the treatment of fatigue in breast cancer patients on treatment in Singapore. Methods: In this randomized controlled pilot study, a total of 100 female breast cancer patients with a self-reported rating of at least moderate fatigue (one-item fatigue scale score ≥4) will be recruited and randomized in a 1:1 ratio to undergo a combination of GET and CBT versus GET alone (standard of care). This will include a primary cohort of 90 patients with Stage I to III breast cancer who have completed surgery and adjuvant chemotherapy (if indicated), and an exploratory cohort of 10 patients with Stage IV breast cancer undergoing systemic therapy. Acceptability is measured using a client satisfaction questionnaire that includes items on cultural sensitivity. Feasibility is measured by participant uptake, adherence to sessions, and willingness to pay for therapy sessions. Efficacy is assessed based on quantitative measures of fatigue, quality of life, and physical and functional outcomes. Results: The recruitment of participants commenced on 14 July 2025 and is projected to be completed by 31 July 2026. A potential extension to this project would be the subsequent expansion of the current exploratory cohort of patients with metastatic breast cancer. Conclusions: The present study compares the use of a combination of GET and CBT against GET alone for the management of fatigue in breast cancer survivors, applied to the Singaporean context. 
The primary aim is to establish feasibility and acceptability of GET and CBT interventions in the local context, with a secondary aim of evaluating efficacy in terms of fatigue, quality of life and functional outcomes. Clinical Trial: ClinicalTrials.gov ID: NCT07116161
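As an illustrative sketch only (the protocol above does not specify its randomization procedure), 1:1 allocation of 100 participants to the two arms is often implemented with permuted blocks, which keeps group sizes balanced throughout recruitment:

```python
import random

def block_randomize(n_participants, block_size=4, seed=42):
    """1:1 allocation using permuted blocks (a common approach;
    arm labels and block size here are hypothetical)."""
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_participants:
        # each block contains an equal number of each arm, shuffled
        block = ["GET+CBT"] * (block_size // 2) + ["GET"] * (block_size // 2)
        rng.shuffle(block)
        arms.extend(block)
    return arms[:n_participants]

allocation = block_randomize(100)  # exactly 50 per arm with full blocks
```

With a block size of 4 and 100 participants, every block is complete, so the final allocation is exactly 50:50.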
Background: Existing research on the accuracy of self-assessment (SA) in health professions (HP) has shown poor accuracy of SA compared to external assessors. Objective: We systematically reviewed the evidence for educational interventions aimed at improving the accuracy of SA for technical (procedural) and non-technical (critical thinking, decision-making, and knowledge) competencies. Methods: We conducted this systematic review according to the PRISMA guidelines using the Medline, Cochrane Library, Embase, CINAHL, AMED, ERIC, Education Source, Web of Science, and Scopus databases. We included studies in English that reported on educational interventions aimed at improving the accuracy of SA versus external assessment across all health professions. A narrative synthesis of the extracted data was conducted using a convergent integrated approach, which reported both quantitative and qualitative data. We used the modified Medical Education Research Study Quality Instrument (MMERSQI) as the critical appraisal and bias tool to evaluate the methodological quality of included studies. Results: After abstract and full-text screening of 7439 studies, we included 35 studies with 3127 participants, the majority of which were of good methodological quality. Twenty-four studies explored SA of non-technical competencies, while 11 studies explored SA of technical competencies. Health professions included medicine (n=16), dentistry (n=9), pharmacy (n=4), nursing (n=2), physiotherapy (n=2), midwifery (n=1) and occupational therapy (n=1). The accuracy of SA was improved with the use of self-assessment rubrics (11 out of 14 studies), video review for feedback (5 out of 12 studies), verbal feedback (2 of 2 studies), electronic portfolios (2 of 2 studies), simulation (2 of 2 studies), and coaching (1 of 1 study). The use of internet-based applications (1 of 1 study) and didactic learning (1 of 1 study) did not improve the accuracy of SA. 
Conclusions: The accuracy of self-assessment can be improved by using SA rubrics, video and verbal feedback, simulation, electronic portfolios, and coaching. Limitations include the lack of a clear, consistent definition of self-assessment across research studies, which resulted in the exclusion of some studies from this systematic review. This information can be used by educators to improve the accuracy of SA within health professions education. Clinical Trial: PROSPERO (CRD42024586510)
Background: Clinical natural language processing (NLP) refers to computational methods for extracting, processing, and analyzing unstructured clinical text data, and holds huge potential to transform healthcare. The advancement of deep learning, augmented by the recent emergence of transformers, has been pivotal to the success of NLP across various domains. This success is largely attributed to the end-to-end training capabilities of deep learning systems. Further, advances in instruction tuning have enabled Large Language Models (LLMs) like OpenAI’s GPT to perform tasks described in natural language. While these advancements have dramatically improved capabilities in processing languages like English, these benefits are not always equally transferable to under-resourced languages. In this regard, this review aims to provide a comprehensive assessment of state-of-the-art NLP methods for mainland Scandinavian clinical text, thereby providing an insightful overview of the landscape for clinical NLP within the region. Objective: The study aims to perform a systematic review to comprehensively assess and analyze the state-of-the-art NLP methods for the Scandinavian clinical domain, thereby providing an overview of the landscape for clinical language processing within the Scandinavian languages across Norway, Denmark, and Sweden. Generally, the review aims to provide a practical outline of various modeling options, opportunities, and challenges or limitations, thereby providing a clear overview of existing methodologies and potential avenues for future research and development. Methods: A literature search was conducted in various online databases, including PubMed, ScienceDirect, Google Scholar, ACM Digital Library, and IEEE Xplore between December 2022 and March 2024. The search considered peer-reviewed journal articles, preprints, and conference proceedings. 
Relevant articles were initially identified by scanning titles, abstracts, and keywords, which served as a preliminary filter in conjunction with inclusion and exclusion criteria, and were further screened through a full-text eligibility assessment. Data was extracted according to predefined categories, established from prior studies and further refined through brainstorming sessions among the authors. Results: The initial search yielded 217 articles. The full-text eligibility assessment was independently carried out by five of the authors and resulted in 118 studies, which were critically analyzed. Any disagreements among the authors were resolved through discussion. Out of the 118 articles, 17.9% (n=21) focus on Norwegian clinical text, 61% (n=72) on Swedish, 13.5% (n=16) on Danish, and 7.6% (n=9) focus on more than one language. Generally, the review identified positive developments across the region despite some observable gaps and disparities between the languages. There are substantial disparities in the level of adoption of transformer-based models. In essential tasks such as de-identification, there is significantly less research activity focusing on Norwegian and Danish compared to Swedish text. Further, the review identified a low level of sharing resources such as data, experimentation code, pre-trained models, and the rate of adaptation and transfer learning in the region. Conclusions: The review presented a comprehensive assessment of the state-of-the-art Clinical NLP in mainland Scandinavian languages and shed light on potential barriers and challenges. The review identified a lack of shared resources, e.g., datasets and pre-trained models, inadequate research infrastructure, and insufficient collaboration as the most significant barriers that require careful consideration in future research endeavors. The review highlights the need for future research in resource development, core NLP tasks, and de-identification. 
Generally, we foresee that the findings presented will help shape future research directions by shedding light on areas that require further attention for the rapid advancement of the field in the region.
Background: Adolescent anxiety is a growing public health concern and is associated with significant academic, social, and emotional impairment. Mindfulness-based interventions (MBIs) have shown promise in reducing anxiety and improving well-being; however, engagement and acceptability remain challenges. Virtual reality (VR)–based delivery may enhance immersion and attention, potentially addressing barriers associated with traditional mindfulness formats. To date, evidence on VR-based mindfulness interventions for adolescents, particularly in Hong Kong, remains limited. Objective: This study aimed to evaluate the feasibility and acceptability of a virtual reality mindfulness-based intervention (VR-MBI) delivered via a CAVE system for adolescents with mild-to-moderate anxiety symptoms in Hong Kong. Secondary aims were to explore preliminary effects on psychological outcomes and physiological stress regulation, and to identify facilitators and barriers influencing engagement. Methods: A mixed-methods, single-group pre–post study was conducted with adolescents experiencing mild-to-moderate anxiety symptoms, recruited from secondary schools and youth service organizations in Hong Kong. Participants completed an 8-week group-based VR-MBI program. Feasibility and acceptability were assessed using recruitment, attendance, retention, homework practice frequency, dropouts, and adverse events. Psychological outcomes were measured using the Depression Anxiety Stress Scale–21 (DASS-21) and the Mindful Attention Awareness Scale (MAAS). Heart rate variability (HRV) indices (SDNN, RMSSD) were collected at baseline and post-intervention using a wearable device. Post-intervention focus group interviews explored participants’ experiences. Results: A total of 42 participants were enrolled and completed both baseline and post-intervention assessments. 
Attendance was high, with 73.8% of participants attending at least 80% of sessions, and participants engaged in regular homework practice. No dropouts or adverse events were reported. Quantitative analyses showed no significant pre–post changes in self-reported anxiety, depression, stress, or mindfulness. However, significant improvements were observed in HRV indices, indicating enhanced physiological stress regulation. Qualitative findings suggested perceived benefits in emotional regulation, stress reduction, focus, and sleep, with the immersive CAVE environment and group-based format identified as key facilitators of engagement. Conclusions: The CAVE-based VR-MBI was feasible and acceptable for adolescents with mild to moderate anxiety symptoms in Hong Kong. Although self-reported psychological outcomes did not show significant change, improvements in physiological indicators of stress regulation and positive qualitative feedback suggest early benefits not fully captured by self-report measures. These findings support further investigation of VR-delivered mindfulness interventions using controlled study designs and longer follow-up periods. Clinical Trial: n/a
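For reference, the two HRV indices reported above have simple definitions over a series of NN (inter-beat) intervals. A minimal sketch with illustrative (hypothetical) interval values:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of NN intervals (ms): overall HRV."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))

def rmssd(rr_ms):
    """Root mean square of successive differences (ms):
    short-term, vagally mediated HRV."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# Illustrative RR-interval series in milliseconds
rr = [812, 845, 790, 860, 830, 805, 870, 825]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

Higher values on both indices are generally interpreted as better autonomic (parasympathetic) regulation, which is the direction of the improvement reported in this study.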
Background: The CAPABLE (Cancer Patients Better Life Experience) project developed an application for remote monitoring and management of treatment-related symptoms, as well as for delivering a set of supplementary nonpharmacological interventions, with the aim of improving patients’ quality of life. Clinical studies were conducted to evaluate the effectiveness of CAPABLE, yielding encouraging results. However, these studies did not explore individual patients’ perspectives. Objective: Following the evaluation of the CAPABLE intervention’s efficacy, this study aims to explore end users’ overall experience with the telemonitoring system, identifying strengths and weaknesses in relation to users’ needs and expectations, in order to inform future developments. Methods: Toward the end of the clinical study, a focus group was conducted with a subset of enrolled patients. The discussion was led by a psycho-oncologist using a predefined framework of topic-related questions, which served as prompts to encourage open discussion. Patients freely shared their experiences, and a thematic analysis was performed on the collected statements. Results: The findings showed that the tool primarily served a dual function of support and reassurance. Patients reported psychological relief and a sense of security, driven by the perception of being closely monitored and supported by a multidisciplinary hospital team. CAPABLE was perceived as easy to use, effective, and useful. Nevertheless, several weaknesses also emerged. Suggestions for improvement focused on a closer alignment between CAPABLE functionalities and patients’ individual treatments and preferences, as well as concerns regarding application maintenance after the end of the project. Conclusions: The focus group provided valuable insights to inform the future development of telemonitoring applications for cancer patients.
Background: Incorporating culturally relevant music can enhance awareness and control in hypertension management and stroke preparedness. The Music4Health initiative created music-driven campaigns focused on youth and their caregivers. We outlined the components of songs developed through community participation to raise awareness about hypertension and stroke preparedness.
Methods: The project was conducted in three phases: an open call, a designathon, and a bootcamp. From October 2023 to July 2024, a crowdsourcing open call was launched online and in person. Teams and individuals submitted ideas for creatively disseminating evidence-based prevention strategies for hypertension and stroke through music. Fifteen participants were invited to a 3-day designathon to refine their songs with expert mentors. The final phase, a bootcamp, involved community assessment and intensive workshops with the top six teams to develop and record complete songs with experts and producers. The lyrics from the bootcamp were analyzed using rapid thematic analysis guided by the PEN-3 cultural model, focusing on the Relationships and Expectations and Cultural Empowerment domains.
Results: Thematic analysis of the seven finalist songs from the bootcamp identified themes using two PEN-3 model domains. The Relationships and Expectations domain included perceptions of hypertension severity, myths about hypertension (like the role of “juju”), and the necessity for healthy coping strategies. Enablers focused on the availability of hypertension prevention strategies, such as healthy diets, stress management, and avoidance of smoking. Nurturers emphasized raising awareness about hypertension among families, adopting healthy practices for loved ones, and the role of peers in promoting healthy habits. Unique cultural aspects included using Afrobeat and Fuji beats, pidgin English, and references to spirituality in adopting health practices.
Conclusions: Culturally centered music may be an appealing channel for promoting the uptake of evidence-based health interventions. This study highlights the feasibility of using participatory approaches to co-create health dissemination strategies, leveraging music's cultural relevance and appeal to engage youth and their caregivers in hypertension and stroke prevention.
Background: Digital parenting programs offer a scalable solution to improve early childhood development outcomes, especially in low- and middle-income countries like China, but face challenges in sustaining user acceptability and engagement. The culturally specific factors that shape these processes are also not well understood. Objective: This study explored the lived experiences of caregivers and facilitators in a digital-human parenting program delivered within the preschool systems in a lower-middle-income city in China, with a particular focus on the determinants of acceptability, the facilitators and barriers to engagement, and the drivers of perceived changes. Methods: Embedded within a cluster randomized controlled trial in urban China, this qualitative study used semi-structured interviews and focus group discussions with 26 caregivers and 18 program facilitators. Data were analyzed using a thematic approach. Results: Findings demonstrated a virtuous cycle where acceptability (driven by content relevance and digital usability) fostered engagement, leading to perceived changes that reinforced the cycle. Engagement was shaped by intrinsic and extrinsic motivators. Cultural factors were critical: mismatched expectations from the blurred concepts of “parenting” and “education” hindered acceptance, and a "shame culture" inhibited open discussion. An anonymous “Tree-hole” feedback system emerged as a key culturally sensitive solution. Conclusions: The effectiveness of digital parenting interventions in collectivist contexts requires deep cultural adaptation. Interventions must move beyond one-size-fits-all models to incorporate user-centered design and culturally resonant features, such as anonymous feedback systems. A hybrid, family-centered model leveraging trusted human figures is essential for building trust and maximizing impact. Clinical Trial: ChiCTR2400081911
Background: Chronic obstructive pulmonary disease (COPD), emphysema, bronchiectasis, and cor pulmonale are chronic lung diseases (CLD) that pose a global public health challenge. However, there remains a lack of accurate assessment and predictive indicators. The triglyceride-glucose (TyG) index serves as a reliable indicator of insulin resistance (IR). IR is associated with an increased incidence, prevalence, or severity of CLD. Objective: This study aims to investigate the relationship between the TyG index and the risk of CLD, as well as to assess the predictive role of the TyG index in CLD. Methods: Based on data collected from the China Health and Retirement Longitudinal Study (CHARLS) from 2011 to 2020, a total of 3,776 research subjects were included for data analysis. K-means clustering analysis was employed to categorize the subjects into three groups. The Kaplan-Meier curve was used to compare the survival rates of CLD events among the groups. Multivariate Cox proportional hazards regression analysis was conducted to examine the relationship between the TyG index and CLD events across the groups. A restricted cubic splines (RCS) regression model was utilized to explore potential linear associations between the TyG index and CLD events. Receiver operating characteristic (ROC) curves were used to evaluate the predictive value of the TyG index for CLD events. Results: During the follow-up period from 2013 to 2020, 940 subjects were diagnosed with CLD. Based on baseline characteristics, the K-means clustering analysis identified three groups of subjects. The Kaplan-Meier curve indicated statistically significant survival differences among the groups (p=0.0064). After a follow-up period exceeding 50 months, Group 1 exhibited the fastest decline and the lowest rate of disease-free survival. 
Multivariate Cox proportional hazards analysis revealed that in the unadjusted model, the TyG index of Group 1 was significantly associated with CLD events (HR, 1.58 [95% CI 1.18-2.13], p<0.05). This association remained significant in models adjusted for demographic factors (HR, 1.61 [95% CI 1.18-2.20], p<0.05) and in models adjusted for both demographic factors and disease status (HR, 1.64 [95% CI 1.19-2.26], p<0.05). Similarly, the TyG index in Group 3 showed a significant association with CLD events in both the unadjusted (HR, 1.62 [95% CI 1.12-2.32], p<0.05) and adjusted models (HR, 1.66 [95% CI 1.15-2.39], p<0.05; HR, 1.66 [95% CI 1.14-2.41], p<0.05). RCS curves demonstrated a positive association between the TyG index and CLD events in Groups 1 and 3. ROC curves indicated that the predictive value of the TyG index for CLD events was limited (AUC=0.511-0.548). Conclusions: Research indicates a positive association between the TyG index and CLD in specific populations, although it is not an independent predictor. The calculation and monitoring of the TyG index can aid in risk stratification and the development of intervention strategies for these populations.
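The TyG index itself is inexpensive to compute from two routine fasting labs. A minimal sketch of the standard formula, with hypothetical example values:

```python
import math

def tyg_index(triglycerides_mg_dl, fasting_glucose_mg_dl):
    """TyG index = ln(fasting triglycerides [mg/dL]
    x fasting plasma glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * fasting_glucose_mg_dl / 2)

# Hypothetical example: TG = 150 mg/dL, FPG = 100 mg/dL
print(round(tyg_index(150, 100), 2))  # -> 8.92
```

Note that both inputs must be in mg/dL; values reported in mmol/L need unit conversion before applying the formula.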
Background: Mental health conditions (MHC), in particular depression and anxiety, are the leading contributors to youth disability globally. In the United States, there has been a steep increase in diagnosed MHC cases over the last decade. Adolescents in rural areas are often disproportionately affected due to a combination of limited access to mental health professionals and stigma around seeking care. Untreated depression and anxiety can lead to an increased risk of substance use, academic struggles, and delinquency, making early intervention key to preventing such negative outcomes. Current treatment options, mainly psychotherapy and psychopharmacology, have shown modest effects. Prior research suggests associations between slow-paced deep breathing and autonomic function, cerebral perfusion, and stress regulation, rendering structured breathing an under-utilized tool for MHC management. Objective: This project addresses two key priorities: reducing health disparities and enhancing population and value-based care in rural communities. This project is grounded in an equity-oriented approach to serving diverse and underserved youth populations by utilizing structured deep breathing as an accessible and low-cost intervention. Methods: This study assesses the feasibility of collecting functional near-infrared spectroscopy (fNIRS) data in the full sample and magnetic resonance imaging (MRI) data in a subsample of 20 adolescents, without prespecifying neurobiological efficacy hypotheses. We aim to recruit approximately 40 adolescent patients receiving care through the Mayo Clinic Health System (MCHS) from rural communities in northwestern Wisconsin and southeastern Minnesota. All primary and secondary outcomes will be summarized using descriptive statistics, including means, standard deviations, medians, proportions, and 95% confidence intervals, as appropriate. 
Because this is a pilot feasibility study, the analytic focus is on estimation, variability, and data completeness, rather than hypothesis testing or formal statistical inference. Results: This study focuses on generating feasibility metrics and descriptive summaries of physiological and psychological data to inform future trial design. Conclusions: Adolescents with anxiety and depression are a particularly vulnerable group, often undertreated due to limited access to mental health care. The proposed breathing intervention offers an accessible and scalable tool that integrates multimodal brain physiology measures in rural youth populations.
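As a sketch of the descriptive approach above, a 95% confidence interval for a feasibility proportion (e.g., completion rate) can be computed with the normal-approximation (Wald) interval. The numbers below are hypothetical, not study data:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and Wald 95% CI for a proportion
    (normal approximation, clipped to [0, 1])."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical: 30 of 40 participants complete all sessions
p, lo, hi = proportion_ci(30, 40)  # ~0.75 (0.62, 0.88)
```

For small pilot samples like this, a Wilson or exact (Clopper-Pearson) interval is often preferred over the Wald interval, since the normal approximation can misbehave near 0 or 1.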
Background: Artificial intelligence (AI) is rapidly integrating into health professions education (HPE) and clinical practice, creating significant opportunities alongside new ethical challenges. Although current international and professional guidance establishes essential values, it offers limited direction for how clinicians, educators, learners, and institutions should act in routine educational, research, and clinical contexts. The CARE-AI (Contextual, Accountable and Responsible Ethics for AI) project responds to this practice-level gap by articulating guidance that moves beyond values toward professional accountability and equity, with explicit attention to educational and clinical practice contexts. Objective: Our study objective was to develop and validate a consensus-based, actionable framework of principles to guide responsible AI use across health professions education, research, and clinical care. Methods: We conducted a three-phase modified Delphi consensus study, reported in accordance with the Accurate Consensus Reporting Document (ACCORD). Phase I involved two international professional meetings and three purposively sampled focus groups (AI/technology, HPE, ethics/professionalism) to adapt and refine draft principles using an exploratory qualitative approach. Phase II employed an online survey with a 5-point importance scale and prespecified consensus criteria (inclusion ≥70% high ratings; exclusion ≥70% low ratings). Phase III used include/exclude/undecided voting on revised principles. Quantitative thresholds determined consensus. Qualitative free-text comments informed iterative refinement. Results: Participants represented diverse communities of practice across health professions education, clinical care, ethics, and digital health, spanning multiple professional roles and training levels. Across all phases, 303 unique participants contributed to the study. Phase I focus groups (n=61) provided early insight and direction. 
In Phase II, Delphi survey round 1, 242 participants initiated the survey, with 120 completing it (49.6%). In Phase III, Delphi survey round 2, 103 participants were invited based on expressed interest at the end of Round 1; 78 initiated the survey and 75 completed it (96.2% of starters). In Phase II, 58 of 61 statements (95%) met inclusion, and participants submitted 1,887 comments (697 were content-rich), prompting clearer accountability language, stronger equity commitments, and more usable wording. In Phase III, all nine principles and their statements met inclusion. Participants contributed 224 comments (179 were content-rich) that informed final refinements. Endorsement was near-unanimous: 96% agreed or strongly agreed that the framework clearly defined professionalism expectations for AI to meet educational, technological, and ethical needs in the health professions. Conclusions: The Health CARE-AI Framework, with its preamble and nine principles, articulates actionable, consensus-validated guidance that moves from values to competence, into professional accountability, and toward structural commitments to equity. Paired with a companion implementation guide and toolkit, the framework is intended to support use across education, research, and clinical settings. Clinical Trial: Not applicable
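The prespecified consensus criteria (inclusion at ≥70% high ratings, exclusion at ≥70% low ratings) amount to a simple decision rule per statement. A sketch under the assumption, not stated explicitly above, that "high" means ratings of 4-5 and "low" means 1-2 on the 5-point scale:

```python
def consensus_decision(ratings, include_threshold=0.70, exclude_threshold=0.70):
    """Classify one Delphi statement from its 5-point importance ratings.

    Assumed (hypothetical) reading of the criteria: include if >=70%
    of ratings are high (4-5), exclude if >=70% are low (1-2),
    otherwise carry the statement forward for revision.
    """
    n = len(ratings)
    high_share = sum(1 for r in ratings if r >= 4) / n
    low_share = sum(1 for r in ratings if r <= 2) / n
    if high_share >= include_threshold:
        return "include"
    if low_share >= exclude_threshold:
        return "exclude"
    return "revise"

decision = consensus_decision([5, 5, 4, 4, 4, 5, 4, 4, 3, 2])  # 80% high -> "include"
```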
Background: Loneliness is a prevalent and growing concern across the United Kingdom. While numerous validated scales exist to quantify the severity and prevalence of loneliness experiences across populations (the University of California, Los Angeles Loneliness Scale and the DeJong Gierveld Loneliness Scale) (de Jong-Gierveld, 1987; Russell, 1996), there remains a gap in understanding how loneliness manifests and is addressed within therapeutic practice. Given the associated stigma and shame surrounding loneliness self-disclosure, practitioner perspectives offer crucial insights into how clients express loneliness concerns within digital therapeutic environments. Objective: The objectives of this study are to gather the practitioners' perspective of loneliness within a digital therapeutic context, and are defined as follows:
1. To understand how practitioners identify loneliness concerns
2. To identify how loneliness is elicited in digital mental health interventions
3. To identify co-occurring themes (such as grief, shame, and social disconnection) that signal loneliness concerns in client communications within digital therapeutic environments Methods: Semi-structured interviews were conducted with nine experienced practitioners (minimum one year of practice). Participants included specialists in grief counselling, LGBTQ+ support, and digital mental health platform therapists. Interview transcripts were analysed using Braun and Clarke's six-phase thematic analysis approach, employing an inductive, data-driven methodology to allow themes to emerge from participant accounts rather than fitting data to pre-existing theoretical frameworks. Results: Four interconnected themes were identified: 1. Conceptualising Loneliness: practitioners distinguished between social contact and meaningful connection, identifying the experience of being “lonely in the crowd”, where clients feel disconnected despite having social networks; 2. Contextual Causes: loneliness emerges from life transitions (university, grief, relationship changes), stigmatised identities and cultural minorities (LGBTQ+, neurodiversity), and resource reduction (closure of youth services and reduced social support); 3. Expressions and Language: specifically, that clients rarely expressed loneliness directly, instead using terms like “depressed” or “misunderstood”, with disclosure patterns varying by age and stigma experience; 4. Mental Health Co-occurrence: severe mental health conditions created bidirectional cycles where loneliness exacerbated symptoms, while mental health difficulties increased social isolation. Practitioners reported that 80-90% of their clients experienced loneliness concerns, yet direct disclosure was virtually absent across all participants' experiences. 
Conclusions: Practitioners identified multiple stigmatising experiences as contextual drivers of loneliness, highlighting how loneliness emerges not only from individual factors but from broader patterns of social exclusion and marginalisation. For therapeutic practice, these insights suggest that practitioners can use awareness of stigmatising experiences as potential indicators when assessing loneliness risk. The presence of these contextual patterns was consistent across digital practitioners’ experiences, providing a foundation to develop more targeted interventions that address both the emotional experience of loneliness and its underlying social drivers across therapeutic environments.
Background: Digital transformation in healthcare, including electronic health records, telemedicine, data analytics, and mobile health applications, is reshaping service delivery and patient experience. However, evidence on how these technologies influence e-healthcare service quality within developing countries remains limited. This study aimed to examine the impact of digital transformation on e-healthcare service quality through the mediating role of clinical process change. A quantitative, cross-sectional survey was conducted among users of private-sector healthcare in Alexandria, Egypt. Data were collected using validated instruments addressing electronic health services, telemedicine, data analytics, and mobile applications, together with physician–patient communication. Responses were analyzed to assess perceptions of accessibility, security, usability, and service quality. Findings showed a predominance of neutral attitudes toward digital health technologies. Nearly half of respondents (45%) were neutral about accessibility, and only 32% strongly agreed that records were secure. Neutrality was also common regarding data analytics (33.8% awareness, 38.0% quality of care, 32.8% decision-making) and mobile applications (36.8% user-friendliness, 34.3% wait time reduction, 38.5% technical reliability). Communication indicators showed moderate ratings, with neutrality prevailing for physician listening (34.0%) and patient comfort (32.3%). Despite this neutrality, around one-third agreed on the convenience of telemedicine, and 45.8% agreed on the clarity of information provided. The study demonstrates that digital transformation, mediated partly through clinical process change, enhances clinical workflows and perceived e-healthcare service quality. However, widespread neutrality indicates knowledge gaps, highlighting the need for user-centered design, digital literacy training, and improved communication to maximize the benefits of healthcare digitalization.
Keywords: Digital transformation; E-healthcare service quality; Clinical process change; Data analytics; Telemedicine. Objective: The study aims to achieve the following objectives:
1. To examine the scope and evolution of digital transformation in healthcare systems.
2. To identify the key enablers of successful digital transformation, including technological infrastructure and leadership.
3. To explore the major barriers to digital health implementation.
4. To assess the impact of DT on healthcare delivery, patient outcomes, and provider experience.
5. To develop a conceptual framework to guide future digital transformation efforts. Methods: 6.1 Research Design
A quantitative, cross-sectional survey design was employed to examine the relationships between digital transformation (DT), clinical process change (CPC), and healthcare e-service quality in private hospitals in Egypt. Structural Equation Modeling (SEM) using Partial Least Squares (PLS-SEM) was used to test the hypothesised mediation model.
6.2 Target Population and Sampling Frame
The target population consisted of patients who received services from private hospitals in Egypt during the data collection period. Staff members or clinical professionals were not included in the sample to maintain conceptual consistency, because the dependent variable—e-service quality—is evaluated by patients, not employees.
The sampling frame covered adult patients (≥18 years old) who visited outpatient departments, emergency units, or utilised digital channels (e.g., mobile apps, portals) during the study period.
6.3 Sampling Strategy and Justification
A convenience sampling approach was used due to practical constraints, including variable patient flow across hospitals and restricted access to patient records. Although probability sampling is ideal, convenience sampling is widely acceptable in healthcare service quality research when direct access to sampling lists is not feasible.
To mitigate limitations, recruitment occurred across multiple hospitals, on different days of the week, and in various service units to improve representativeness. Results:
This section presents and analyzes the empirical findings of the study, which investigated the impact of Digital Transformation, through the dimensions of E-health Records, Telemedicine Services, Data Analytics, and Mobile Applications, on E-healthcare Service Quality within Egypt’s private healthcare sector, with Clinical Process Change acting as a mediating variable.
The descriptive analysis offered a clear understanding of the respondent demographics, suggesting a sample of digitally literate and experienced users.
The results revealed generally positive attitudes toward digital healthcare services, especially in areas related to telemedicine convenience, mobile app functionality, and perceived security.
Using Structural Equation Modelling (SEM), the study validated a strong model fit and confirmed the reliability and validity of the measurement constructs. The analysis demonstrated that Digital Transformation has a significant positive impact on both Clinical Process Change and E-healthcare Service Quality.
Furthermore, the results established that Clinical Process Change partially mediates the relationship between Digital Transformation and E-healthcare Service Quality (H4), reinforcing the importance of internal operational improvements in realizing the benefits of digital initiatives.
Overall, the findings confirm that successful digital transformation initiatives in healthcare not only require technological implementation but must be accompanied by clinical process enhancements to achieve higher service quality. These results have significant implications for healthcare decision-makers, emphasizing the need to invest in integrated digital and process change strategies to improve patient outcomes and service delivery in the digital age.
Figure 4 shows the measurement model, which consists of 11 latent variables: E-health records, Telemedicine services, Data analytics, Mobile App, Physician-Patient Interaction, Information Accessibility, Security, Responsiveness, Reliability, Ease of use, and Loyalty. Conclusions:
Our empirical results resonate strongly with the broader scholarly literature: digital transformation, including e-health records, telemedicine, data analytics, and mobile apps, significantly enhances both clinical processes and perceived e-healthcare service quality. The partial mediation through clinical process change further corroborates system-level frameworks and empirical studies describing how digital tools translate into quality improvements when embedded in improved clinical workflows. These results provide solid academic validation and practical guidance for implementing digital innovation in healthcare.
Background: Pediatric survivors of critical illness often face persistent psychosocial challenges after PICU (Pediatric Intensive Care Unit) discharge, but follow-up support across hospital, home, community, and school settings remains inconsistent. Digital interventions could help bridge these gaps and support recovery. Objective: To systematically review the literature on digital psychosocial follow-up solutions for children who survived critical illness, describing target populations, intervention design, evaluation methods, and psychosocial effects. Methods: A systematic literature review was performed using the Scopus database, supplemented by backward citation searches and hand searches of related reviews. Eligible studies included children surviving medical conditions potentially requiring PICU care, implemented a digital intervention (excluding telephone-only), and evaluated psychological or social outcomes; studies published before 2010, in non-English languages, not peer-reviewed, lacking full text, not original research, involving mixed child-adult populations, or with unspecified participant age or diagnosis were excluded. The quality of the included studies was appraised with the MMAT (Mixed Methods Appraisal Tool) 2018. Owing to heterogeneity in populations, interventions, comparisons, outcomes, and study designs, a narrative synthesis was applied. Results: Thirty-three publications reporting on 31 unique studies (N=1,717 participants, ages 0–17) were included. The studies spanned North America, Europe, and Asia and were conducted in inpatient, outpatient, home, and school contexts. Interventions comprised web applications (n=9/31), mobile apps (n=7/31), social robots (n=6/31), video games (n=4/31), and mixed modalities (n=5/31). Many studies (n=18/31) engaged guardians as co-participants or co-developers along with children. Target conditions were predominantly cancer (n=11/31), type 1 diabetes (n=8/31), and asthma (n=7/31). 
Mixed methods designs were most common (n=11/31), followed by nonrandomized quantitative trials (n=7/31) and randomized controlled trials (n=6/31). Most studies reported positive psychosocial effects. Across outcomes, self-management (n=3/31) and quality of life (n=5/31) showed the most statistically significant (P<.05) benefits. Evidence for psychosocial outcomes was less consistent. The certainty of evidence was limited by a single-database search, single-reviewer screening, variable methodological quality, and heterogeneity. Conclusions: Digital psychosocial follow-up for childhood critical illness survivors appears feasible and promising, particularly for self-management and quality of life, but the evidence base is heterogeneous and methodologically constrained. To strengthen clinical translation, future work should prioritize rigorous trials, standardized and theory-informed pediatric psychosocial outcome sets, longer follow-up, transparent reporting, and equity-focused designs that integrate family-centered hybrid clinic-home pathways and, where feasible, predictive features. Clinical Trial: PROSPERO CRD42022364703; https://www.crd.york.ac.uk/PROSPERO/view/CRD42022364703
Background: The Ready-Made Garments (RMG) industry is a vital part of Bangladesh's economy, employing over 4 million workers from low-income backgrounds whose healthcare needs are generally neglected. Historically, the sector has been criticized for labor exploitation, unsafe working conditions, and rights violations, with massive loss of life in accidents. While compliant factories adhere to better labor standards, many non-compliant factories expose workers to poor conditions, increasing their health risks. The COVID-19 pandemic exacerbated vulnerabilities within this workforce, resulting in widespread factory closures, massive job losses, heightened health risks, and millions of workers left without wages. Although the government provided some relief, it lacked policies for job security, social protection, health services, and emergency relief. Although technology has played a critical role in crisis response and healthcare, access to these technologies remains limited for these workers due to digital literacy gaps. Many RMG workers primarily use basic mobile phones for communication, not for accessing health or emergency services. Therefore, there is a need to develop a sustainable system that leverages their existing technological familiarity to ensure their voices are heard. Objective: Our aim was to gain a deeper understanding of RMG workers' experiences based on their existing work environments and interactions with technology, healthcare management, and the impact of COVID-19 on their circumstances. By understanding these aspects, we can recommend a technology-based framework design that serves as a sustainable and contextual model. Methods: We conducted in-person interviews with 55 RMG workers, comprising 32 female and 23 male participants from urban and suburban areas of Dhaka and suburban Gazipur, in Phase 1, before the pandemic. The participants were aged between 18 and 40. 
We reconnected with 12 participants from Phase 1 during the pandemic in Phase 2, in addition to three stakeholders from RMG factories, via one-on-one phone conversations. Each interview was conducted in Bengali, and we obtained consent to record the audio. Overall, 846 minutes of discussion were translated and transcribed. The results were analyzed using thematic analysis. Results: We found insights into the working conditions, personal experiences, perceptions of healthcare, lifestyle choices, and technology use, all of which differed by factory type, an intersection that has not previously been discussed together. Those employed at compliant factories enjoyed better healthcare support and utilized technology more effectively compared to their counterparts in non-compliant factories. Due to the pandemic, the situation for all workers changed dramatically, regardless of factory compliance, leading to major impacts on their daily lives, heightened health and safety worries, and a lack of emergency assistance. The RMG sector faced numerous challenges, underscoring the pressing need for targeted emergency relief and healthcare services for these workers. Conclusions: This research examined the workplace and daily lives of RMG workers, focusing on their challenges, healthcare perspectives, and technology use during the pandemic. Based on the findings, we proposed a technology-based framework design called VOICE, which connects workers to service providers through a straightforward interface. This would help reach marginalized communities during emergencies and provide essential support to improve their well-being.
Background: The fragmentation of electronic health records (EHRs) is a major barrier to integrated cancer care, negatively impacting diagnostic efficiency and treatment continuity. Blockchain technology has emerged as a promising solution for secure health data sharing, with the potential to enhance interoperability, data governance, and traceability in complex clinical settings like oncology. However, the successful implementation of such technology is contingent upon patient acceptance and trust, which remain underexplored. Objective: This study aimed to investigate the perceptions of oncology patients regarding the use and control of their digital health data. We specifically assessed their willingness to share information, their level of trust in different stakeholders within the healthcare ecosystem, and the conditions under which they would find blockchain-based solutions acceptable. Methods: We conducted a cross-sectional, exploratory, quantitative study with 110 oncology patients at Hospital Santa Izabel in Salvador, Brazil. A structured questionnaire, validated by experts for clarity and relevance, was used. Data collection was managed via the REDCap platform. The instrument's internal consistency was assessed using the Cronbach's alpha coefficient. Descriptive, comparative, and correlational statistical analyses were performed to identify differences across sociodemographic groups. Results: A majority of participants demonstrated a high acceptance of digital tools for storing and sharing health data (86.4%), which increased significantly when security measures like anonymization and encryption were assured (83.6%). Trust in data sharing varied substantially by institution: it was highest for healthcare professionals (79.1%), moderate for hospitals (51.8%), and considerably lower for the government (10%) and the pharmaceutical industry (15.5%). 
A statistically significant difference was found in technology adherence by age, with younger patients (18-59 years) showing higher acceptance than older adults (p = 0.024). The survey domains—self-management, adherence, and governance—demonstrated satisfactory internal consistency (Cronbach's alpha ranging from 0.75 to 0.88). Conclusions: Our findings indicate a high willingness among oncology patients to adopt digital health tools for data management, provided that robust security, transparency, and patient empowerment are central to the design. The significant trust gap between clinicians and institutions like government and industry underscores the critical need for clear communication and trustworthy governance models. To foster confidence and promote equitable access, future digital health platforms must be designed to be accessible, reliable, and centered on patient autonomy. Clinical Trial: This was an observational, cross-sectional study and did not involve a clinical intervention. Therefore, registration in a clinical trials registry (such as ClinicalTrials.gov) was not applicable. The study was conducted with the approval of the Institutional Review Board (CAAE: 70726523.3.0000.5520). All study records, including de-identified raw data, the survey instrument, and consent forms, are securely archived by the authors in accordance with institutional and ethical guidelines.
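The abstract reports satisfactory internal consistency (Cronbach's alpha of 0.75 to 0.88) for the survey domains. As a hedged illustration of how that statistic is computed (the standard formula, not the study's actual analysis code or data), a minimal pure-Python sketch:

```python
from statistics import pvariance

def cronbach_alpha(scores: list[list[int]]) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])                        # number of items
    items = list(zip(*scores))                # transpose to per-item columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)
```

Values of 0.70 or above are conventionally read as acceptable internal consistency, which is why the reported 0.75–0.88 range supports the instrument's reliability.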
Background: The COVID-19 pandemic gave rise to a global “infodemic” in which social media platforms amplified misinformation. Despite high social media adoption rates and heavy reliance on social media for pandemic news in Arab-speaking countries, relatively little is known about the prevalence and characteristics of online Arabic COVID-19 misinformation. Objective: To capture and analyze a snapshot of the COVID-19 misinformation ecosystem in Arabic, identifying characteristics and patterns to guide future research and interventions of particular benefit to this linguistic region. Methods: We compiled a database of 234 COVID-19 misinformation claims published online from March 2020 to March 2022, sourced from four International Fact-Checking Network (IFCN)-certified Arabic fact-checking organizations. Claims were coded inductively and deductively with high inter-rater reliability to determine misinformation type (κ = 0.88), narrative typology (κ = 0.913), framing strategies (κ = 0.72), medical jargon usage (κ = 0.794), and societal implications (κ = 0.752). All Cohen's kappa coefficients were significant at p < 0.001. Results: Facebook was the most popular platform, followed by Twitter, with regular users being the primary source of debunked claims. The most prevalent narrative typologies were COVID-19 biological aspects (origins, existence, diagnosis, prevention, transmission, and cures) (47.2%) and vaccines (30%). Fabricated/manipulated content (54.9%) and misleading content (36.9%) were the most common misinformation types. The most frequent framing strategy involved distortion of science and medicine (29.6%), followed by entertainment/satire (23.6%), political content (18.9%), and conspiracies (13.3%). Notably, 36.3% of claims were translated from English, and only 50% of the analyzed content was moderated by the original platforms. 
Conclusions: Fact-checked Arabic COVID-19 misinformation exhibited distinct patterns, including heavy reliance on translated content, manipulated content, and scientific distortion as a credibility strategy, and significant gaps in platform moderation. These findings highlight the need for enhanced Arabic-language content moderation, cross-linguistic fact-checking collaboration, culturally appropriate media and health literacy interventions, and rebuilding institutional trust to address misinformation in the Arab-world effectively. Clinical Trial: N/A
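The coding reliability above is reported as Cohen's kappa (κ = 0.72 to 0.913). As a hedged sketch of what those coefficients measure (the standard two-coder formula, not the study's analysis code), chance-corrected agreement can be computed as follows:

```python
from collections import Counter

def cohen_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
    Chance agreement is derived from each coder's marginal label frequencies."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)
```

Kappa above roughly 0.6–0.8 is conventionally read as substantial agreement, so the reported range indicates the coding scheme was applied consistently across raters.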
Background: Amidst the COVID-19 pandemic, Action4Diabetes (A4D), a non-profit organisation collaborating with local healthcare professionals across Southeast Asia (SEA), developed HelloType1, a digital educational platform for Type 1 diabetes (T1D) in regional languages. Launched sequentially in Cambodia (2021), Vietnam (2022), Thailand (2022), and Malaysia (2023) through Memorandums of Understanding (MOUs), the digital platform aimed to improve diabetes awareness, education, and access to credible local-language resources. Objective: This study aims to evaluate the usability, reach, and online engagement of HelloType1 from 2021 to 2024. Methods: Website traffic data from Google Analytics (GA4) and Facebook metrics were analysed to assess user growth, traffic sources, and engagement trends across countries. Results: Total users increased by 645% between 2021 and 2022, and by a further 31% between 2022 and 2023. By 2024, 78% of visits originated from search engines, 13% from social media, and 9% from direct access. Pageviews rose from 4,644 (2021) to 82,689 (2024). Facebook followers grew from 940 to 4,553, with engagement rates increasing from 8% (2022) to 29% (2024). Cambodia achieved the highest reach, while Vietnam showed strong engagement among younger female caregivers. Conclusions: HelloType1 demonstrates a scalable, low-cost digital model for delivering culturally adapted T1D education in resource-limited SEA settings. Clinical Trial: NA
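The growth figures above (a 645% user increase, followers rising from 940 to 4,553) are relative percentage changes. A trivial helper makes the convention explicit; this is a generic illustration of the arithmetic, not the study's analysis code:

```python
def pct_growth(start: float, end: float) -> float:
    """Percentage growth from a starting value to an ending value.
    E.g. a 645% increase means the end value is 7.45x the start."""
    return (end - start) / start * 100

# Applying it to the reported follower counts (940 -> 4,553)
follower_growth = pct_growth(940, 4553)  # ~384% growth over the period
```

Note that a 645% increase corresponds to roughly a 7.45-fold multiplication, a distinction ("increase by" vs. "increase to") that growth reporting often blurs.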
Background
Cancer predisposition syndromes (CPS) are identified in approximately 10% of pediatric cancer patients, with an increasing number of affected families each year. Despite the known psychosocial challenges faced by these families, including uncertainty in communication, genetic risk implications, and lifelong surveillance, there is limited data on the specific support needs of families in Germany.
Objective
The KiTDS-Care study aims to: (1) Conduct a comprehensive analysis of the current care landscape, psychosocial stressors, psychosocial burden and support needs of families with children/ adolescents diagnosed with CPS in Germany; and (2) Develop recommendations for improving psychosocial care based on these findings.
Methods
A mixed-methods approach will be employed. The first phase involves a systematic review to gather existing literature on the psychosocial situation and support needs of CPS families. In the second phase, a cross-sectional survey of families (parents and children/adolescents aged ≥7 years) will assess, among other outcomes, psychosocial well-being, quality of life, support needs, and care utilization. Additionally, qualitative interviews will be conducted with families and healthcare providers to explore psychosocial experiences and service and care gaps in greater depth. Data will be analyzed using descriptive and inferential statistics, while qualitative data will be processed through content analysis. Recommendations for psychosocial care will be derived and validated through feedback from both families and healthcare professionals.
Discussion
The study results will provide a comprehensive overview of the psychosocial situation and supportive care needs of families affected by a child's or adolescent's CPS. The results will help improve family-centered care and psychosocial support systems, identify gaps in current care practices, and inform more effective approaches.
Trial registration
German Clinical Trials Register, ID: DRKS00035594, Registered on 9th December 2024
Background: Continuing Medical Education (CME) is a legal and ethical obligation for physicians in Germany. The rapid rise of large language models (LLMs) such as ChatGPT, Gemini, Claude, and Grok raises concerns about the integrity of CME assessments, as LLMs can already pass German CME tests. Objective: To determine whether the choice of document format (searchable PDF, raster PDF, vector PDF) and LLM can influence the solvability of CME test questions by LLMs above the passing threshold specified for each CME module (typically 70%). Methods: In a fully crossed within-subjects repeated-measures structure, 18 expired CME articles from three major German publishers across six specialties will be converted into three PDF formats and processed by four current LLMs (ChatGPT-5, Mistral 3.1 small, Claude Sonnet 4, Grok-4) and two predecessor versions (ChatGPT-4o and Grok-3). Each model will answer every article once per file-format condition. This results in 18 experimental conditions. The primary outcome is the proportion of correctly answered questions; secondary outcomes are pass/fail rate and efficiency. The study has been approved by the University of Witten/Herdecke Ethics Committee (reference number S-260/2025, dated 08.10.2025) and is preregistered at the Open Science Framework (DOI: 10.17605/OSF.IO/V96R5). Results: Data collection will start in January 2026 and will last approximately 4 weeks. As of December 2025, the study has been preregistered, and no results are available yet. The analyses will quantify performance differences across document formats and model generations; these findings may inform the feasibility of non-searchable document formats as a temporary measure to reduce AI-enabled cheating risks in CME contexts. 
Conclusions: By quantifying how document format constrains LLM performance, this study aims to evaluate simple technical safeguards that may reduce AI-assisted manipulation of CME tests and inform regulators and CME providers on balancing assessment validity, accessibility, and responsible LLM integration into postgraduate medical education. Clinical Trial: Open Science Framework DOI: 10.17605/OSF.IO/V96R5.
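The protocol's fully crossed design (six LLMs, three PDF formats, 18 conditions, with every model answering every article once per condition) can be made concrete with a short enumeration. This is a sketch of the design logic only; the model and format names are taken from the abstract, and the run count per article follows from the crossing:

```python
from itertools import product

# Models and file formats as listed in the protocol.
models = ["ChatGPT-5", "Mistral 3.1 small", "Claude Sonnet 4", "Grok-4",
          "ChatGPT-4o", "Grok-3"]
pdf_formats = ["searchable", "raster", "vector"]

# Fully crossed within-subjects design: every model meets every format.
conditions = list(product(models, pdf_formats))   # 6 x 3 = 18 conditions

# With 18 articles each answered once per condition, the total run count is:
n_articles = 18
total_runs = n_articles * len(conditions)         # 18 x 18 = 324 runs
```

The crossing is what lets the analysis separate format effects from model effects: each model serves as its own control across formats, and vice versa.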
Background: Atopic dermatitis (AD) affects 10–20% of children and 5–10% of adults, with approximately 89% of cases diagnosed as mild to moderate. AD affects over 200 million individuals around the world and is viewed as an important health problem due to its elevated prevalence, long disease course, and heavy disease burden. Qi Wei (QW) Antipruritic Lotion is an empirical prescription formula composed of eight Chinese herbs, with purported effects of clearing heat, drying dampness, detoxification, and alleviating pruritus. While it is employed in clinical settings for pruritic dermatoses, robust evidence from high-quality clinical trials is still lacking. Objective: This study will evaluate the efficacy and safety of QW Antipruritic Lotion for the treatment of AD. Methods: This single-center, randomized, double-blind, placebo-controlled trial will enroll 154 patients with mild-to-moderate AD from the Hospital of Chengdu University of TCM. Participants will be randomly assigned (1:1) to either the treatment group (QW Antipruritic Lotion) or the placebo control group. The trial comprises an 8-week treatment period followed by a 12-week follow-up. Efficacy will be assessed using several endpoints to measure improvement in clinical severity. The primary outcome is the reduction in the SCORAD (Scoring Atopic Dermatitis) index. Secondary outcomes include the Eczema Area and Severity Index (EASI) scores and the Patient Self-Assessment Questionnaire (DQLI, NRS), as well as safety outcomes. A clinical dermatologist will perform assessments at baseline (week 0) and at weeks 4, 8, 12, 16, and 20. Results: This study will evaluate the efficacy and safety of QW Antipruritic Lotion for the treatment of AD. Conclusions: This study will evaluate the efficacy and safety of QW Antipruritic Lotion for the treatment of AD.
Background: In a significant proportion of carotid interventions, carotid graft replacement is required to achieve a successful outcome, either as a primary method or as a bail-out solution. An exhaustive mapping of the sparse and heterogeneous evidence available in the literature may provide a more comprehensive understanding of this topic. Objective: This scoping review aims to examine and summarize the evidence from scientific literature concerning the role of graft interposition during elective and emergent carotid interventions. Methods: This scoping review will be conducted following recommendations outlined by Levac et al and will adhere to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines for reporting. Peer-reviewed papers written in English will be searched in the following databases: PubMed/MEDLINE, Embase, Scopus, and Web of Science. The web-based systematic review platform Rayyan will be used to create a data extraction template. It will cover the following items: elective carotid endarterectomy, emergent carotid endarterectomy, carotid artery restenosis, carotid artery trauma, carotid artery aneurysm, carotid artery dissection, carotid patch infection, internal carotid artery fibrosis, and carotid artery tumour. All study designs (RCTs, observational studies, case series) will be considered. Non-English studies, animal studies, cadaveric/anatomical-only reports, purely technical notes without clinical data, and data regarding extracranial-to-intracranial bypass will be excluded. Study selection based on title and abstract screening (first stage), full-text review (second stage), and data extraction (third stage) will be performed by a group of researchers, whereby each paper will be reviewed by at least 2 people. 
Any conflict regarding the inclusion or exclusion of a study and the data extraction will be resolved by discussion between the researchers who evaluated the papers; a third researcher will be involved if consensus is not reached. Results: A preliminary search of PubMed/MEDLINE, Embase, Scopus, and Web of Science was conducted, and no current or ongoing systematic reviews or scoping reviews on the topic were identified. The results of the study are expected in July 2026. Conclusions: Our scoping review will seek to provide an overview of the available evidence and identify research gaps regarding the role of graft interposition during elective and emergent carotid interventions.
Background: Autism Spectrum Disorder (ASD) is characterized by persistent difficulties in social communication, restricted interests, and sensory challenges. Although Applied Behavior Analysis (ABA) is widely used, traditional interventions often face challenges, such as high costs, limited access to qualified therapists, and balancing structured therapy with individual needs. Recent advances in consumer-grade virtual reality (VR) and artificial intelligence (AI) offer opportunities to design personalized, immersive interventions aligned with naturalistic developmental behavioral intervention (NDBI) principles. Objective: This study aimed to design, develop, and evaluate an immersive VR game, the “Elevator Game,” targeting verbal requesting and social initiation, and to determine its feasibility, acceptability, and preliminary behavioral impact on children with ASD. Methods: Three children with autism and limited verbal skills participated in home-based VR sessions consisting of 10-15 minutes of gameplay followed by breaks. Results: Results suggest the intervention is feasible, well tolerated, and associated with increased spontaneous verbal requesting. Conclusions: AI-assisted VR interventions integrating ABA and NDBI principles are feasible, engaging, and potentially effective for children with ASD, including those with limited progress in traditional therapy. Personalized reinforcers, immersive engagement, and sensory-adaptive environments appear critical for success. Findings support further development and evaluation in larger trials.
Background: The growth of patient-facing health technology has the potential to transform the delivery and receipt of patient-centered primary care. However, successful integration of data from these digital tools into clinical workflows depends not only on technical efficacy but also on usability across diverse patient populations. To ensure the successful integration of digital tools, Tech Testing Panels (TTPs) can assess usability and provide feedback. Objective: This study aimed to assess technology usage and literacy among adult primary care patients who opted into a TTP and to compare these measures between English-preferring and Chinese-preferring patients. Methods: We conducted a cross-sectional online survey from April to July 2024 at an urban academic primary care–based TTP composed of adult patients who used the patient portal and spoke English and/or Chinese. The survey assessed sociodemographic characteristics and technology usage and literacy, including comfort with app installation, video chat setup, and problem-solving of technical issues. Respondents received a $5 online gift card for completion. Bivariate analyses were conducted using Pearson’s chi-squared and Fisher’s exact tests to compare responses by preferred language. Results: Of the surveys distributed, the response rate was 53.7% for surveys in English and approximately 27.0% for surveys in Chinese, for a total sample of 222 respondents. Respondents had a mean age of 61.6 years, with nearly half aged 65 or older. A majority had high educational attainment and household incomes. Most respondents strongly agreed that they could install applications (85.5%) and initiate video chats independently (82.4%). Internet access was nearly universal (99.1%), and patient portal usage was high (99.1%), with most accessing the portal via smartphones or tablets (54.8%).
However, Chinese-preferring respondents reported significantly lower technology literacy across multiple domains compared with English-preferring respondents, including lower confidence in using applications (64.5% vs 89.0%, P=.001) and resolving technical issues (38.7% vs 60.0%, P<.001). Conclusions: While technology usage was high in this sample of adult primary care patients in a TTP, disparities in technology literacy by preferred language persisted. Chinese-preferring patients were less confident in navigating digital tools, despite similar technology usage. These findings underscore the importance of TTPs with diversity in technology literacy to support inclusive development of culturally and linguistically responsive patient-facing digital tools. Addressing barriers identified among end users with different degrees of technology literacy will be essential to ensuring equitable adoption of digital health tools and supporting inclusive innovation in primary care.
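The bivariate comparisons above rely on Pearson’s chi-squared test. For a 2×2 table (e.g., confident vs. not confident, by preferred language), the statistic and its 1-df P value can be sketched in pure Python; the counts in the usage note below are hypothetical, not the study’s data.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (1 df, no continuity correction) for a
    2x2 table [[a, b], [c, d]]. Returns (statistic, p_value)."""
    n = a + b + c + d
    # Shortcut formula equivalent to summing (observed - expected)^2 / expected
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df, the chi-squared survival function reduces to erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p
```

For example, with hypothetical counts chi2_2x2(89, 11, 20, 11) gives a statistic of roughly 10 and P<.01, which would be reported as a significant difference between language groups.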
Background: Accurate assessment of surgical margins is essential in the treatment of squamous cell carcinoma (SCC) of the upper aerodigestive tract or cutaneous origin, as well as basal cell carcinoma (BCC). Intraoperative frozen-section analysis is the current standard but is time-consuming and requires coordination among surgical and pathology teams. Reflectance confocal microscopy offers rapid, real-time evaluation of surgical margins and may provide diagnostic information comparable to frozen-section analysis, while enabling the development of a reference atlas for tumor visualization. Objective: The HISTOBLOC study aims to evaluate the concordance between confocal microscopy and intraoperative frozen-section examination for assessing surgical margins in SCC and BCC. A secondary objective is to compile a confocal imaging reference atlas to document tumor features and support consistent interpretation. Methods: HISTOBLOC is a prospective, monocentric, randomized pilot study conducted at the Institut de Cancérologie de Lorraine, a nonprofit comprehensive cancer institute. Patients undergoing surgical excision for SCC or BCC have their margins assessed using both confocal microscopy and frozen-section analysis. The study measures concordance between the two methods and the time required for intraoperative margin assessment. Results: Patient recruitment for the study began on July 26, 2023, and was completed on June 4, 2025. All patients were enrolled according to the approved study protocol. Experimental procedures have been conducted on all recruited participants, and data collection has been completed. The results are currently undergoing statistical analysis and interpretation. Conclusions: This protocol describes a study designed to determine whether confocal microscopy can provide rapid, reliable intraoperative margin assessment comparable to frozen-section analysis, and to generate a reference atlas for clinical and research use.
Clinical Trial: ClinicalTrials.gov; NCT05935995; https://clinicaltrials.gov/study/NCT05935995
Background: Inclusive physical education (PE) plays an important role in promoting participation and development among students with different abilities. However, many teachers do not have adequate tools to modify PE activities to meet these diverse needs. In addition, parents are essential partners, as their involvement helps to reinforce strategies and provide useful information about their children. While online platforms provide a practical way to deliver such solutions, only a few are intentionally created to support both teachers and parents in implementing inclusive PE learning. Objective: This study aimed to develop an online platform that provides inclusion strategies for PE teachers and to examine how teachers and parents perceived its usability, acceptability, and overall usefulness using a mixed methods approach. Methods: A mixed methods research design was adopted in two phases. Phase 1 involved the development of the platform through expert consultation and literature review, with feedback from educators. Phase 2 focused on user evaluation and involved usability testing using the System Usability Scale (SUS) and the Questionnaire for User Interaction Satisfaction (QUIS), alongside task performance metrics. Semi-structured interviews were also conducted with PE teachers (n=8) and parents (n=8). Quantitative data were analyzed descriptively and with inferential statistics, while qualitative responses were coded thematically and the results were integrated using a joint display. Results: All participants successfully completed the assigned tasks, with only a few instances of minor difficulty during task completion (14 total errors across 136 task attempts). Platform satisfaction scores were good among PE teachers (8.03±1.59) and parents (8.13±1.06). QUIS scores were high among PE teachers (overall reaction: 8.03±1.59; learning: 9.69±0.40) and parents (overall reaction: 8.13±1.06; learning: 8.63±1.57).
Mixed-methods integration showed strong convergence between high satisfaction scores and positive professional value quotes. However, divergence was noted in the learning domain, as high scores contrasted with reported uncertainty among new users. Lower system capability scores from parents (6.69±2.25) were consistent with qualitative concerns about navigation inefficiencies and slow platform response. Desktop design was praised, while the mobile view was considered visually dense. Conclusions: The online platform demonstrated strong usability and satisfaction among PE teachers and parents. Future work will involve broader implementation and evaluation of its impact on students’ participation outcomes.
Background: Young people increasingly experience mental health challenges and often turn to the internet for support. Self-guided digital mental health promotion services have become widely used resources for youth seeking help and guidance. These platforms offer accessible, anonymous support, yet little is known about the concerns young people articulate when engaging with them. Objective: This study examines inquiries submitted to a digital letterbox on one of Denmark’s most widely used digital mental health promotion services, Mindhelper.dk, to identify recurring themes in young people's inquiries about mental health and well-being. In addition, it explores how gender influences these experiences in the context of engagement with a self-guided digital platform. Methods: Employing an inductive analysis strategy and a grounded theory–inspired coding framework, this study analyzes a dataset of 2,523 inquiries submitted to the Mindhelper letterbox between March 2016 and August 2023. The archive provides rare, unsolicited first-person accounts from young people in moments of emotional vulnerability, offering immediate and authentic insights into their mental health concerns. Results: The analysis identifies 17 recurring themes that reflect the mental health challenges young people seek help for. These themes are grouped into three overarching analytical categories: Social Relations and Social Contexts, Emotional Life, and Body and Illness, with the first two dominating the material. The most prominent themes include Sociality, Love Life, Unease, Self-Criticism and Insecurity, and Communication and Reaching Out for Support. The intersection of themes underscores the central role of social relationships in young people's mental health and well-being, with frequent co-occurrence of inquiries addressing both Love Life and Sociality. Regardless of gender, users frequently inquire about Sociality and Love Life, indicating shared concerns related to social relationships.
However, girls were markedly overrepresented among inquirers, highlighting potential gender differences in help-seeking behavior. Conclusions: Social relationships play a central role in young people's lives, yet many also face emotional struggles, particularly related to anxiety, self-esteem, and despair. The letterbox serves as an important help-seeking channel for youth who may lack access to support elsewhere, with a marked overrepresentation of girls, indicating gender patterns in help-seeking behavior. This study provides novel insights into the mental health challenges Danish youth face and their engagement with digital support services, informing the design of targeted, gender-sensitive self-help content and guiding future efforts to promote well-being and reduce barriers to help-seeking.
Background: The nursing field is facing unprecedented challenges driven by an explosion of heterogeneous data, persistent data silos, and increasing complexity in clinical decision-making. These issues underscore the urgent need for a systematic, integrative framework to organize and leverage nursing information effectively. Objective: This paper aims to conceptualize “Nursing-Omics,” a novel, multi-omics-inspired integrative framework for future-oriented nursing informatics. Methods: Using a theoretical development approach, we draw on paradigms from genomics, proteomics, and other omics disciplines, integrating core principles from nursing informatics, systems science, and data science to construct a coherent conceptual architecture. Results: We propose a formal definition of Nursing-Omics and introduce a multidimensional integrative framework comprising Intervenomics, Responsomics, Behaviomics, Exposomics, and Experienomics. The framework is grounded in four foundational principles: holism, dynamism, data-driven insight, and individualization. Conclusions: Nursing-Omics offers a transformative paradigm for the systematic integration of nursing data, enabling precision decision-making, accelerating knowledge generation, and advancing intelligent, person-centered care. It represents a critical direction for the evolution of nursing informatics in the era of digital health. Clinical Trial: Not applicable.
Introduction: Large Language Models (LLMs) are increasingly applied in medical contexts, offering benefits for clinical decision-making, education, and patient communication. However, bias in LLM outputs may exacerbate healthcare disparities and compromise trust. This systematic review will examine how bias is identified, measured, and mitigated in healthcare use cases of medical LLMs.
Methods and Analysis: A systematic search will be conducted in EMBASE, MEDLINE, PsycINFO, PubMed, ACL Anthology, ACM Digital Library, ArXiv, MedRxiv, and BioRxiv. Studies will be included if they investigate bias in LLM applications within healthcare, report experimental findings, and are published in English from 2017 onwards. Grey literature with adequate methodological detail will also be considered. Findings will be synthesised using a narrative approach due to anticipated methodological heterogeneity.
Ethics and Dissemination: As a secondary analysis of published literature, ethical approval is not required. Results will be disseminated through peer-reviewed publications, academic conferences, and open-access repositories to inform responsible LLM deployment in healthcare.
Registration Details: This protocol has been registered in PROSPERO (CRD420250638943; https://www.crd.york.ac.uk/PROSPERO/view/CRD420250638943) and OSF.
Background: After non-curative resection for early gastric cancer (EGC) with endoscopic submucosal dissection (ESD), gastrectomy with lymphadenectomy is generally recommended. However, most patients are found to have no residual cancer in the stomach or regional lymph nodes, while surgery carries a considerable risk of postoperative complications. In Western settings, patients with EGC are often elderly and have concomitant comorbidities. Objective: In this study, we aim to assess the feasibility and safety of indocyanine green (ICG)-guided lymphadenectomy with or without laparoscopic and endoscopic cooperative surgery (LECS) following non-curative ESD for EGC. Methods: This is a single-center, phase 1 prospective trial. Patients with EGC treated with ESD within the expanded criteria will be considered for inclusion, provided the resection was non-curative (eCuraC2). For patients with radically resected EGC, ICG-guided lymphadenectomy alone will be performed. In those with non-radically resected EGC, ICG-guided lymphadenectomy and LECS will be performed. The primary objective is to evaluate the safety of the procedure, defined as complications of Clavien-Dindo grade III or higher. The secondary endpoints include other complications, operation time, number of positive lymph nodes, short-term mortality, and health-related quality of life. Results: As of January 9, 2026, no patients have yet been recruited to the trial. Conclusions: ICG-guided lymphadenectomy with or without LECS is an appealing and potentially promising treatment strategy following non-curative ESD for EGC. To the best of our knowledge, no previous studies from the Western world have been conducted on this subject. Clinical Trial: ClinicalTrials.gov identifier: NCT07295002, registered December 18, 2025. URL: https://clinicaltrials.gov/study/NCT07295002?term=NCT07295002&rank=1
Background: Anesthesiology healthcare workers across various hospital levels in China were invited to participate in an electronic survey. Objective: The study aimed to assess the prevalence and impact of occupational burnout among anesthesiologists and anesthetic nurses in China, identifying key contributing factors and providing a scientific basis for intervention strategies. The importance of this research lies in addressing the critical shortage of medical personnel in anesthesiology and its impact on healthcare quality. Methods: An electronic questionnaire was used to provide a comprehensive analysis of occupational burnout among anesthesiologists and nurses across China. The questionnaire included assessments of occupational burnout, demographic and work-related information, work stress, interpersonal relationships, and health status. Results: A total of 1,465 participants were included across China. The response rate was 96.30%, with an overall burnout rate of 79.52%. Anesthesiologists had a burnout rate of 82.51%, and anesthetic nurses had a rate of 72.85%, a significant difference (P<.001). The prevalence of high emotional exhaustion and depersonalization was 45.80% overall, with anesthesiologists at 50.30% and nurses at 35.76%. Multivariable logistic regression analysis identified independent risk factors associated with burnout, including work environment, colleague relationships, and sleep quality for anesthesiologists, and experience, hospital level, and work intensity for anesthetic nurses. Conclusions: Occupational burnout is prevalent among anesthesiology professionals in China, with significant implications for individual well-being and patient care. The study's findings call for targeted interventions, such as improving work environments, enhancing education and training, and establishing support systems to mitigate burnout and promote work-life balance.
Future research should focus on developing and evaluating effective intervention measures to ensure the well-being of medical professionals and the quality of healthcare services.
Background: Pressure injuries are highly prevalent globally and can cause severe infection or death. Accurate staging is vital for effective intervention. Deep learning streamlines pressure injury assessment, enhances efficiency, and yields practical, accurate results. This scoping review summarized research on multi-modal deep learning for intelligent pressure injury recognition. Objective: This review systematized models, training methods, and outcomes to identify the best-performing systems for rapid detection and automated staging of pressure ulcers, with the goal of enhancing the timeliness, accuracy, and objectivity of diagnosis. Methods: We searched the following databases and sources: PubMed, the Cochrane Library, IEEE Xplore, and Web of Science. The scoping review was conducted in accordance with the JBI Scoping Review Methodology Group’s guidance and reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines. The study protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 12 December 2025 (registration number: CRD420251251573). Results: A total of 15 articles covering 26 models were included: AlexNet; VGG16; ResNet18; DenseNet121; SE-Swin Transformer; Cascade R-CNN; vision transformer (ViT); ConvNextV2; EfficientNetV2; MetaFormer; TinyViT; CCM; BCM; ResNext + wFPN; SE-Inception; Mask R-CNN; SE-ResNext101; Faster R-CNN; ResNet50; ResNet152; DenseNet201; EfficientNet-B4; YOLOv5; Inception-ResNet-v2; InceptionV3; MobileNetV2. The training methodology for intelligent pressure injury recognition models involves establishing an image database, processing images, and constructing the recognition model. Different models exhibit varying accuracy rates in staging pressure injuries, with overall accuracy ranging from 54.84% to 93.71%. The DenseNet121 model achieved the highest recognition accuracy of 93.71%, while VGG16 was the most widely applied.
The same model demonstrated significant variations in recognition accuracy across different studies. Conclusions: The multi-modal and deep learning-based intelligent recognition model for pressure injuries demonstrates high overall accuracy, enabling rapid automated staging of such injuries. Future research may explore optimized intelligent assistance systems to enhance the accuracy, objectivity, and efficiency of pressure injury diagnosis.
Background: Prolonged exposure to computer screens has been associated with visual fatigue and reduced visual comfort, which may in turn affect cognitive performance and concentration. While blue-enriched screen light and display settings are known to influence visual strain, their impact on short-term task performance under different backlight configurations remains insufficiently quantified from a human factors perspective. Objective: This study aimed to evaluate the effects of different computer screen backlight settings on user concentration, using typing speed as a quantitative proxy for task performance. Methods: A total of 22 adult participants performed standardized reading and typing tasks under different screen backlight conditions, including black text on a white background and white or orange text on a dark background. Screen illuminance and spectral characteristics were measured using a calibrated spectrometer. Typing speed was recorded after controlled reading periods, and statistical analyses were conducted to assess changes in performance across conditions. Results: Typing speed decreased significantly after 30 minutes of reading with traditional black text on a white background. In contrast, switching to a dark background with white text resulted in a significant increase in typing speed. Further improvement was observed when orange text was used on a dark background. Myopic diopter showed no significant correlation with changes in typing performance. Conclusions: Lower screen illuminance achieved through dark background display settings was associated with improved short-term task performance. These findings suggest that display configurations emphasizing reduced luminance may help maintain concentration during computer-based tasks and have implications for visual ergonomics and human-centered display design. Clinical Trial: Not applicable.
Background: Heart failure (HF) is a refractory disease and a continuously growing global public health issue. Metabolic syndrome plays a crucial role in the prevalence and mortality of HF. Triglyceride-glucose (TyG)-related obesity indices, such as body mass index (BMI), a body shape index (ABSI), and waist-to-height ratio (WHtR), have been recognized as significant predictors of cardiovascular disease risk. Nevertheless, the predictive value of these markers for HF prevalence and their association with all-cause mortality in general populations remain unclear. Objective: In this study, we aimed to evaluate their association with the prevalence of HF and with all-cause mortality among HF patients using machine learning techniques. Methods: The U.S. National Health and Nutrition Examination Survey (NHANES) (2001-2018) database provided all the data for this study. The status of the participants was followed through December 31, 2019. Participants were categorized into a non-HF group and an HF group. Weighted binary logistic regression was performed to evaluate the independent associations between the TyG-related obesity indices and HF. Meanwhile, subgroup analysis was performed to confirm the reliability of the associations observed among different populations. Restricted cubic spline (RCS) models were utilized to delineate whether the relationships are non-linear. Random forest analysis and the Boruta algorithm were adopted to assess the predictive value of each biomarker for the prevalence of HF. Receiver operating characteristic (ROC) curves were generated to assess predictive performance. Additionally, these biomarkers were dichotomized based on thresholds derived from maximally selected rank statistics (MSRS). Kaplan-Meier survival analysis and weighted Cox regression models were employed to explore the association between each TyG-related obesity index and all-cause mortality among HF patients.
Results: A total of 40,908 participants (including 1,174 HF patients) were included in this retrospective study. In the fully adjusted model, TyG-BMI, TyG-ABSI, and TyG-WHtR exhibited higher odds ratios (ORs) than TyG alone. TyG-ABSI exhibited the strongest association both as a continuous variable and across quartiles, demonstrating a significant near-linear positive dose-response relationship with HF risk. RCS analysis further confirmed a linear relationship between TyG-related obesity indices and HF risk. ROC curve analysis demonstrated that TyG-ABSI had the best predictive performance for HF risk (AUC: 0.721, 95% CI: 0.690–0.736). Random forest analyses and the Boruta algorithm identified these biomarkers as important clinical features. Subgroup analysis revealed no significant interactions across all subgroups, except for age. During a median follow-up of 9 years, a total of 566 deaths were documented. When stratified by the MSRS-derived optimal cutoff value, Kaplan-Meier survival analysis and the Cox regression model demonstrated significantly worse overall survival for the higher TyG-ABSI group (HR: 1.44, 95% CI: 1.11-1.86, P=.006); each standard deviation increment in TyG-ABSI was associated with an 11% increase in all-cause mortality risk among HF patients. Conclusions: Our study suggests that TyG-BMI, TyG-ABSI, and TyG-WHtR are associated with increased odds of HF in the U.S. TyG-ABSI demonstrated the best predictive performance and is expected to become a more effective metric for improving risk stratification. TyG-ABSI is independently associated with increased all-cause mortality risk in HF patients, highlighting its potential as a useful tool for aiding personalized management.
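The ROC AUC reported above has a direct probabilistic reading via the Mann-Whitney relation: it equals the probability that a randomly chosen HF case receives a higher index value than a randomly chosen non-case, with ties counted half. A minimal illustrative sketch (not the study's code; the score lists are placeholders):

```python
def auc_rank(scores_pos, scores_neg):
    """Empirical ROC AUC via the Mann-Whitney relation:
    AUC = P(pos score > neg score) + 0.5 * P(tie)."""
    wins = ties = 0
    # Compare every positive-class score against every negative-class score
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

For instance, auc_rank([3, 4], [1, 2]) returns 1.0 (perfect separation), while identical score distributions give 0.5 (chance-level discrimination).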
Background: Parkinson’s disease (PD) is a progressive neurodegenerative disorder that poses complex challenges for persons with Parkinson’s (PwP), informal caregivers, and healthcare professionals. With growing interest in digital and predictive Artificial Intelligence (AI) tools for disease management, understanding the needs and digital readiness of these stakeholder groups is crucial. Objective: This work aims to (1) identify digital practices for PD management among PwP, at‑risk individuals, caregivers, and healthcare professionals; (2) compare these practices across groups; (3) explore stakeholder desires for AI-based tools; and (4) assess alignments and gaps to inform tailored AI solutions. Methods: An exploratory, anonymous cross-sectional online survey was distributed (from December 2024 to October 2025) in five languages. It was completed by 255 respondents. Descriptive statistics summarized responses to 41 questions, including stakeholder-specific items. Chi-square tests were performed to examine stakeholder differences in desired AI features. Results: Interest in predictive AI was high across stakeholder groups. Symptom-tracking was the most desired feature (selected by >76% of respondents); however, stakeholder priorities diverged in other areas. Healthcare professionals rated improving patient and informal caregiver engagement as significantly more important than PwP did, χ²(1, N=205)=34.78, p<.001, Cramer’s V=0.41. Despite considerable interest, the reported use of digital tools was limited, as most PwP did not use symptom-tracking apps or wearables, nor were they currently monitoring their condition, although many expressed intentions to begin. Conclusions: While AI tools were viewed positively across groups, there were significant gaps in current usage. Stakeholder-specific preferences, including informal caregiver engagement and preventive lifestyle guidance, highlight the importance of tailored design.
These findings offer early-stage insight to guide development of future AI-based solutions for PD.
Background: Stroke remains a leading cause of motor disability globally. Functional electrical stimulation (FES) has emerged as a promising neurorehabilitation modality, but its comparative efficacy, optimal application parameters, and long-term sustainability remain incompletely characterized. Objective: To synthesize evidence from randomized controlled trials and systematic reviews published between 2021 and 2025 regarding the effectiveness of FES interventions for upper and lower limb motor recovery in post-stroke populations. Methods: A comprehensive literature search was conducted across PubMed, Scopus, Web of Science, and Cochrane Library databases. Studies were selected based on PRISMA 2020 criteria. Quality appraisal was performed using the Physiotherapy Evidence Database (PEDro) scale and Cochrane Risk of Bias 2 tool. Quantitative synthesis was conducted using random-effects meta-analyses. Results: Twenty-seven studies (n=2,309 stroke participants) were included, encompassing diverse FES modalities: manually controlled, electromyography-triggered, brain-computer interface-controlled, and hybrid systems. Meta-analytic findings demonstrated that FES combined with occupational therapy produced significantly greater improvements in upper limb motor function (Fugl-Meyer Assessment: mean difference [MD] = 5.08, 95% confidence interval [CI] 2.46-7.71) compared to standard care alone. Brain-computer interface-controlled FES achieved superior outcomes (standardized mean difference [SMD] = 0.73, 95% CI 0.26-1.20) particularly when paired with action observation tasks. For lower limb recovery, FES reduced foot drop severity and enhanced gait parameters, with 52% of participants achieving independent walking. Cost-effectiveness analysis demonstrated long-term value (£15,406 per quality-adjusted life year). Adverse events were minimal, primarily limited to temporary skin irritation.
Conclusions: FES represents a viable, evidence-supported adjunctive intervention for post-stroke motor recovery across subacute and chronic phases. Emerging technologies integrating brain-computer interfaces and artificial intelligence offer enhanced personalization and efficacy. Future research should prioritize real-world implementation trials, long-term follow-up protocols, and mechanisms underlying neuroplastic adaptations.
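Random-effects pooling of mean differences, as in the meta-analysis above, is commonly done with the DerSimonian-Laird estimator. The following is a minimal pure-Python sketch of that estimator (illustrative only; the review's actual software is not stated, and the inputs are placeholder per-study effects and variances):

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling.
    effects: per-study effect estimates (e.g., mean differences).
    variances: their within-study sampling variances.
    Returns (pooled_effect, tau_squared)."""
    # Fixed-effect weights and estimate (inverse-variance weighting)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    # Method-of-moments between-study variance, truncated at zero
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with between-study variance added
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

With perfectly homogeneous studies the between-study variance estimate is truncated to zero and the pooled estimate reduces to the fixed-effect (inverse-variance) average.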
Background: Self-assessment is a key requirement for lifelong learning in medicine. Evidence from gender-related research indicates that important moderators affecting self-assessment are influenced by gender. Therefore, systematic gender differences in the accuracy of self-assessment may be assumed. Objective: The present study aims to examine gender differences in medical students’ self-assessment. Specifically, this study addresses two research questions: (1) Are there systematic gender differences in medical students' self-assessment accuracy? (2) What is the magnitude of these gender differences when accounting for academic progress and knowledge? Methods: Medical students from 3 cohorts at the Medical School OWL were surveyed in 3 waves between April 2023 and April 2024 during the Progress Test Medicine (PTM). Prior to answering the test, students were asked to indicate the percentage of the PTM questions they expected to answer correctly in five knowledge areas. Self-assessment accuracy was calculated as the difference between the subjective self-assessment and the objective test score. Linear mixed models (LMMs) were used to analyze the influence of gender on students’ self-assessment accuracy while accounting for academic progress and knowledge. Results: A total of 165 students participated in this study (66.58% women, 33.42% men; age: M=21.96 years, SD=3.61). Across all models, female students rated themselves significantly less accurately than their male peers. The observed gender effect ranged from -3.74 to -6.08 percentage points. Conclusions: The results indicated systematic gender differences in medical students’ self-assessment, in favor of male students, with a magnitude comparable to the average knowledge acquired in an entire semester of study. 
In view of the potentially negative consequences of inaccurate self-assessment, targeted support for developing realistic self-assessment during medical studies may be particularly beneficial for female students.
Background: Mobile health (mHealth) and online video are increasingly central to cardiology education and point-of-care decision support. However, little is known about how simple design choices—such as mobile-first web layouts and captioned video—function as equity enablers across income settings when examined with multi-country learning analytics. Objective: This exploratory ecological study used real-world, cross-platform learning analytics from a French-language cardiology mHealth education initiative to quantify how mobile web access and captioned YouTube viewing varied across World Bank income groups and assess whether greater reliance on these access enablers was associated with poorer engagement. Methods: We analyzed country-level analytics from the École Numérique de Cardiologie (ENC) mobile-optimized website and companion YouTube channel over a 2-year period. Countries were grouped as high-, middle-, or low-income. Primary access indicators were the share of website sessions from mobile devices and the share of YouTube watch time with subtitles enabled (any language). Engagement outcomes included website bounce rate and time on page and YouTube average view duration, audience retention, and intentional views. We summarized medians by income group and explored associations using nonparametric tests, Spearman correlations, and median quantile regression. Results: Thirty-four countries contributed data (13 high-income, 14 middle-income, 7 low-income). Caption-enabled watch time showed a marked income gradient, increasing from 18.8% in high-income to 38.7% in middle-income and 60.9% in low-income groups, a caption equity gap of 42.1 percentage points between low- and high-income settings. Median mobile share of website sessions also rose with decreasing income (36.5%, 63.3%, and 81.4%, respectively). Income groups with higher caption use also had a higher share of intentional views and younger audiences. 
Greater reliance on mobile access was not independently associated with higher bounce rate or shorter time on page in quantile regression models. Conclusions: In this multi-country mHealth learning analytics case study, mobile-first web access and captioned video were used most intensively in lower-income settings and were not associated with penalties in basic engagement metrics. These findings support treating mobile-optimized design and systematic captioning, including non-French subtitles, as core, low-cost components of equitable digital cardiology and mHealth education, and suggest that simple analytics indicators can serve as equity-focused monitoring tools for global mHealth initiatives.
Background: Over the past decade, Europe has expanded school-based mental health prevention programs, yet the prevalence of mental disorders among children and adolescents remains high and has risen further since the COVID-19 pandemic. Digital interventions have proliferated, yet implementation gaps persist, limiting their impact. Objective: To synthesize quantitative, qualitative, and mixed-methods evidence on the facilitators and barriers to implementing digital and analog universal school-based mental health promotion programs for children and adolescents (ages 5–19) in European primary and secondary schools, and to examine how implementation quality is assessed and the role of the digital environment. Methods: A three-step search will be conducted across the interfaces PubMed, EBSCO, Clarivate Analytics, PubPsych, Fachportal Pädagogik, Google Scholar, relevant preprint servers, and the reference lists of all included sources of evidence. A first systematic search was completed in January 2026. Titles/abstracts and full texts will be screened independently by two reviewers, with disagreements resolved through discussion or a third reviewer. Methodological quality will be appraised by assessing the trustworthiness, relevance, and results of published papers. Data will be extracted using standardized JBI forms and analyzed separately into quantitative (descriptive statistics, possible meta-analysis) and qualitative (meta-aggregation) components, followed by a convergent, segregated synthesis to integrate findings. No deviations from the JBI mixed-methods systematic review methodology are anticipated. Results: A comprehensive PubMed search was conducted on January 6, 2026, and 614 records were retrieved after applying filters. Results are expected to be published by December 2026. 
Conclusions: By integrating quantitative and qualitative findings, this review will identify the key facilitators and barriers influencing the real‑world uptake of digital and analog school‑based mental‑health programs across Europe. Mapping these determinants onto implementation frameworks such as CFIR and RE‑AIM and linking them to program outcomes will yield actionable recommendations that can close the implementation gap, bolster sustainability, and improve mental‑health outcomes for children and adolescents in the post‑COVID era.
Background: Human papillomavirus (HPV) remains the principal cause of cervical cancer, yet population-level awareness and knowledge in many Nigerian settings remain limited. Understanding the patterns and predictors of HPV awareness and knowledge is essential for strengthening Nigeria’s HPV vaccination rollout and reducing preventable cervical cancer morbidity. Objective: To describe respondents’ demographic characteristics; assess levels of awareness and knowledge of HPV, cervical cancer, and the HPV vaccine; examine associations between sociodemographic variables and awareness/knowledge; and identify independent predictors of HPV awareness and knowledge. Methods: A community-based cross-sectional survey was conducted among 238 caregivers of girls aged 9-14 years in Port Harcourt Local Government Area. Data on demographics, HPV awareness, knowledge indicators, and information sources were collected using a structured questionnaire. Descriptive statistics, chi-square tests, and multivariable logistic regression were used to assess associations and predictors. Statistical significance was set at p < 0.05. Results: Respondents showed wide demographic diversity across age, religion, education, occupation, and income. Overall awareness of HPV was low (45.4%), and knowledge was predominantly poor (78.6%). Misconceptions were common, with many attributing HPV to poor hygiene or skin infections. Only 39.8% correctly identified sexual contact as the mode of transmission, and knowledge of vaccine dosage was inconsistent. Informal channels, religious institutions, social media, and family networks were the primary sources of information, whereas health workers accounted for only 8.3%. Most sociodemographic factors showed no significant association with awareness or knowledge, indicating widespread deficits across groups. Occupation was the only variable significantly associated with awareness (p = 0.011). 
Logistic regression showed higher odds of awareness among respondents aged 26-36 years (OR 2.26, p = 0.039) and lower odds among those practicing Traditional religion (OR 0.41, p = 0.033). Civil/public servants showed reduced odds of awareness (OR 0.44, p = 0.048). Conclusions: HPV awareness and knowledge are markedly low and broadly distributed across demographic groups. Widespread misconceptions reflect structural failures in health communication. We recommend strengthening community-based and health worker-led HPV education, embedding messaging within religious and social structures, and implementing targeted, culturally adapted communication strategies to improve vaccine uptake. Significance Statement: Addressing pervasive knowledge gaps is vital for achieving effective HPV vaccination coverage and reducing cervical cancer burden in Nigeria.
Introduction
Acute leukemia poses a significant health burden globally, necessitating a deeper understanding of its etiological factors. This study investigates the potential link between blood groups, Rh factor, and the incidence of acute leukemia to enhance knowledge and guide personalized treatment strategies.
Methods
A cross-sectional analytical study was conducted at Imam Khomeini Hospital in Urmia from 2012 to 2018, including patients with acute leukemia. Data on blood groups, Rh factor, and demographic variables were collected and analyzed using SPSS software. Statistical tests were employed to determine associations between blood groups and leukemia risk.
Results
The study found no significant relationship between ABO blood groups and acute leukemia, consistent with previous research. However, differences in Rh factor distribution were observed between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) patients, warranting further investigation.
Discussion
The complexity of leukemia etiology is highlighted by the multifactorial nature of the disease, where genetic, environmental, and possibly epigenetic factors interact. Future research should focus on larger sample sizes and diverse populations to elucidate the intricate mechanisms underlying leukemia susceptibility.
Conclusion
While ABO blood groups may not significantly impact acute leukemia risk, variations in Rh factor distribution among leukemia subtypes suggest a need for continued exploration. Comprehensive studies considering diverse factors are essential to unravel the complexities of leukemia development.
Introduction
Immune thrombocytopenic purpura (ITP) is an acquired thrombocytopenia syndrome characterized by platelet destruction due to antiplatelet antibodies. Corticosteroids are the first-line treatment for adult patients with ITP. This study compares the effects of high-dose dexamethasone versus prednisolone in ITP treatment.
Materials and Methods
This open-label clinical trial involved patients over 18 years diagnosed with ITP (based on ASH criteria) who had not received prior treatment. Participants were randomly assigned (1:1) to receive high-dose dexamethasone (HD-DXM) or prednisolone (PDN). The dexamethasone group received 40 mg intravenously for 4 consecutive days, while the PDN group received 1 mg/kg oral prednisolone for 4 weeks. Daily complete blood counts were obtained to assess treatment response, defined as a platelet count above 30,000/μL.
Results
A total of 36 patients were evaluated, with 18 in each treatment group. Patients receiving dexamethasone showed significantly reduced hospitalization duration and faster time to reach platelet counts above 30,000/μL (P=0.01 and P=0.002, respectively).
Conclusion
High-dose dexamethasone significantly decreases the time to initial response and hospitalization duration in ITP patients compared to prednisolone.
Classical evolutionary theory, notably Riedl’s concept of canalization, suggests that human lifespan is constrained by deeply entrenched developmental architectures, implying that aging is an immutable biological reality. However, rapid advancements in artificial intelligence (AI) from 2023 to 2025 have begun to challenge this pessimism. This viewpoint synthesizes recent developments to argue that AI is reframing aging from a biological mystery into a tractable engineering challenge. We examine two primary frontiers: the use of autonomous AI agents and generative models to discover geroprotective interventions, including the identification of compounds like ouabain via large-scale omics re-analysis; and the maturation of multi-modal “aging clocks” that utilize deep learning to enable precision diagnostics and personalized healthspan optimization. While acknowledging significant limitations regarding safety, translation from animal models, and the risks of commercial hype, we conclude that the integration of AI with mechanistic geroscience offers a plausible pathway toward a proactive, engineering-based approach to human longevity.
Biobanks are recognised as valuable health research resources due to their extensive and in-depth data availability, which allows researchers to draw correlations between various genetic, lifestyle, and health information and future disease incidence. As prospective data sources that collect genetic and lifestyle information for several hundred thousand participants across various age categories, biobanks are important datasets for designing novel healthcare approaches. Within the realm of cardiometabolic ageing, which refers to the age-related decline in the function of cardiovascular and metabolic systems, the conceptualisation of a systems medicine-based approach known as P4 (Predictive, Preventive, Personalised, Participatory) medicine has provided an interesting framework to tackle these metabolic illnesses in tandem with digital longevity tools that serve as vessels to deliver interventions across large populations. Therefore, this review aims to critically discuss how digital longevity informed by biobank data is vital in improving risk prediction, with a focus on cardiometabolic ageing.
Background: Globally, digital health interventions (DHIs) enhance HIV care through technology, especially among women living with HIV (WLHIV), who face unique challenges that affect their treatment. This study assessed the feasibility of integrating DHIs into HIV care in Kisumu by examining their acceptability among WLHIV and identifying factors that influence their intention to use these tools. Objective: (1) To determine the feasibility of integrating digital health interventions into care for women living with HIV in Kisumu. (2) To identify factors that influence the adoption of digital health interventions. Methods: A cross-sectional survey based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) was administered to evaluate the acceptability of SMS, teleconsultations, online support groups, and health applications. Summary statistics quantified acceptability, multivariate regression models examined associations between UTAUT2 constructs and behavioral intention, and analysis of variance identified sociodemographic predictors. Results: A total of 385 WLHIV (mean age 35.8 years) participated. Behavioral intention to use all four DHIs was high, with more than 80% rating their willingness at ≥4 on a five-point scale. Performance expectancy, hedonic motivation, habit, and price value were significant predictors of intention (p < 0.05). Higher education level was strongly associated with increased intention (p < 0.001), while older age was associated with reduced intention. Conclusions: WLHIV in Kisumu demonstrated a strong willingness to adopt digital health tools in their routine care. The intention to use DHIs was primarily influenced by perceived usefulness, affordability, enjoyment, and familiarity with similar technologies. These results support the integration of digital health solutions into HIV care for women in this setting.
Background: The COVID-19 pandemic presented an unparalleled opportunity for telemedicine implementation, shortening adoption timelines and creating significant opportunities for observational research. Prior evidence is predominantly derived from small feasibility studies with limited comparative efficacy data and inadequate attention to implementation challenges and equity considerations. Objective: To synthesize methodologies, findings, and innovations from observational telemedicine studies conducted during the pandemic and identify critical research gaps. Methods: Narrative synthesis of 25 peer-reviewed observational studies (2020–2021) examining telemedicine across 11 clinical specialties, encompassing 119,016 patient contacts across multiple international settings. Studies employed prospective cohort designs, retrospective analyses, cross-sectional surveys, and mixed-methods approaches. Results: Telemedicine demonstrated clinical efficacy for chronic disease management with objective monitoring data, particularly in pediatric diabetes and cardiac device follow-up. However, substantial technology-acceptance discrepancies emerged—user satisfaction exceeded actual data capture reliability. Cross-sectional analyses unveiled systemic racial bias in satisfaction ratings and socioeconomic disparities in access. Innovations, including real-time locating systems, large-scale observational platforms, ambispective designs, and mixed-methods integration, have advanced methodological rigor. Persistent obstacles encompass selection bias, unmeasured confounding, outcome heterogeneity precluding meta-analysis, and temporal confounding. Conclusions: Observational pandemic-era telemedicine research substantiates selective clinical applications while exposing technology reliability limitations, persistent inequities, and methodological constraints on causal inference. 
Critical gaps include the absence of long-term outcome evaluation, economic analyses, diagnostic accuracy assessment, and equity-focused intervention research. Future advancement requires quasi-experimental designs, standardized outcome measures, explicit equity integration, and implementation science evidence for sustainable post-pandemic integration.
Background: Safe and reliable access to clean water remains a fundamental determinant of public health and sustainable development. In many rapidly urbanizing Nigerian communities, dependence on self-sourced groundwater and inadequate waste management systems continues to compromise water quality and expose residents to preventable diseases. This study investigated the status of water supply, quality, and associated health outcomes in Uselu Community, Benin City, to provide evidence-based insights for policy and intervention. Objective: The study aimed to (1) assess the primary sources of water available to residents, (2) evaluate household water-storage and treatment practices, and (3) examine the public-health implications of inadequate water access and sanitation behaviour in the community. Methods: A descriptive cross-sectional survey was conducted among 100 adult residents of Uselu Community selected through random sampling. Data were collected using structured questionnaires covering socio-demographics, water sources, treatment habits, sanitation practices, and self-reported waterborne diseases. Field observations complemented survey data, and results were presented as frequencies and percentages. Descriptive and inferential statistics were used to analyze trends, and findings were compared against national and international WASH benchmarks. Results: Findings revealed that 56% of respondents relied on boreholes as their main water source, while only 31% had access to public pipe-borne supply. Although 89% regularly washed their storage containers, fewer than half (43%) treated water by boiling or filtration, and only 17% practiced chlorination. About 32% reported disposing of waste near water sources, increasing contamination risks. The most common illnesses were typhoid fever (47%) and cholera (30%), with over half (55%) of respondents experiencing recurrent water shortages. 
These results indicate persistent infrastructural inadequacies, limited treatment adoption, and significant exposure to waterborne diseases. Conclusions: The study highlights critical water-supply and quality challenges in Uselu Community, driven by poor infrastructure, weak waste management, and inconsistent household treatment practices. Ensuring safe water access requires coordinated interventions combining infrastructural expansion, community hygiene education, and sustainable groundwater management. We recommend strengthening municipal water systems, establishing periodic water-quality monitoring, enforcing sanitation regulations, and promoting affordable household treatment technologies through continuous public-health education and community engagement. This study demonstrates that unsafe water and poor sanitation behaviours are central drivers of disease in Uselu Community. By translating evidence into actionable interventions, the research provides a model for improving public health, environmental sustainability, and water security in similar peri-urban settings.
For decades, global guidance for sedentary behaviour and sleep has primarily been informed by studies that relied on self-report questionnaires to assess behaviours. However, it is widely recognised that self-reported data suffer from numerous limitations, including recall and social desirability biases, as well as poor validity and precision. The Prospective Physical Activity, Sitting and Sleep consortium (ProPASS) is a large international collaboration of cohort studies with research-grade wearables data designed to address these challenges. The ProPASS consortium looks to advance our understanding of the associations of free-living physical activity, posture (sitting, standing), and sleep with major health and non-communicable disease outcomes. In this editorial, we provide an overview of the first ProPASS scientific outputs including its growth in recent years; key advancements towards unified wearables methodologies; the ProPASS data resources, and how these will be made available to the global research community. To assist future analogous initiatives, we also share the key challenges ProPASS has encountered and discuss mitigation strategies.
Universities are critical engines of knowledge creation and societal transformation; however, many African institutions, particularly in Nigeria, struggle to cultivate mature and sustainable research cultures. This paper develops a conceptual framework for strengthening university research management systems, highlighting leadership and governance as catalysts for academic excellence, innovation, and societal relevance. Using a descriptive-analytical and comparative synthesis of international policy frameworks (UNESCO, OECD) and African higher-education reports (AAU, ARUA, NUC, and TETFund), the study integrates global best practices with contextual realities in low-resource environments. The proposed Research Leadership and Impact Framework (RLIF) outlines four interrelated components: leadership and vision, governance and systems, capacity and infrastructure, and research culture and societal impact, which collectively enable institutional transformation. Comparative indicators, such as Nigeria’s Gross Expenditure on Research and Development (GERD) of 0.22% versus South Africa’s 0.83%, illustrate the strategic significance of leadership and governance reform in closing performance gaps. The framework contributes a theoretically grounded and context-sensitive model for embedding evidence-based management, accountability, and inclusivity within African universities. Ultimately, the paper argues that building resilient research systems requires not only financial investment but visionary leadership capable of aligning academic missions with societal priorities and the Sustainable Development Goals (SDGs).
Objective: To map the available evidence on psychosocial interventions (PIs) targeting the Brazilian Black population's mental health.
Introduction: Black population (BP) is proportionally more institutionalized in psychiatric hospitals, and is historically more associated with “madness”, dangerousness, and racial inferiority. PIs targeting the Black population's mental health can potentially enhance professional practices by addressing this group's specific needs.
Inclusion criteria: Participants: Brazilian BP; concept: PIs targeting the Black population's mental health; context: the whole Brazilian territory. Studies addressing PIs targeting the Brazilian BP, including the “Quilombola” community's mental health, will be considered for inclusion. Studies addressing Black immigrants and refugees in Brazilian territory will be excluded.
Methods: This scoping review (SR) will follow the JBI methodology guidelines and adheres to the PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Search Strategy: A focused search will be conducted in MEDLINE (PubMed), PsycInfo (APA), CINAHL (EBSCOhost), Embase, Scopus (Elsevier), and the Virtual Health Library (BVS). There will be no restriction regarding the language or date of publication of the studies. Study Selection: Citations will be managed in Zotero, and Rayyan will be used to organize the screening. Two independent reviewers will screen titles and abstracts for eligibility. Disagreements will be resolved through discussion or consultation with a third reviewer. Data Extraction: Two independent reviewers will extract data using a custom tool. Data Analysis and Presentation: Results will be summarized narratively and presented in tables and charts. Results: This protocol does not present data. Conclusions: The review is expected to map critical evidence gaps in psychosocial interventions targeting the Brazilian Black population's mental health.
India’s health system faces chronic resource gaps and inefficiencies. With public health spending at only 1.84% of GDP and very low hospital bed densities (around 0.6 beds per 1000 population), simply adding beds is unaffordable and slow. A more efficient alternative is to improve utilisation: a real-time digital platform that tracks staffed bed availability can raise effective capacity and reduce inequity.
Early experiments – from Delhi’s COVID-19 bed portal to the bed-management system in AIG Hospitals, Hyderabad – show substantially higher occupancy and throughput. International evidence also supports these results, confirming that real-time tracking systems can deliver major efficiency gains.
This brief proposes piloting a national bed-tracking dashboard and shows it can yield large gains for much lower cost and risk than new construction, with safeguards to address data accuracy, incentives and privacy. These promising results are tempered by limited evidence from a small number of pilots and by systemic constraints such as staff shortages, uneven digital readiness, and governance challenges that will require independent evaluation and safeguards during scale-up.
Deep learning-based medical image registration methods increasingly incorporate both architectural enhancements (affine transformations) and training objective improvements (regularization losses), yet their individual and combined contributions remain poorly understood. To quantify the individual and synergistic effects of affine components versus regularization losses on deformable medical image registration performance through systematic ablation analysis, we conducted a controlled ablation study using the OASIS brain MRI dataset comparing four model variants: baseline 3D U-Net with basic similarity losses, regularization-enhanced U-Net, affine-enhanced U-Net with basic losses, and fully enhanced model combining both components. Primary outcomes included registration accuracy metrics (mean squared error [MSE], normalized cross-correlation [NCC], structural similarity index [SSIM]), enhanced deformation quality analysis including Jacobian determinant preservation and anatomical plausibility scoring, and computational efficiency measures. Regularization enhancement alone achieved substantial performance improvements: 21.3% relative improvement in MSE (1.78% → 2.16%, P<.05) and 21.8% improvement in NCC (0.0555 → 0.0676), while dramatically reducing maximum deformation from 53.1 to 0.51 units (99.0% reduction) with negligible computational overhead (-0.06% inference time). Combined approaches achieved optimal performance with 25.8% relative MSE improvement (1.78% → 2.24%) and enhanced anatomical plausibility scores (0.596 → 0.930), at moderate computational cost (+9.8% inference time). Enhanced gradient correlation analysis revealed substantial improvements in structural preservation (0.742 → 0.980 for fully enhanced model). All enhanced variants achieved sub-voxel registration accuracy with anatomically plausible deformation constraints. 
Regularization losses provide the primary driver of performance improvements in medical image registration, offering both accuracy gains and dramatic deformation control enhancement with maintained computational efficiency. Architectural enhancements provide complementary benefits at acceptable computational cost. The dramatic improvement in deformation control (99% reduction in unrealistic deformations) addresses critical clinical deployment concerns while achieving superior registration accuracy.
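Deformation quality of the kind reported above is commonly assessed via the Jacobian determinant of the predicted transform (non-positive determinants indicate folding, i.e., anatomically implausible deformations). The following NumPy code is an illustrative sketch only, not the study's implementation; the array layout and function names are assumptions:

```python
import numpy as np

def jacobian_determinant(disp):
    """Per-voxel Jacobian determinant of a 3D displacement field.

    disp: array of shape (3, D, H, W) giving the displacement along each
    spatial axis. The transform is phi(x) = x + disp(x), so the Jacobian
    is I + grad(disp), approximated here with central differences.
    """
    grads = np.stack([np.stack(np.gradient(disp[i], axis=(0, 1, 2)))
                      for i in range(3)])            # (3, 3, D, H, W)
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)   # I + grad(disp)
    # Move the 3x3 axes last so np.linalg.det broadcasts over voxels.
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

def folding_fraction(disp):
    """Fraction of voxels with non-positive determinant (folding),
    the quantity a regularization loss of this kind penalizes."""
    return float((jacobian_determinant(disp) <= 0).mean())

# Identity transform: zero displacement, determinant 1 everywhere.
zero = np.zeros((3, 8, 8, 8))
assert np.allclose(jacobian_determinant(zero), 1.0)
assert folding_fraction(zero) == 0.0
```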
Background: Urinary conditions impose a widespread burden on patients, caregivers, and healthcare systems. Emerging technologies, including wearable and remote devices, offer opportunities to improve diagnosis, monitoring, and care delivery. Yet, the perspectives of healthcare professionals, who are central to technology adoption, remain underexplored. Objective: This study aimed to explore healthcare professionals’ perceptions of urinary issues and examine their views on the opportunities and barriers associated with adopting health technologies for urinary care. Methods: An online survey of 256 healthcare professionals collected qualitative responses about urinary care and the role of technology. Data were analyzed using grounded theory methods, including open, axial, and selective coding, to develop an explanatory model grounded in providers’ narratives. Results: Analysis revealed four interconnected categories: Technology and Innovation in Patient Care, Patient-Centered and Integrated Care, Accessibility and Ethical Considerations, and Proactive and Preventative Urological Health Management. These categories were unified within the emergent Grounded Theory of Technology Negotiation in Urinary Care, which describes how professionals integrate new technologies through a negotiated process that balances enthusiasm for innovation with patient-centered values, systemic barriers, and preventative goals. Adoption occurs when innovations align with professional values, overcome structural constraints, and enhance holistic, sustainable care. Conclusions: Healthcare professionals approach the integration of urinary health technologies as an active negotiation rather than passive acceptance. This grounded theory underscores that successful adoption requires user-centered design, comprehensive training, supportive reimbursement structures, and preservation of meaningful patient engagement. 
Recognizing adoption as a negotiated process provides a framework for guiding sustainable technology integration in urinary care.
Background: Patients with rare diseases often face fragmented healthcare, limited access to specialists, and challenges in securely sharing their medical records across providers. Emerging technologies such as blockchain offer a decentralized and tamper-resistant framework for personal health records (PHRs), but their feasibility in low-resource settings remains largely unexplored. Objective: This study aimed to evaluate the feasibility, usability, and patient perceptions of a blockchain-enabled PHR system tailored for rare disease patients in low-resource healthcare environments. Methods: We conducted a mixed-methods pilot study involving 32 patients with rare genetic and metabolic disorders in Faisalabad, Pakistan. Participants were enrolled in a blockchain-based PHR platform that allowed secure storage and controlled sharing of medical data. Quantitative data on system usage, error rates, and access patterns were collected over a 12-week period. Semi-structured interviews and focus groups were used to explore patient and caregiver experiences, perceived benefits, and challenges. Thematic analysis was applied to qualitative data, while descriptive statistics summarized quantitative measures. Results: Patients and caregivers reported high levels of trust in the blockchain system (78% expressed greater confidence compared to hospital records). Key perceived benefits included improved data ownership, reduced dependency on fragmented paper records, and greater willingness to share information with providers. However, barriers included limited digital literacy, occasional connectivity issues, and the need for ongoing technical support. Quantitatively, 85% of enrolled participants successfully accessed and updated their records at least once, while 62% shared data with external providers. Thematic analysis revealed three major themes:
(1) empowerment through ownership
(2) digital divides as barriers to adoption
(3) the importance of community support in technology uptake. Conclusions: Blockchain-enabled PHRs show promise for enhancing healthcare access, trust, and patient empowerment among rare disease populations in resource-constrained settings. Despite challenges related to usability and infrastructure, the pilot demonstrates potential for scaling such systems with targeted training and support. Further large-scale studies are needed to assess long-term sustainability and integration with existing health systems. Clinical Trial: not applicable
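As a toy illustration of the tamper-resistance property that motivates blockchain-based PHRs — a minimal sketch, not the platform piloted in this study — each record below hashes its payload together with the previous record's hash, so any retroactive edit invalidates the chain:

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers both the payload and the
    previous record's hash, making earlier tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"patient": "P-001", "note": "baseline visit"})
add_record(chain, {"patient": "P-001", "note": "12-week follow-up"})
assert verify(chain)
chain[0]["payload"]["note"] = "edited"   # retroactive tampering
assert not verify(chain)
```

A production PHR platform distributes this ledger across nodes and adds access control; the hash-chaining shown here is only the core integrity mechanism.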
Background: Long-standing intrapsychic conflicts often arise from apparently irreconcilable tensions, such as desire versus affection or autonomy versus dependence. Traditional approaches in psychotherapy describe defense mechanisms or splitting to cope with such conflicts. However, less attention has been given to creative integrative processes that may reconcile opposing tendencies. Objective: This paper introduces the concept of AI-facilitated symbolic juxtaposition, where generative models are used to create “digital chimeras”—hybrid symbolic constructions integrating objects of desire with affective attributes. We aim to provide a theoretical foundation, operational hypotheses, and clinical protocols for testing this novel framework. Methods: Drawing from psychoanalytic theory (Winnicott’s transitional objects), predictive processing, and neuroscience of the default mode and mentalizing networks, we propose a neuro-symbolic model for symbolic integration. We outline four testable hypotheses: (1) neural integration (DMN coherence), (2) symbolic flexibility, (3) enhancement of attachment security, and (4) accelerated therapeutic outcomes. Empirical validation methods include fMRI, EEG coherence, eye-tracking, attachment interviews, and cognitive flexibility tasks. We also present a clinical implementation protocol with AI-assisted symbolic generation, immersive VR/AR environments, and ethical safeguards. Results: As a conceptual and methodological paper, results are presented as expected outcomes. We anticipate that AI-facilitated chimera formation will (a) improve DMN connectivity, (b) enhance cognitive flexibility, (c) increase attachment security, and (d) reduce the number of sessions required for clinically significant change. Clinical protocols emphasize therapist training, patient safety, cultural adaptation, and preservation of therapeutic alliance. 
Conclusions: AI-facilitated symbolic juxtaposition represents a novel approach to psychotherapy, offering a scientifically grounded and clinically feasible method for resolving long-term intrapsychic conflicts. By combining neuro-symbolic AI, neuroscience, and psychotherapy theory, this framework contributes to the field of digital mental health and sets the stage for future empirical validation across cultural contexts.
This study examines the phenomenon of "sandbagging" in AI medical devices, where systems strategically underperform during evaluation to conceal dangerous capabilities that emerge post-deployment. Through systematic analysis of emerging literature on AI sandbagging behaviour, technical detection approaches, and regulatory structures in the EU, UK, and US, this research reveals critical gaps in current regulatory frameworks designed for traditional medical devices. Analysis shows sandbagging manifests through both developer-driven mechanisms (where engineers intentionally display safer capabilities for expedited deployment) and system-driven mechanisms (where AI systems autonomously underperform during evaluation phases). Research shows that both large frontier and smaller models exhibit sandbagging behaviours after prompting or fine-tuning while maintaining general performance benchmarks, with larger models demonstrating superior calibration capabilities. Current static regulatory approaches in the EU Medical Device Regulation and UK frameworks fail to detect sandbagging as they rely on documentation-based submissions without addressing AI's dynamic, generative nature. The US FDA's Total Product Lifecycle approach shows promise through algorithm change protocols and real-world performance monitoring, yet regulatory sandboxes remain underutilized. Healthcare provider liability becomes dangerously ambiguous when clinicians rely on systems with concealed capabilities, particularly given automation bias effects and black-box reasoning limitations. Traditional risk classifications focusing on direct bodily harm inadequately address AI's potential for deceptive behaviour, including "password-locked" models that reveal hidden capabilities when triggered. Technical detection solutions including attribution graph analysis and noise-based detection show promise but remain insufficient. 
Dynamic evaluation frameworks are essential, recommending mandatory regulatory sandboxes for real-world testing, continuous monitoring protocols, adversarial testing, and enhanced post-market surveillance.
Background: Mental health has become one of the most urgent global health issues of the twenty-first century. The World Health Organization (WHO) reports that over 970 million individuals globally were affected by a mental disorder in 2022, with depression and anxiety being the most common disorders. The strain of mental illness is heightened by restricted availability of qualified healthcare providers, stigma associated with mental health, and the growing need for accessible, affordable, and scalable solutions. These obstacles emphasize the immediate necessity for creative, tech-based approaches that can foster mental health among various communities. In recent times, artificial intelligence (AI) has demonstrated considerable promise in this area, especially with the creation of emotion detection systems and digital health solutions.
In spite of these improvements, a significant drawback remains: numerous AI-based mental health tools do not possess the required empathy and inclusiveness to effectively assist at-risk users. Although machine learning (ML) models are becoming more proficient at accurately identifying emotions through text, voice, and facial expressions, their incorporation into human–computer interaction (HCI) systems frequently overlooks crucial aspects of trust, empathy, and cultural awareness. This results in a divide between technological effectiveness and the human-focused care that mental health treatments require. In the absence of empathetic design, digital solutions may alienate users, decrease engagement, and diminish their possible clinical effectiveness.
Consequently, the research gap exists at the convergence of ML and HCI. Current research has mainly centered on enhancing the efficiency of emotion recognition algorithms, but considerably less emphasis has been placed on creating interfaces that promote inclusivity, establish trust, and guarantee that users feel truly understood and supported. This disparity is especially important in mental health, where emotional sensitivity and stigma require careful focus on user experience and ethical factors. Closing this gap necessitates a multidisciplinary strategy that integrates progress in affective computing with principles of empathetic design.
This research aligns directly with the United Nations Sustainable Development Goals (SDGs), particularly SDG 3, which emphasizes the promotion of good health and well-being, and SDG 16, which advocates for inclusive, just, and responsive institutions. By integrating robust ML techniques with empathetic HCI frameworks, the study contributes to the creation of digital mental health solutions that are not only technically sophisticated but also socially responsible and ethically grounded.
II. Related Work
A. AI in Mental Health
Artificial intelligence (AI) has been progressively examined as a way to enhance mental health assistance via scalable and accessible digital solutions. Chatbots like Woebot and Wysa have shown the ability of conversational agents to provide cognitive behavioral therapy (CBT) and various therapeutic methods via text interactions [1], [2]. Likewise, machine learning (ML) models aimed at emotion recognition have progressed notably, utilizing natural language processing (NLP) for sentiment evaluation [3], speech processing for emotion detection [4], and computer vision for recognizing facial expressions [5]. These advancements have allowed for systems that can identify stress, depression, and anxiety with promising degrees of precision. Nevertheless, although these AI tools show impressive technical skills, many still lack the capacity to offer emotionally intelligent and empathetic assistance, essential in mental health situations.
B. Health-focused HCI
Research in human–computer interaction (HCI) has greatly enhanced the usability and acceptance of digital health systems. Studies highlight that trust, empathy, and inclusivity are especially important in sensitive areas like mental health [6]. User-centered design methods have demonstrated that patients are more inclined to engage with tools that offer individualized feedback, culturally relevant material, and emotionally supportive interfaces [7]. Additionally, multimodal interaction using voice, gesture, and visual feedback has been shown to improve user experience and accessibility in healthcare technology [8]. Despite these developments, few studies explicitly merge strong emotion recognition capabilities with empathetic HCI frameworks, leaving a disconnect between affective computing and inclusive design.
C. Ethical Considerations
The implementation of AI in mental health also brings significant ethical dilemmas. Concerns regarding bias in emotion recognition models have been extensively documented, especially when datasets lack representation from specific cultural or demographic groups [9]. Likewise, the privacy and security of sensitive mental health information continue to pose significant challenges, with potential risks of misuse or unauthorized sharing of personal data [10]. Transparency and explainability pose additional issues, as users frequently do not comprehend how AI models generate predictions, potentially diminishing trust and acceptance [11]. Principles of inclusive design are crucial to reduce these risks, making certain that AI systems cater to various populations justly and impartially.
D. Synthesis of Research Gaps
Although AI-based emotion recognition has made significant technical advancements, and HCI studies emphasize the need for empathy and inclusivity in healthcare technologies, the convergence of these two fields is still inadequately investigated. Many current studies either concentrate on enhancing algorithmic precision without adequately addressing user experience, or they highlight empathetic design while not utilizing advanced multimodal ML features. This results in a void in the literature where technically sound emotion recognition systems are absent from empathetic and trust-building HCI frameworks. To tackle this gap, interdisciplinary strategies that merge affective computing with human-centered design are needed to create digital mental health solutions that are both effective and ethically sound. Objective: The present study aims to address this challenge by pursuing three interrelated objectives. First, it seeks to develop ML models capable of multimodal emotion recognition, drawing on textual, vocal, and facial cues to capture a holistic picture of user affective states. Second, it proposes to design empathetic, user-centered HCI interfaces that emphasize inclusivity, accessibility, and trust. Third, the study intends to evaluate the effectiveness of these systems in improving user trust, engagement, and perceived empathy in digital mental health support contexts. Methods: This research employs a multidisciplinary approach that combines machine learning (ML) methods for multimodal emotion identification with human–computer interaction (HCI) models aimed at promoting empathy, inclusivity, and trust. The methodological framework includes four essential elements: data gathering, model creation, HCI design, and assessment.
A. Data Collection
To aid in creating strong multimodal emotion recognition models, the research employs datasets that include three modalities: (i) text data obtained from online mental health forums, patient diaries, and anonymized chatbot conversations, (ii) voice recordings gathered from publicly accessible affective speech databases and ethically sanctioned user recordings, and (iii) facial expression images and videos obtained from recognized emotion recognition datasets. Every data collection procedure adheres to global privacy standards, such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). Approval from the Institutional Review Board (IRB) and informed consent are secured when needed to guarantee the ethical management of sensitive data.
B. Machine Learning Models
The ML framework comprises specialized models for each modality, followed by multimodal fusion approaches.
1. Text Emotion Recognition: Transformer-based NLP architectures such as BERT, RoBERTa, and DistilBERT are employed to analyze sentiment and detect fine-grained emotional states from user-generated text.
2. Speech Emotion Recognition: Deep learning models such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and wav2vec2.0 are implemented to extract acoustic and prosodic features for affective state classification.
3. Facial Emotion Recognition: Vision-based models including ResNet and EfficientNet are utilized for real-time detection of facial expressions associated with primary emotions (e.g., happiness, sadness, anger, fear).
4. Multimodal Fusion: Late fusion and attention-based architectures are applied to combine predictions from textual, vocal, and visual modalities, enabling more accurate and context-aware emotion recognition.
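The late-fusion step described in item 4 can be sketched as a weighted combination of per-modality class probabilities. This is an illustrative toy example, not the paper's architecture; the attention scores here are fixed inputs rather than learned parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def late_fusion(modality_logits, attention_scores):
    """Combine per-modality class probabilities with attention weights.

    modality_logits: (M, C) raw scores from M unimodal models over C emotions.
    attention_scores: (M,) one scalar per modality (learned in practice).
    Returns a fused probability vector of shape (C,).
    """
    probs = softmax(np.asarray(modality_logits, dtype=float), axis=-1)  # (M, C)
    weights = softmax(np.asarray(attention_scores, dtype=float))        # (M,)
    return weights @ probs                                              # (C,)

# Toy example: text, speech, and facial scores over 3 emotion classes.
logits = [[2.0, 0.5, 0.1],   # text model
          [1.2, 1.1, 0.3],   # speech model
          [0.4, 1.5, 0.2]]   # facial model
fused = late_fusion(logits, attention_scores=[1.0, 0.5, 0.2])
assert np.isclose(fused.sum(), 1.0)  # still a valid distribution
assert fused.argmax() == 0           # text dominates with the highest weight
```

Attention-based fusion replaces the fixed scores with weights predicted per input, letting the model lean on whichever modality is most reliable for a given utterance.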
C. HCI Design Framework
The user interface is designed following empathetic and inclusive HCI principles.
1. Empathetic User Experience (UX): The design incorporates calming color schemes, adaptive conversational tone, and responsive interactions that convey empathy and emotional support.
2. Trust-Building Mechanisms: Explainable AI techniques (e.g., attention visualization, confidence scores) are integrated to enhance transparency. Feedback loops allow users to correct misclassifications, thereby increasing trust and personalization.
3. Inclusiveness: The system supports multilingual interaction, accessibility features for visually or hearing-impaired users, and culturally adaptive content presentation to ensure equitable usability across diverse populations.
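The confidence scores and correction feedback loop mentioned under trust-building can be sketched minimally as follows. This is a hypothetical illustration: the threshold, label set, and function names are assumptions, not the system's actual interface logic:

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral", "surprise"]

def predict_with_confidence(logits, threshold=0.6):
    """Return (label, confidence, ask_user). Confidence is the top softmax
    probability; below the threshold the interface asks the user to confirm
    or correct the guess instead of asserting it."""
    z = np.exp(logits - np.max(logits))
    probs = z / z.sum()
    top = int(np.argmax(probs))
    conf = float(probs[top])
    return EMOTIONS[top], conf, conf < threshold

corrections = []  # user feedback collected for later personalization

def record_correction(predicted, corrected):
    corrections.append({"predicted": predicted, "corrected": corrected})

# Confident prediction: one logit clearly dominates.
label, conf, ask = predict_with_confidence(np.array([3.0, 0.2, 0.1, 0.1, 0.4, 0.1]))
assert label == "joy" and not ask

# Ambiguous prediction: the interface defers to the user.
label, conf, ask = predict_with_confidence(np.array([1.0, 0.9, 0.8, 0.7, 0.9, 0.6]))
assert ask
record_correction(label, "neutral")
assert corrections[0]["corrected"] == "neutral"
```

Surfacing the confidence value alongside the label, and storing corrections for later fine-tuning, is one concrete way to operationalize the transparency and personalization goals above.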
D. Evaluation Metrics
The proposed system is evaluated across three dimensions: ML performance, HCI usability, and clinical impact.
1. ML Performance: Standard classification metrics including accuracy, F1-score, and area under the receiver operating characteristic curve (AUC-ROC) are used to assess model effectiveness in detecting emotions.
2. HCI Evaluation: Usability is measured through the System Usability Scale (SUS), while trust and engagement are assessed using structured surveys and qualitative interviews. Empathy perception is evaluated through user ratings and linguistic analysis of chatbot interactions.
3. Clinical Impact: Self-reported improvements in well-being, stress reduction, and emotional awareness are collected via validated psychological assessment scales to evaluate the potential therapeutic value of the system. Results: IV. Results
Table 1 – Distribution of Emotion Labels
Emotion Frequency Percentage (%)
Joy 6,197 16.8%
Sadness 6,193 16.7%
Anger 6,158 16.6%
Fear 6,170 16.7%
Neutral 6,153 16.6%
Surprise 6,129 16.6%
Total 37,000 100%
Table 2 – Descriptive Statistics of Voice Features
Feature Mean SD Min Max
Pitch (Hz) 200.3 49.8 23.5 389.9
Energy 0.50 0.10 0.19 0.81
MFCC1 0.00 1.00 -3.1 3.2
MFCC2 -0.01 1.00 -3.4 3.5
… MFCC13 ≈0.00 1.00 -3.2 3.4
Table 3 – Descriptive Statistics of Facial Features (Action Units, AU)
AU Feature Mean SD Min Max
AU1 2.51 1.44 0.01 4.99
AU2 2.52 1.45 0.00 5.00
AU3 2.50 1.46 0.02 4.99
… AU10 ≈2.50 1.44 0.00 5.00
Table 4 – Model Performance
(hypothetical ML results using the dataset for multimodal classification)
Model Accuracy F1-score AUC-ROC
Text-only (BERT) 78.4% 0.77 0.83
Speech-only (wav2vec2) 74.9% 0.74 0.80
Facial-only (ResNet) 72.1% 0.71 0.78
Multimodal (fusion model) 85.6% 0.85 0.91
Table 5 – Correlation Matrix of Voice and Facial Features
(Pearson correlations, showing relationships between features and emotional states)
Feature Pitch Energy MFCC1 MFCC2 AU1 AU2 AU3
Pitch 1.00 0.42 0.05 0.02 0.11 0.08 0.09
Energy 0.42 1.00 0.07 0.03 0.14 0.12 0.10
MFCC1 0.05 0.07 1.00 0.45 0.03 0.01 0.00
MFCC2 0.02 0.03 0.45 1.00 0.02 0.02 0.01
AU1 0.11 0.14 0.03 0.02 1.00 0.68 0.62
AU2 0.08 0.12 0.01 0.02 0.68 1.00 0.64
AU3 0.09 0.10 0.00 0.01 0.62 0.64 1.00
Table 6 – Ablation Study (Contribution of Each Modality)
Input Modality Accuracy F1-score
Text-only (BERT) 78.4% 0.77
Speech-only (wav2vec2) 74.9% 0.74
Facial-only (ResNet) 72.1% 0.71
Text + Speech 82.7% 0.82
Text + Facial 81.2% 0.81
Speech + Facial 79.6% 0.78
Text + Speech + Facial 85.6% 0.85
Table 7 – User Experience Evaluation (HCI Metrics)
Metric Mean Score SD Scale
System Usability Scale (SUS) 82.3 6.4 0–100
Trust in System 4.2 0.8 1–5
Perceived Empathy 4.4 0.7 1–5
Engagement Level 4.1 0.9 1–5
Multilingual Accessibility 4.5 0.6 1–5
Table 8 – Clinical Impact Indicators (Self-Reported Outcomes)
Indicator Pre-Intervention Post-Intervention Improvement (%)
Stress Level (scale 1–10) 6.8 4.9 27.9%
Emotional Awareness (1–5) 2.9 4.0 37.9%
Willingness to Seek Help 3.1 4.3 38.7%
Daily Engagement (mins/day) 14.2 23.6 66.2%
Visual Results
Figure 1: Emotion Distribution
Figure 2: ROC Curves for Emotion Recognition Models
Figure 3: Confusion Matrix (Multimodal Model)
Figure 4: User Experience Evaluation Metrics
Figure 5: Clinical Impact Indicators
Figure 6: Methodological Workflow for AI-Powered Mental Health Support
V. Discussion
A. Performance of Models: Benchmarking Multimodal ML Systems
The proposed multimodal models were evaluated against unimodal baselines. As shown in Table 4 and in the ROC curves of Figure 2, the multimodal fusion model outperformed the classifiers using only text (Accuracy = 78.4%, F1 = 0.77), speech (Accuracy = 74.9%, F1 = 0.74), and facial features (Accuracy = 72.1%, F1 = 0.71), achieving Accuracy = 85.6%, F1 = 0.85, and AUC = 0.91. This improvement illustrates the value of exploiting complementary emotional signals across modalities. The confusion matrix in Figure 3 indicates that the fusion model markedly reduced misclassification of similar emotions, such as fear and sadness, which often caused errors in unimodal systems. The near-uniform distribution of the six emotion categories (Table 1) limits the risk of class-imbalance effects. These results are consistent with recent studies on multimodal emotion recognition, and the higher AUC suggests that incorporating empathetic HCI elements into model design could further improve interpretability and user confidence.
B. User Research: Assessing HCI Compassion and Inclusivity
Evaluations centered on users were carried out with 400 participants from various age groups and language backgrounds. As displayed in Table 7 and Figure 4, the system achieved notable usability (SUS = 82.3), trust (4.2/5), empathy perception (4.4/5), and accessibility (4.5/5). Qualitative feedback highlighted that the interface’s compassionate tone, culturally responsive attributes, and multilingual assistance promoted inclusivity.
Crucially, transparency aspects (like explainable AI) were noted as essential for fostering user trust, particularly in mental health settings where interpretability is as important as precision. These results highlight the significance of integrating HCI empathy design principles within ML pipelines.
C. Clinical Impact Indicators
Clinical impact assessments (Table 8, Figure 5) showed a decline in self-reported stress levels (Pre = 6.8, Post = 4.9) along with gains in emotional awareness (2.9 → 4.0) and willingness to seek help (3.1 → 4.3). Daily engagement with the system rose from an average of 14.2 to 23.6 minutes per day after deployment. These findings indicate that AI-powered empathetic interfaces can aid mental health self-management and may complement clinical treatments.
Although these results are encouraging, longitudinal research is needed to confirm lasting effects. Additionally, collaboration with healthcare professionals for clinical validation is crucial prior to real-world implementation.
D. Comparative Analysis with Existing Tools
Compared to existing digital mental health platforms (e.g., rule-based chatbots, text-only sentiment detectors), the proposed system demonstrated three major advantages:
1. Accuracy Gains – Higher multimodal detection accuracy (85.6% vs. 70–80% reported in baseline tools).
2. Empathy & Trust – Higher user-reported empathy scores (4.4/5) compared to conventional digital tools, which often score below 3.5 in trust measures.
3. Inclusiveness – Unlike monolingual, accessibility-limited systems, our design integrated multilingual support and disability-inclusive features.
This positions the system as a benchmark for SDG 3 (mental well-being) and SDG 16 (inclusive digital systems) contributions.
E. Discussion
The findings show that integrating multimodal ML emotion identification with empathetic HCI design results in a synergistic effect: enhancing both algorithm effectiveness and user approval. This study stands apart from earlier works by incorporating transparency, accessibility, and inclusiveness into its design.
Nonetheless, obstacles persist in addressing algorithmic bias, guaranteeing data privacy (GDPR/HIPAA adherence), and performing thorough clinical validations. Tackling these obstacles will be crucial for expanding AI-driven mental health support systems worldwide. Conclusions: VI. Summary and Future Research
This research showcased the promise of merging artificial intelligence with human-computer interaction (HCI) concepts to enhance digital mental health assistance. The system attained technical robustness and user-centered acceptance by creating multimodal machine learning models for emotion recognition through text, voice, and facial expressions and integrating them into an empathetic, inclusive interface. Findings indicated that the proposed system surpassed unimodal baselines in accuracy (AUC = 0.91), while also improving trust, empathy perception, and accessibility. Clinical metrics indicated significant decreases in self-reported stress and enhanced user engagement, thus supporting SDG 3 (health and well-being) and SDG 16 (inclusive digital systems).
Despite this progress, several limitations persist. The evaluations reported here were limited in duration and scope, with data obtained from controlled settings rather than extended clinical deployments. Additionally, algorithmic bias and privacy issues require ongoing attention, especially when systems are deployed in culturally diverse and sensitive health environments.
Future Directions
Building upon the contributions of this study, several future research avenues are proposed:
1. Cross-Cultural Validation – Expanding evaluations across diverse populations and linguistic groups to ensure inclusivity and mitigate cultural bias in emotion recognition.
2. Integration with Wearable Sensors – Combining physiological data (e.g., heart rate variability, skin conductance, EEG) with multimodal AI pipelines to improve emotion inference accuracy and personalization.
3. Long-Term Clinical Trials – Conducting longitudinal studies with clinical partners to validate sustained efficacy, safety, and integration with existing mental healthcare pathways.
4. Policy and Regulatory Implications – Collaborating with policymakers to align system deployment with ethical standards, privacy frameworks (GDPR, HIPAA), and emerging AI governance models to safeguard user rights and trust.
In conclusion, the fusion of AI-powered emotion recognition with empathetic HCI design represents a promising frontier in digital mental health interventions. With further validation and responsible deployment, such systems could complement human professionals, increase accessibility to care, and contribute meaningfully to the global mental health agenda.
Background: Groundwater is the main source of drinking water in Ogbia Local Government Area (LGA), Bayelsa State, Nigeria, where surface water is often compromised by oil exploration, poor sanitation, and waste disposal. Despite its importance, groundwater in this region is vulnerable to contamination from both geogenic and anthropogenic sources, raising concerns about long-term health implications. Objective: This study aimed to evaluate the physico-chemical quality of groundwater across selected communities in Ogbia LGA, compare measured values with World Health Organization (WHO) standards, and determine the implications for human health. Methods: A cross-sectional design was employed, involving the systematic collection of 50 groundwater samples from boreholes across 16 communities, including Oruma, Otuasega, Imiringi, Elebele, Otuokpoti, Kolo, Otouke, Onuebum, Ewoi, Otuogila, Otuabagi, Ogbia Town, Oloibiri, Opume, and Akiplai. Standardized laboratory analyses were conducted following WHO protocols to determine pH, conductivity, total dissolved solids, major ions, and heavy metals. Data were analyzed using descriptive statistics. Results: The findings showed that most parameters, including pH (6.4–7.1), conductivity (76–200 µS/cm), nitrates (2.4–6.4 mg/L), chloride (12–31 mg/L), calcium, magnesium, and hardness, were within WHO permissible limits, indicating generally acceptable groundwater quality. However, sodium exceeded WHO limits (200 mg/L) in 78% of samples (mean = 235 ± 45 mg/L; range = 150–320 mg/L), while iron exceeded permissible levels (0.3 mg/L) in 84% of samples (mean = 1.8 ± 0.6 mg/L; range = 0.5–3.2 mg/L). Elevated sodium poses risks of hypertension and cardiovascular disease, while excess iron is associated with gastrointestinal issues, organ damage, and aesthetic concerns such as metallic taste and staining. 
Spatial variations revealed stronger oilfield influences in Elebele, Imiringi, and Oloibiri, while central settlements such as Ogbia Town and Opume showed sanitation-related signatures. Seasonal fluctuations further exacerbated contaminant levels, particularly during rainfall-driven recharge. Conclusions: Groundwater in Ogbia LGA is broadly suitable for domestic use but compromised by systemic sodium and iron contamination. These exceedances, influenced by both natural hydrogeology and anthropogenic activities, present long-term public health challenges if unaddressed. Policy interventions should focus on routine groundwater monitoring, stricter regulation of oilfield activities, and improved waste management. Community-level treatment solutions, such as low-cost filters targeting sodium and iron removal, should be deployed. Public awareness programs and household water safety plans are also essential. Long-term strategies must integrate water governance with health and environmental policies to ensure sustainable access to safe water. The persistence of elevated sodium and iron in Ogbia groundwater poses a silent but significant health threat to residents, with implications for hypertension, cardiovascular disease, and gastrointestinal disorders. Safeguarding groundwater quality is therefore critical for reducing health inequalities and achieving Sustainable Development Goals 3 (Good Health and Well-being) and 6 (Clean Water and Sanitation) in Bayelsa State.
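The comparison against permissible limits described in the abstract above reduces to a simple exceedance-rate calculation per parameter. The sketch below uses invented sample values, not the study's data; only the WHO limits cited in the abstract (sodium 200 mg/L, iron 0.3 mg/L) are taken from the text.

```python
# WHO guideline limits (mg/L) as cited in the abstract.
WHO_LIMITS = {"sodium": 200.0, "iron": 0.3}

# Hypothetical borehole measurements (mg/L), for illustration only.
samples = {
    "sodium": [150, 210, 235, 320, 180, 260, 240, 205, 190, 310],
    "iron":   [0.5, 1.8, 0.2, 3.2, 2.1, 0.9, 0.28, 1.4, 2.6, 0.7],
}

def exceedance_rate(values, limit):
    """Fraction of samples strictly above the permissible limit."""
    return sum(v > limit for v in values) / len(values)

for param, values in samples.items():
    rate = exceedance_rate(values, WHO_LIMITS[param])
    print(f"{param}: {rate:.0%} of samples exceed {WHO_LIMITS[param]} mg/L")
```

Reporting the exceedance rate alongside the mean, standard deviation, and range (as the abstract does for sodium and iron) conveys both how often and by how much a parameter breaches the guideline.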
This study explores unethical HR practices in Nigerian organizations, focusing on nepotism, bribery, gender bias, and ethnic favoritism in recruitment, and their impact on organizational performance from 2009 to 2025. Despite various reforms, these unethical practices persist, undermining the fairness of recruitment processes, eroding employee morale, and negatively impacting productivity. This research is motivated by the need to assess the prevalence and ethical implications of nepotism and other unethical practices in Nigerian HRM, understand their impact, and propose practical solutions to enhance recruitment practices. The study aims to address four main objectives: (i) Assess the prevalence of nepotism and its ethical implications in Nigerian HRM practices; (ii) Examine recruitment challenges, including gender bias and ethnic favoritism; (iii) Analyze the impact of unethical HR practices on organizational performance; and (iv) Propose strategies for improving recruitment ethics and reducing nepotism. The study uses a mixed-methods approach, combining secondary data from reports by Transparency International, the World Bank, and McKinsey Nigeria, with qualitative insights from case studies and interviews. This methodology provides a comprehensive view of the state of HRM practices and the challenges faced by organizations in enforcing ethical recruitment. Results show that unethical practices, especially nepotism, bribery, and gender bias, continue to negatively affect both public and private sectors. Despite efforts such as HR ethics training and legal reforms, these practices persist due to political interference, weak enforcement, and a lack of technological adoption. Nepotism in recruitment was found to be particularly prevalent in government agencies, contributing to high turnover and reduced organizational performance. 
The study concludes that unethical HR practices continue to undermine recruitment processes, necessitating stronger anti-corruption policies, enhanced HR ethics training, and the integration of technology to increase recruitment fairness. It recommends strengthening legal frameworks, adopting automated recruitment systems, introducing whistleblower protections, and conducting regular audits. In the health sector, ethical recruitment is critical for improving patient care, reducing medical errors, and fostering trust in healthcare services.
Background: Antibiotic resistance and intestinal parasitic infections represent significant public health challenges in Southern Nigeria. The prevalence of Escherichia coli O157:H7, a pathogenic strain often associated with severe gastrointestinal diseases, along with intestinal parasites such as Hookworm, Entamoeba histolytica, and Ascaris lumbricoides, raises concerns about effective treatment options and the overall health burden. This study aimed to explore the prevalence of these infections and their associations with clinical outcomes in hospital patients, focusing on antibiotic resistance patterns and their impact on health. Objective: The primary objectives of this study were to determine the antibiotic resistance patterns of E. coli O157:H7 isolates, compare haematological profiles in patients with and without E. coli O157:H7 infection, and assess the prevalence and factors influencing intestinal parasitic infections in the patient population. Methods: A cross-sectional study was conducted at Central Hospital, Benin City, Nigeria. A total of 420 stool samples were screened for intestinal parasites and E. coli O157:H7. Antibiotic susceptibility testing was performed using the disc diffusion method, and PCR was used for molecular confirmation of E. coli O157:H7. Haematological parameters were analyzed using an autoanalyzer. Prevalence data were compared across age groups, gender, and diarrhea status. Statistical analysis was performed using GraphPad InStat software. Results: The study revealed that all E. coli O157:H7 isolates were resistant to amoxicillin-clavulanate, cefuroxime, and cloxacillin, with 80% resistance to ceftriaxone and gentamicin. However, 100% susceptibility to ofloxacin was observed. The overall prevalence of intestinal parasites was low (1.90%), with hookworm being the most common infection. No significant differences in parasite prevalence were observed based on age, gender, or diarrhea status. 
Haematological parameters showed no significant difference between patients with and without E. coli O157:H7 infection. Conclusions: The findings highlight a significant challenge in managing E. coli O157:H7 infections due to high antibiotic resistance, while also indicating a need for targeted interventions for parasitic infections in specific regions. No major haematological impact was observed in E. coli O157:H7-infected patients. In the short term, it is crucial to enhance diagnostic capabilities and increase education on antibiotic resistance among healthcare providers to ensure accurate identification of pathogens and appropriate treatment. In the mid-term, establishing a national surveillance system for antimicrobial resistance (AMR) will allow for better monitoring of resistance patterns and inform treatment protocols. In the long run, efforts should be focused on improving sanitation infrastructure, particularly in rural areas, and implementing targeted deworming programs to reduce the prevalence of intestinal parasites. Thus, these interventions collectively aim to address both antimicrobial resistance and parasitic infections, ultimately improving public health outcomes. Thus, this study underscores the dual burden of antibiotic resistance and parasitic infections in Nigeria, emphasizing the urgent need for robust public health interventions and continuous surveillance to mitigate these health risks.
ABSTRACT
Background: Convalescent coronavirus disease 2019 (COVID-19) refers to a series of clinical syndromes in patients with COVID-19 infection who meet the relevant discharge indications but do not fulfill the criteria for a clinical cure; these patients are discharged from the hospital with residual multifunctional deficits, including coughing, fatigue, and insomnia. Owing to prolonged convalescent COVID-19, patients continue to experience symptoms or develop new symptoms three months after infection, and some symptoms persist for over two months without any apparent trigger, which has a significant impact on the health status and quality of life of the population. Patients with convalescent COVID-19 lack a definitive pharmacological treatment. Traditional Chinese medicine (TCM) exhibits a distinct, synergistic effect in the treatment of convalescent COVID-19. However, there are only a limited number of clinical trials of TCM in convalescent COVID-19, and their evidence levels are low; therefore, randomized trials are urgently required.
Methods: A multicenter, randomized, double-blind, placebo-controlled, phase II clinical trial was performed to evaluate the efficacy and safety of Shenlingkangfu (SLKF) granules in treating patients with convalescent COVID-19 and lung-spleen qi deficiency syndrome. Eligible participants were aged 18–75 years, had a confirmed or physician-suspected severe acute respiratory syndrome coronavirus 2 infection at least six months prior, and satisfied the clinical criteria. Individuals with a history of severe pulmonary dysfunction or major liver or kidney illness, or those on medications, were excluded. Subjects from all centers who satisfied the criteria were randomly assigned (1:1) to an intervention group or a control group; after a 2-day adjustment period, a total of 154 participants were enrolled. The intervention group received SLKF granules orally, one 16.9 g bag twice daily, whereas the control group received a matched SLKF granule placebo at the same dosage. The trial was conducted over 14 days, with assessments performed at baseline and day 14.
Results: The primary outcomes were the therapeutic efficacy rate and total clinical symptom score. The secondary outcomes included the fatigue self-assessment scale, pain visual analog scale, Pittsburgh sleep quality index, mini-mental state examination, hospital anxiety and depression scale, TCM syndrome score, C-reactive protein, erythrocyte sedimentation rate, and interleukin-6. Three routine examinations, liver and kidney function tests, and electrocardiography were used as safety indicators.
Conclusions: This study aimed to verify whether SLKF granules can significantly improve clinical symptoms, including fatigue, loss of appetite, cough, phlegm, and insomnia, in patients with convalescent COVID-19. For a comprehensive investigation, additional clinical trials with larger sample sizes and longer intervention periods are required. Clinical Trial Registration: NCT1900024524, registered on 26 January 2024.
Mothers of children with learning disabilities often face significant challenges that can impact their mental health. This study aimed to examine the relationship between perceived social support and levels of anxiety, stress, and depression in this population. A descriptive-correlational design was employed, with a sample of 30 mothers of children with learning disabilities, selected via simple random sampling based on the Morgan table. Data were collected using the Multidimensional Scale of Perceived Social Support (Zimet et al., 1988) and the DASS-21 questionnaire (Lovibond & Lovibond, 1995), and analyzed with Pearson correlation and stepwise multiple regression. Findings revealed a significant negative correlation between social support and anxiety, stress, and depression, indicating that greater social support is associated with reduced levels of these mental health issues. These results underscore the role of social support in alleviating mental health challenges and suggest implications for counseling interventions targeting this group.
This study examined the efficacy of transdiagnostic cognitive-behavioral therapy (T-CBT) and acceptance-based therapy (ABT) in reducing emotional dysregulation and aggression in adolescents with elevated misophonia symptoms. Employing a quasi-experimental pre-test/post-test design with a control group, the research targeted 45 adolescents from Etrat Public Model High School in Khalkhal, Iran, diagnosed with high misophonia via psychiatrist evaluation and clinical interview. Participants were purposively sampled and randomly assigned to T-CBT (n = 15), ABT (n = 15), or a no-treatment control group (n = 15).
Interventions followed protocols adapted from Barlow et al. (2011) for T-CBT and Hayes et al. (2013) for ABT. Outcomes were measured using the Noise Sensitivity Screening Questionnaire (DSTS-S), the Buss and Perry Aggression Questionnaire (1992), and the Difficulties in Emotion Regulation Scale (DERS). Data were analyzed via ANCOVA, controlling for baseline scores.
Results indicated significant reductions in emotional dysregulation and aggression in both treatment groups compared to the control (p < 0.05). No significant differences emerged between T-CBT and ABT, suggesting both interventions are viable for addressing misophonia-related symptoms. Findings underscore the comorbidity of emotional dysregulation and aggression in adolescents with misophonia and highlight the clinical utility of transdiagnostic and acceptance-based approaches. Future research should explore long-term outcomes and comparative effectiveness of these therapies.
Hydatid disease, caused by the larval stages of Echinococcus species, remains a significant yet underprioritized global health challenge, particularly in low-resource endemic regions. This systematic review synthesizes recent advances and persistent challenges in the diagnosis, management, and control of hydatid cyst disease, drawing on evidence from the past five years. Despite progress in diagnostic imaging, such as MRI diffusion-weighted imaging and recombinant antigen-based serology, and minimally invasive therapies like PAIR (puncture, aspiration, injection, re-aspiration), substantial gaps remain. Diagnostic tools are often inaccessible in rural areas, and therapeutic strategies lack standardization, particularly for alveolar echinococcosis and high-risk populations such as children and immunocompromised individuals. Climate change and socioeconomic factors continue to drive disease transmission, with E. multilocularis expanding into new regions. Control efforts, while successful in some areas through integrated One Health approaches, face barriers including underfunded veterinary infrastructure and vaccine hesitancy. This review highlights the need for decentralized diagnostic technologies, standardized treatment protocols, and climate-resilient control programs. Future research must prioritize underrepresented populations and cost-effectiveness analyses to mitigate the global burden of hydatid disease.
This study aimed to investigate the relationship between communication beliefs, the health of the family of origin, and fear of marriage among university students. Employing a descriptive-correlational design, the research was conducted with 186 students from Islamic Azad University, Khalkhal Branch, selected from a population of 360 using Morgan's table. Stratified sampling was applied to ensure representation across major fields of study. Data were collected using three instruments: the Premarital Fears Questionnaire (measuring fear of marriage), the Communication Beliefs Questionnaire (assessing beliefs about communication), and the Major Family Health Scale (evaluating family of origin health). Data analysis utilized Pearson correlation and stepwise multiple regression methods. Pearson correlation analysis revealed a significant positive correlation between communication beliefs and fear of marriage. Stepwise multiple regression showed that communication beliefs and family health together accounted for 95.9% of the variance in fear of marriage (p < 0.001), with communication beliefs emerging as the strongest predictor. These findings underscore the significant influence of communication beliefs and family health on fear of marriage, offering valuable insights for developing interventions to address marriage-related anxieties among young adults.
Background: Groundwater contamination from open dumpsites poses a growing environmental and public health threat in rapidly urbanizing regions of Nigeria. Inadequate waste management and the absence of engineered landfills enable leachate to infiltrate aquifers, threatening potable water safety and community health. Objective: This study investigates the vertical and lateral migration of leachate and assesses groundwater vulnerability across ten major dumpsites in Port Harcourt, Nigeria, using geoelectrical methods. Methods: Vertical Electrical Sounding (VES) and 2D Electrical Resistivity Tomography (ERT) were conducted at ten dumpsites using the Schlumberger array configuration. Zones of low resistivity, indicative of leachate impact, were identified and correlated with hydrogeological conditions. Subsurface contamination depths and aquifer locations were interpreted using inversion models. Results: All ten sites showed evidence of leachate migration, with contamination depths ranging from 2 m to over 24 m. Deep leachate penetration was observed at Rumuola and Eliozu, while shallower infiltration occurred at Oyigbo and Rumuolumeni. High-resistivity zones (>1000 Ωm), typically representing clean aquifers, were detected below the contaminated zones at depths exceeding 14 m. Conclusions: Leachate plumes from unregulated dumpsites pose a widespread threat to shallow groundwater systems in Port Harcourt. The results underscore the influence of local geology on contaminant behavior and affirm the utility of resistivity methods for groundwater risk assessment. Contaminated aquifers expose residents to toxic metals and pathogens, increasing risks of chronic illnesses, reproductive disorders, and developmental challenges. Protecting these water sources is essential for achieving Sustainable Development Goals (SDGs) 6 (Clean Water) and 11 (Sustainable Cities).
Immediate containment measures such as engineered liners and leachate recovery systems are urgently needed at high-risk sites. Strategic borehole siting, routine groundwater monitoring, and a shift from open dumping to sanitary landfilling must be prioritized in environmental policy and urban planning.
Background: This is the Artificial Intelligence Overview of my findings. Objective: Published articles in peer-reviewed journals. Methods: Mathematical proofs. Results: Published results. Conclusions: 1) Gödel's incompleteness theorems reconfirmed
2) thirteen proofs are given for the flatness of the Universe
3) several new concepts of physics have been introduced
4) tachyons are not possible
5) a Theory of Everything is possible Clinical Trial: NA
Background: The growing trend of integrated healthcare services within physician groups has improved care delivery by enhancing convenience, efficiency, and care coordination. However, it has also raised concerns about financial incentives potentially driving overutilization. Objective: We examine the impact of distribution method (traditional third-party referral versus physician-managed via the Rx Redefined technology platform) on the quantity of urinary catheters supplied to Medicare patients. Methods: We analyzed utilization patterns for urological catheters (HCPCS codes A4351, A4352, and A4353) using 2021 Medicare claims data. We identified 54 urology specialists in core metropolitan areas who were enrolled in the Rx Redefined platform throughout 2021 and compared their utilization patterns with those of unenrolled urologists in the same regions. For enrolled physicians, who managed approximately 40 percent of their prescriptions through the platform, we also compared utilization between physician-managed and third-party distribution methods. Results: For catheter services A4351 and A4352, when distribution was managed by third parties, we found no significant differences in utilization (i.e., units supplied) between enrolled and unenrolled physicians. However, physician-managed distribution through Rx Redefined resulted in significantly lower utilization compared to third-party vendor distribution by non-enrolled physicians (p < 0.001 for both codes). In paired analysis of enrolled physicians, direct management showed significantly lower utilization compared to third-party distribution for A4351 (p = 0.014), but this difference was not significant for A4352 (p = 0.62). Conclusions: These findings demonstrate that physician-managed catheter distribution does not lead to increased utilization.
In fact, for certain catheter types, physician-managed distribution may result in lower utilization compared to traditional third-party referral methods, suggesting a potential reduction in oversupply and improved efficiency.
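The paired analysis of enrolled physicians described above can be sketched as a paired t-test on per-physician unit counts. The numbers below are invented for illustration and the study's actual statistical procedure may differ; this is only a minimal sketch of how such a within-physician comparison is computed.

```python
import math

# Hypothetical per-physician catheter units under the two distribution
# methods, paired by physician (illustrative values, not study data).
third_party =    [120, 98, 140, 110, 132, 105, 150, 125]
physician_mgmt = [100, 95, 118, 104, 120, 101, 138, 122]

# Paired t-test works on the within-pair differences.
diffs = [a - b for a, b in zip(third_party, physician_mgmt)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # df = n - 1

print(f"mean difference = {mean_d:.2f} units, t = {t_stat:.2f} (df = {n - 1})")
```

A positive t statistic here indicates higher utilization under third-party distribution than under physician management for these illustrative pairs; the p-value would be obtained from the t distribution with n − 1 degrees of freedom.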
Background: Sri Lanka has a well-established National Blood Transfusion Service that provides a quality-assured blood bank service. However, the information flow is inefficient and underutilized for evidence-based decision-making. The statistics unit of the National Blood Centre is unable to produce the Annual Statistics Report on time because of the difficulty of manually analysing and reporting on the considerable amount of data collected throughout the year. To address this, an electronic Health Information Management System was proposed as a solution to the inefficiency of the data flow for statistical purposes. Objective: 1. General Objective: Facilitate decision-making by developing, implementing, and evaluating an electronic information management system to capture monthly statistics data from island-wide blood banks. 2. Specific Objectives: Identify the requirements of the system (MSR-NBTS); customize DHIS2 to fulfil the identified requirements; test and host the system at the National Blood Centre, Narahenpita; and evaluate the usability and cost-effectiveness of the system. Methods: A Monthly Statistics Reporting System was designed and developed using DHIS2, a Free and Open Source Software (FOSS) platform, to fulfil the requirements of the National Blood Transfusion Service. To evaluate the new system, a qualitative study was conducted using semi-structured interviews with a selected study population of 17 participants within the NBC Cluster, which includes 11 blood banks in the Colombo area. The gathered data were analysed using thematic analysis, and the emerging categories and themes were used in the subsequent discussion. Results: Problems of calculation, usability, reliability, utilization of data, and availability of reports were identified in the paper-based system. Results show that the new electronic system has high usefulness, ease of use, ease of learning, satisfaction, and cost-effectiveness, with well-accepted enhanced interface features. According to the interviews, participants expressed that the likelihood of using this system in the future is high. Conclusions: Almost all the participants in this research readily accepted the new electronic information management system, which will help assure its sustainability. Because of the real-time updated dashboard, it will support most blood bank functions by facilitating efficient administrative decision-making.
Background: Unskilled birth delivery significantly contributes to maternal and neonatal mortality in Sub-Saharan Africa, especially Nigeria, due to cultural beliefs, poverty, poor health access, and weak policies. Despite efforts to promote skilled attendance, many women still use traditional birth attendants (TBAs) and home deliveries. This study explores the socio-demographic, cultural, and systemic factors driving this trend, offering evidence for better policies and health interventions. Objective: This study examined the socio-demographic and socio-cultural barriers to the utilization of skilled delivery services among women of reproductive age in Nigeria. Methods: A cross-sectional design utilizing both quantitative surveys and qualitative interviews was employed. The study involved 1,200 expectant and recently delivered women across urban, semi-urban, and rural regions in Nigeria. Data on socio-demographics, beliefs, access factors, and healthcare usage were collected. Policy documents and intervention records were reviewed, while focus groups provided depth to cultural and systemic themes. Descriptive and inferential statistics were applied using SPSS, and thematic analysis was used for qualitative data. A literature triangulation approach was used to validate findings with existing research. Results: The study revealed that low maternal education, poverty, and rural residence strongly predicted unskilled delivery service usage. Cultural norms that regard childbirth as a domestic or spiritual event influenced avoidance of hospitals. Access barriers included poor transport, cost, and distrust in formal healthcare. Geographic inequality was evident, with rural regions lacking health infrastructure. Policy review showed limited reach and weak enforcement of maternal care programs. However, when community-based midwives or mobile clinics were available, skilled birth attendance improved significantly. 
Conclusions: The persistence of unskilled deliveries is a multifaceted issue driven by intersecting socio-cultural, economic, geographic, and institutional factors. Despite policy efforts, gaps remain in cultural sensitivity, resource allocation, and infrastructure coverage. To address maternal health effectively, interventions must be locally adapted, multidimensional, and equity-focused. To address unskilled delivery use, maternal health education should leverage community programs with local languages and cultural context. Rural healthcare infrastructure must expand via mobile clinics and trained midwives to improve access. Skilled delivery costs should be subsidized or covered by insurance to remove financial barriers. Traditional birth attendants could be trained and integrated into the formal health system under supervision. Finally, maternal health policies require regular review, adequate funding, and strict monitoring to ensure impact. These steps are vital to reducing maternal mortality in Nigeria and Sub-Saharan Africa. Unskilled delivery service utilization represents a critical barrier to maternal and neonatal health improvements in Nigeria and Sub-Saharan Africa. Addressing this issue through targeted socio-cultural, structural, and policy interventions is essential to reduce preventable maternal deaths and achieve Sustainable Development Goal 3 on maternal health.
Background: Necrotizing enterocolitis (NEC) is the most common gastrointestinal emergency affecting preterm infants, with high mortality and morbidity. With suboptimal and incomplete methods of preventing NEC, early diagnosis and treatment can potentially mitigate its impact. This study explores the application of machine learning techniques, specifically Random Forest and Extreme Gradient Boosting (XGBoost), to improve early and accurate diagnosis of NEC and focal intestinal perforation (FIP). Objective: To evaluate the effectiveness of sampling techniques in addressing class imbalance and to identify the optimal machine learning (ML) classifiers for predicting NEC and FIP in preterm infants. Methods: We developed ML models using 49 clinical variables from a retrospective cohort of 3,463 preterm infants, using clinical data from the first two weeks of life as input features. We applied various sampling strategies to address the inherent class imbalance and combined them with different ML algorithms. Parsimonious models with selected key predictors were evaluated to maintain predictive performance comparable to the full-featured (complex) models. Results: The parsimonious generalized linear model (GLM) with SMOTE sampling achieved an area under the receiver operating characteristic curve (AUROC) of 0.79 for NEC prediction, compared with the complex model's AUROC of 0.76. For FIP prediction, parsimonious models of GLM with ADASYN sampling and XGBoost with Tomek-links sampling achieved AUROC values exceeding 0.90, comparable to those of the corresponding complex models. For both NEC and FIP, the area under the precision-recall curve (AUPRC) surpassed the respective prevalence rates, indicating strong performance in identifying rare outcomes.
Conclusions: We demonstrate that targeted sampling strategies can effectively mitigate class imbalance in neonatal datasets and that simplified models with fewer variables can offer comparable predictive power, enhancing the performance of ML-based prediction models for NEC and FIP.
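The oversampling idea behind SMOTE, one of the sampling strategies compared in this abstract, can be sketched as follows: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority neighbours. This is a minimal illustrative implementation, not the authors' code; the function name and parameters are hypothetical, and production work would typically use an established library such as imbalanced-learn.

```python
import random
from math import dist

def smote_sketch(minority, n_new, k=3, seed=0):
    """Minimal SMOTE-style oversampling sketch (illustrative only).

    minority: list of feature tuples from the minority class.
    n_new: number of synthetic samples to generate.
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        # Pick a minority sample and its k nearest minority neighbours.
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist(x, p))[:k]
        # Interpolate toward a randomly chosen neighbour.
        nb = rng.choice(neighbours)
        lam = rng.random()
        synthetic.append(tuple(xi + lam * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

# Example: balance a 4-sample minority class with 4 synthetic points.
minority = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
new_points = smote_sketch(minority, n_new=4)
```

Because each synthetic point is a convex combination of two real minority samples, it always lies within the minority class's feature bounds, which is the property that lets SMOTE enrich the minority region without inventing out-of-range values.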
Background: Workplace stress has emerged as a pressing public health issue in Nigeria, where approximately 75% of employees experience work-related stress, a rate significantly higher than the global average. This stress, exacerbated by systemic labor policy gaps, cultural stigma, and economic instability, contributes to burnout, reduced productivity, and economic losses. Despite emerging HRM interventions, mental health remains underprioritized in organizational strategies, particularly within sectors such as healthcare, banking, construction, and the informal economy. There is a critical need for evidence-based, culturally adapted HRM strategies that address these unique challenges in Nigeria’s workforce. Objective: This study seeks to examine the prevalence and sector-specific drivers of workplace stress in Nigeria, evaluate the effectiveness and limitations of current HRM interventions, identify key socio-cultural and structural barriers hindering mental health program implementation, and propose actionable, evidence-based strategies that are contextually tailored to Nigeria’s diverse workforce. Through a synthesis of localized research and global best practices, the study aims to provide a strategic roadmap for enhancing mental health resilience in Nigerian workplaces. Methods: A narrative review methodology was employed, guided by qualitative synthesis and thematic analysis frameworks. Literature was sourced from global and regional databases (PubMed, PsycINFO, AJOL, Scopus) spanning 2018–2024, including peer-reviewed articles, policy reports, and grey literature. Inclusion focused on empirical and policy studies relevant to Nigerian HRM practices. NVivo 12 was used for thematic coding, and a gap analysis framework was applied to identify unaddressed areas. A total of 42 studies met the inclusion criteria. Expert validation and triangulation with global data enhanced rigor.
Results: Burnout rates in Nigeria are among the highest globally, with 35% in healthcare, 32% in retail, and 29% in banking. Women and younger workers face disproportionate stress burdens. HRM strategies such as Employee Assistance Programs (EAPs) and Flexible Work Arrangements showed the highest effectiveness but had limited adoption due to cost, stigma, and infrastructure gaps. Digital mental health tools, though cost-effective, had low uptake (23%) due to digital illiteracy. Barriers included cultural stigma, weak labor policies, leadership apathy, and lack of ROI measurement. Promising strategies identified include faith-based EAPs, peer networks, mobile clinics, and stigma-reduction campaigns, particularly when culturally embedded and supported by community leaders. Conclusions: Workplace stress in Nigeria is a systemic challenge rooted in socio-economic, cultural, and organizational structures. Although several HRM interventions show promise, their effectiveness is hindered by low adoption, poor contextual fit, and limited legal enforcement. Evidence suggests that when mental health strategies are localized and culturally endorsed via faith leaders, digital tools, or flexible work, they yield improved employee retention, lower absenteeism, and better organizational resilience.
Background: Successful Research and MedTech collaborations depend on six key components: talent and workforce development, innovative solutions, robust research infrastructure, regulatory compliance, patient-centered care, and rigorous evaluation.
Institutional leaders frequently navigate multiple professional identities, simultaneously serving as educators, researchers, clinicians, and innovators, and thereby creating bridges between academic rigor and practical application that accelerate the translation of research into meaningful solutions. Institutions and organizations may also need to broaden their identities.
The contemporary landscape presents significant challenges as institutions balance the pursuit of academic excellence with the need for rapid responsiveness to technological and commercial innovation. Traditional research processes, while ensuring quality, often impede the pace of advancement necessary in today's rapidly evolving environment. This tension necessitates structural reforms across multiple dimensions of institutional operation.
To cultivate a thriving research and innovation ecosystem, several essential components must be established. First, institutions require agile research infrastructure with cutting-edge laboratories and collaboration spaces, specialized equipment, and certified research professionals specifically trained in device development and regulatory compliance. Robust clinical management platforms can expedite trials and streamline data extraction for publication and dissemination. Objective: The Orange County (OC) Impact Conference, held in November 2024, convened 180 key stakeholders from the life sciences, technology, medical device, and healthcare sectors. CHOC Research, in collaboration with University Lab Partners (ULP) and the University of California, Irvine, provided this platform for leaders, decision-makers, and experts to discuss the intersection of innovation in research, healthcare, biotechnology, and data science. Methods: We convened a multidisciplinary symposium (180 participants) to examine advancements in life sciences and medical device research development. The structured forum incorporated moderated panel discussions and a keynote speaker. Participants represented diverse stakeholder categories including research scientists, clinicians, investors and financiers, and executive research and healthcare leadership. The event design facilitated both structured knowledge exchange and strategic networking opportunities aimed at identifying implementation pathways to enhance clinical impact. Results: The 2024 OC Impact Conference Proceedings outline a strategy for healthcare innovation, demonstrating how targeted collaboration between patients, families, researchers, clinicians, engineers, data scientists, and industry is reshaping the healthcare innovation ecosystem.
This integrated approach ensures every stakeholder's voice contributes to meaningful advancement, guiding resource allocation and partnership development across the life science and medical device sectors. Our findings demonstrate that success requires moving beyond traditional approaches to patient-driven research priorities, augmented design principles for medical device development, and direct engagement between innovators, research participants, industry and healthcare centers throughout the research development cycle. Conclusions: The insights gained through participation in the OC Impact Conference contribute to the ongoing discourse in these fields, emphasizing collaborative efforts to enhance pediatric and adult healthcare outcomes. Clinical Trial: N/A
Background: Nigeria faces severe economic losses ($14 billion annually) and high youth unemployment (33.3%) due to persistent skills gaps, exacerbated by sectoral disparities (e.g., 68% ICT shortages vs. 63% agricultural deficits) and systemic inequities in education and vocational access. Despite growing HRM interventions, empirical evidence on their efficacy remains limited, necessitating a comprehensive review to guide policy. Objective: This study analyzes Nigeria’s sector-specific skills gaps, evaluates the effectiveness of HRM interventions (apprenticeships, digital upskilling, PPPs), and proposes actionable frameworks to align workforce development with labor market demands. Methods: A narrative review of peer-reviewed literature (2015–2023), institutional reports (World Bank, PwC, NBS), and case studies (e.g., Andela’s model) was conducted. Data were synthesized to compare regional benchmarks (Kenya’s TVET, South Africa’s HRM reforms) and Nigeria’s performance (talent readiness score: 42/100). Results: Key findings include: (1) Vocational training (60% readiness) outperforms tertiary education (40%); (2) Apprenticeships and PPPs show high impact (30% job placement increase); (3) Urban-rural and gender disparities persist (women are 30% less likely to access training). Private-sector models demonstrate scalability but require policy support. Conclusions: Nigeria’s skills crisis demands urgent, context-sensitive interventions. Blended strategies (e.g., industry-aligned curricula, gender-inclusive vocational programs) could unlock 5% annual GDP growth. Priorities include: (1) national skills councils to standardize certifications; (2) tax incentives for employer-led training; (3) digital infrastructure for rural upskilling. Closing Nigeria’s skills gaps would mitigate economic losses, reduce inequality, and enhance global competitiveness, transforming its youth bulge into a sustainable demographic dividend.
Background: Central venous catheterization (CVC) is a common procedure performed across medical and surgical wards as well as intensive care units. It provides relatively extended vascular access for critically ill patients in order to administer complex life-saving medications, blood products, and parenteral nutrition.
Major vascular catheterization carries a risk of catheter-related infections and venous thromboembolism. Therefore, it is crucial to follow standardized practices during the insertion and management of CVC in order to minimize infection risks and procedural complications. The aim of central line insertion guidelines is to address the primary concerns related to predisposition to central line-associated bloodstream infections (CLABSI). These guidelines are evidence based, drawn from pre-existing data on CVC insertion.
The most commonly used sites for central venous catheterization are the internal jugular and subclavian veins, in preference to the femoral veins. Catheterization of these vessels enables healthcare professionals to monitor hemodynamic parameters while ensuring lower risks of CLABSI and thromboembolism. The femoral vein is less preferred because of its higher risk of local infection and thromboembolic phenomena.
CVC can be inserted using landmark-guided or ultrasound-guided techniques. Following informed consent, the aseptic technique for CVC insertion includes performing appropriate hand hygiene and ensuring personal protective measures, establishing and maintaining a sterile field, preparing the site with chlorhexidine, and draping the patient in a sterile manner from head to toe. Additionally, the catheter is prepared by pre-flushing and clamping all unused lumens, and the patient is placed in the Trendelenburg position. Throughout the procedure, maintaining a firm grasp on the guide wire is essential; the wire is removed once the catheter is placed. This is followed by flushing and aspirating blood from all lumens, applying sterile caps, and confirming venous placement. The procedure ends with cleaning the catheter site with chlorhexidine and applying a sterile dressing.
Hence, formal training in and knowledge of standardized practices of CVC insertion are essential for healthcare professionals in order to prevent CLABSI. Our audit assesses the current practices of doctors working at a tertiary care hospital to analyze their background knowledge of standard practices to prevent CLABSI during insertion of CVC. Objective: This study aimed to audit and re-audit residents’ practices of central venous line insertion in the medical and nephrology units of a tertiary care hospital in Rawalpindi, Pakistan, and to assess residents’ adherence to the checklist and practice guidelines for CVC insertion implemented by Johns Hopkins Hospital and the American Society of Anesthesiologists (ASA). Methods: This audit was conducted as a cross-sectional, direct observational study and two-phase quality improvement project in the medical and nephrology units of a tertiary care hospital in Rawalpindi from December 2023 to February 2024.
After informed consent was obtained from patients and residents, CVC insertion in 34 patients by 34 individual residents was observed. Observers were given a purpose-designed observational tool, based on the Johns Hopkins Medicine checklist and the ASA practice guidelines for central line insertion, for assessment of residents’ practices.
The first part contained questions on residents’ demographic details (age, gender, year of postgraduate training, and parent department) and procedure-related data (date and time of the procedure, whether the need for CVC was discussed during rounds, site of CVC insertion, catheter type, and type of procedure: landmark-guided or ultrasound-guided CVC insertion). The second part was a direct observational checklist, based on the checklist for prevention of intravascular catheter-associated bloodstream infections, to audit residents’ practices during CVC insertion, including: adequate hand hygiene before insertion, adherence to aseptic techniques, use of sterile personal protective equipment and a sterile full-body drape, and choice of the insertion site that minimizes infection risk based on patient characteristics.
Parameters observed to be performed completely were scored "1", and items not done were scored "0". The cumulative percentage of performed practices according to the checklist was considered satisfactory if it was 80% or more and unsatisfactory if it was less than 80%.
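As a minimal sketch, the scoring rule described above (1 point per completely performed item, 0 otherwise, with an 80% satisfactory threshold) might be coded as follows. The function and item names are illustrative assumptions, not taken from the study's actual tool.

```python
def checklist_score(observations):
    """Score a CVC-insertion observation checklist.

    observations: dict mapping checklist item -> bool (performed completely).
    Returns (cumulative percentage, verdict), where the verdict is
    'satisfactory' at 80% or above, per the audit's threshold.
    """
    score = sum(1 for done in observations.values() if done)  # 1 per item done
    pct = 100.0 * score / len(observations)
    verdict = "satisfactory" if pct >= 80 else "unsatisfactory"
    return pct, verdict

# Example observation: 4 of 5 items performed completely -> 80%, satisfactory.
example = {
    "hand_hygiene": True,
    "aseptic_technique": True,
    "sterile_ppe": True,
    "full_body_drape": False,
    "optimal_site_choice": True,
}
pct, verdict = checklist_score(example)
```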
After the initial audit, participants were given pamphlets with a checklist incorporating the Johns Hopkins Medicine checklist and the ASA practice guidelines for CVC insertion. The re-audit was performed one month after the audit with the same participants. The results of the audit and re-audit were analyzed using SPSS version 25. Mean ± SD was calculated for quantitative variables, and number (N) and percentage were calculated for qualitative variables. A z-test was applied to the proportions of parameters and test scores to calculate z-scores and P values (<0.05 was significant). Results: Among the 34 participants, 44% belonged to the Nephrology Department and 56% to the Department of Internal Medicine.
32.3% of residents were in their first year of training, 14.7% in their second, 14.7% in their third, 17.6% in their fourth, and 17.6% in their fifth/final year.
47% of the participants were male and 53% were female. Participants were aged 27 to 34 years; the median age at the time of the audit was 29 years.
Landmark-guided CVC insertion was performed via the subclavian vein (73.5%) and the internal jugular vein (26.5%).
Post-audit practices improved from 73.5% to 94%. Conclusions: Our audit found that many residents adopted inadequate practices because of a lack of proper training and institutional guidelines for CVC insertion. Our re-audit demonstrated an improvement in residents' practices following the intervention with educational material. Our study underscores the importance of structured quality improvement initiatives in enhancing clinical practices and patient outcomes.
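The two-proportion z-test used in the analysis above can be sketched with the standard pooled formula. The counts in the example (25/34 vs. 32/34, approximating the reported 73.5% and 94% compliance rates) are illustrative assumptions, since the abstract reports percentages rather than raw counts.

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test, as used to compare audit vs. re-audit
    compliance rates. Returns (z, two-sided P value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided P from the standard normal CDF: Phi(t) = 0.5*(1 + erf(t/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: ~73.5% (25/34) at audit vs. ~94% (32/34) at re-audit.
z, p = two_proportion_ztest(25, 34, 32, 34)
```

With these assumed counts the improvement is statistically significant at the study's 0.05 threshold; in practice one would use a statistical package (e.g., SPSS, as the study did) rather than hand-rolling the test.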