Due to necessary scheduled maintenance, the JMIR Publications website will be unavailable from Wednesday, July 01, 2020 at 8:00 PM to 10:00 PM EST. We apologize in advance for any inconvenience this may cause you.
Who will be affected?
Readers: No access to any of our 28 journals. We recommend accessing our articles via PubMed Central
Authors: No access to the submission form or your user account.
Reviewers: No access to your user account. Please download manuscripts you are reviewing for offline reading before Wednesday, July 01, 2020 at 7:00 PM.
Editors: No access to your user account to assign reviewers or make decisions.
Copyeditors: No access to your user account. Please download manuscripts you are copyediting before Wednesday, July 01, 2020 at 7:00 PM.
JMIR Preprints
A preprint server for pre-publication/pre-peer-review preprints intended for community review as well as ahead-of-print (accepted) manuscripts
Background: Transcatheter aortic valve implantation (TAVI) has become a potential treatment modality for symptomatic patients with severe aortic stenosis (AS) across all surgical risk profiles. However, peri-procedural stroke remains a persistent and serious complication with significant implications for patient outcomes and healthcare systems. Reported incidence ranges between 2% and 7%. As the benefit of cerebral protection devices and the optimal antithrombotic regimen following TAVI remain unclear, understanding the contemporary risk and timing of stroke is important in order to tailor peri- and post-procedural stroke risk reduction strategies. Objective: To evaluate the incidence, timing, and predictors of stroke and transient ischaemic attack (TIA) within 30 days post TAVI in a contemporary real-world all-comers registry. Methods: Consecutive patients undergoing TAVI (n=980) between January 2020 and February 2024 were included in this retrospective study. A stroke diagnosis was made based on the Valve Academic Research Consortium-2 (VARC-2) criteria, defined as a focal or global neurological deficit lasting >24 hours, or <24 hours if haemorrhage or infarct was found on neuroimaging. TIA was defined as a focal or global neurological deficit lasting <24 hours. Those with documented evidence of stroke or TIA were sub-divided into acute (<24 hours post procedure) and subacute (1-30 days post procedure). Patients from outside our catchment area were excluded (n=46) due to the lack of access to patient records. Two patients were excluded as no valve was deployed. Results: A total of 932 patients (41% female, mean age 81.6±6.9 years) were included in the study. TAVI was performed for severe AS in the context of degenerative calcific disease of native valves in 94% (n=873); 6% (n=57) of TAVIs were valve-in-valve procedures, and only one patient was treated for severe stenosis of a congenital (bicuspid) aortic valve.
84% (n=779) had no prior history of stroke and 26% had a history of diabetes mellitus. 60% (n=555) of patients were in sinus rhythm prior to TAVI, 35% (n=326) were in atrial fibrillation or flutter, and 5% (n=51) were in a paced rhythm. Self-expanding valves were implanted in 58% (n=542) of cases and balloon-expanding valves were used in the remainder. The majority of cases were performed transfemorally (96%). Pre-dilatation balloon aortic valvuloplasty was performed in 16% (n=150) of cases and the median procedure time was 76 mins [IQR 66.0, 89.0]. A vascular closure device successfully achieved haemostasis in 94% (n=877) of procedures. The thirty-day incidence of stroke/TIA was 3.2% (n=30), with 35% (n=11) occurring within 24 hours and the majority occurring within the first 48 hours (58%, n=18). The median time from TAVI to stroke/TIA was 1.0 day [IQR 0.0, 3.0]. Most (80%, n=24) were ischaemic strokes, one of which underwent haemorrhagic transformation. Diabetes was the only variable predictive of stroke at 30 days (HR 2.14, 95% CI 1.01-4.56; p=.049) using logistic regression. Conclusions: Most cerebrovascular events occurred early post TAVI. Effective stroke prevention strategies, including optimized antithrombotic regimens and the role of cerebral protection devices, warrant further evaluation. Clinical Trial: n/a
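The univariable effect size reported above can be illustrated with a back-of-the-envelope odds-ratio calculation from a 2x2 table, using Woolf's logit method for the confidence interval. The counts below are hypothetical, chosen only to show the arithmetic; they are not the registry's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (logit) 95% CI.
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (illustration only): 13/242 strokes among
# diabetic patients vs 17/690 among non-diabetic patients.
or_, lo, hi = odds_ratio_ci(13, 229, 17, 673)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The abstract reports an effect from logistic regression; this sketch only reproduces the arithmetic behind a crude, unadjusted association of similar magnitude.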
Journal Description
JMIR Preprints contains pre-publication/pre-peer-review preprints intended for community review (FAQ: What are Preprints?). For a list of all preprints under public review, click here. The NIH and other organizations and societies encourage investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work. JMIR Publications facilitates this by allowing its authors to expose submitted manuscripts on its preprint server with a simple checkbox when submitting an article; the preprint server is also open to non-JMIR authors.
With the exception of selected submissions to the JMIR family of journals (where the submitting author opted in to open peer review, and which are displayed here as well), there is no editor assigning peer-reviewers.
Submissions are open for anybody to peer-review. Once two peer-review reports of reasonable quality have been received, we will send these peer-review reports to the author, and may offer transfer to a partner journal, which has its own editor or editorial board.
The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
If authors want to have the paper considered/forwarded only to specific journals (e.g., JMIR, PLOS, PeerJ, BMJ Open, Nature Communications) after peer review, please specify this in the cover letter. Simply rank the journals and we will offer the peer-reviewed manuscript to their editors in the order of your ranking.
If authors do NOT wish to have the preprint considered in a partner journal (or a specific journal), this should be noted in the cover letter.
JMIR Preprints accepts manuscripts at no cost and without any formatting requirements (although if you intend the submission to be published eventually by a specific journal, it is advantageous to follow its instructions for authors). Authors may even take a WebCite snapshot of a blog post or "grey" online report. However, if the manuscript has already been peer-reviewed and formally published elsewhere, please do NOT submit it here (this is a preprint server, not a postprint server!).
JMIR Preprints is a preprint server and "manuscript marketplace" with manuscripts that are intended for community review. Great manuscripts may be snatched up by participating journals, which will make offers for publication. There are two pathways for manuscripts to appear here: 1) a submission to a JMIR or partner journal, where the author has checked the "open peer-review" checkbox, and 2) direct submissions to the preprint server.
For the latter, there is no editor assigning peer-reviewers, so authors are encouraged to nominate as many reviewers as possible and to set the setting to "open peer-review". Nominated peer-reviewers should be at arm's length. It also helps to tweet about your submission or post it on your homepage.
For pathway 2, once a sufficient number of reviews has been received (and they are reasonably positive), the manuscript and peer-review reports may be transferred to a partner journal (e.g. JMIR, i-JMR, JMIR Res Protoc, or other journals from participating publishers), whose editor may offer formal publication if the peer-review reports are addressed. The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
For pathway 2, if authors do not wish to have the preprint considered by a partner journal (or a specific journal), this should be noted in the cover letter. Likewise, if you want the paper considered/forwarded only to specific journals (e.g., JMIR, PLOS, PeerJ, BMJ Open, Nature Communications), please specify this in the cover letter.
Manuscripts can be in any format. However, an abstract is required in all cases. We highly recommend formatting the references in JMIR format (including a PMID), as our system will then automatically assign reviewers based on the references.
Background: Traditional cancer patient education is limited by issues related to timeliness, accessibility, and personalization. Social media has emerged as a novel platform for cancer patient education, attracting significant attention in recent years. However, a comprehensive assessment of the current research landscape in this domain is lacking. Objective: The aim was to identify research hotspots and trace the evolutionary trajectory of social media-based cancer patient education, and map leading journals, institutions, and global collaboration networks. Methods: Bibliometric tools, including VOSviewer, Bibliometrix, and CiteSpace, were utilized to analyze publication trends, author and institution collaboration networks, keyword co-occurrence, factor analysis, and thematic clusters. A total of 119 publications were retrieved from the Web of Science database, spanning the period from 2011 to 2025. Results: The Journal of Medical Internet Research has emerged as the preeminent journal in this field, boasting the highest publication volume. The University of Minnesota topped the list in terms of institutional productivity. The United States dominated the research landscape, with five of the top ten most productive institutions located in the U.S., and also led the international collaboration network. Keyword analysis revealed an evolution from social media-based cancer patient education toward more differentiated and interdisciplinary integration. Three distinct research phases were identified, along with five pivotal research themes: (1) cancer patient education across different social media platforms; (2) methods of cancer patient education and quality-of-life interventions via social media; (3) psychological and social support for cancer patients; (4) health education needs of specific patient groups; and (5) cancer information quality and misinformation detection on social media platforms.
Future research should prioritize intelligent information governance and the development of precise education systems, technology-inclusive and differential-need-responsive strategies, and narrative therapy and interdisciplinary theoretical innovation to advance cancer patient education toward personalization, equity, and comprehensive coverage. Conclusions: This study represents the first bibliometric analysis of social media-based cancer patient education, providing actionable insights to optimize digital health literacy strategies and promote patient-centered, equitable healthcare.
Background: Smart healthcare systems are increasingly integrated into clinical practice to enhance efficiency and care quality. However, differences in perspectives between clinical users and system developers often result in perception gaps that hinder successful implementation. These misalignments can reduce user satisfaction, limit system reliance, and compromise overall adoption. A deeper understanding of these perceptual differences is crucial to improving health IT design and sustainability. Objective: This study aims to explore perception gaps between clinical users and system developers regarding key system success constructs in smart healthcare environments. Specifically, it investigates how these differences affect user satisfaction and dependence, and examines whether perceived risk moderates the relationship between information quality and system dependence. Methods: A cross-sectional survey was conducted in a regional hospital in Taiwan, with 289 valid responses collected—266 from clinical users and 23 from system developers. The survey instrument measured constructs from the Information System Success Model (ISSM), including system functionality, information quality, facilitating conditions, social influence, user dependence, user satisfaction, and perceived risk. Multiple regression and moderation analyses were employed to examine relationships among variables. Results: Significant perceptual differences were identified between clinical users and developers in terms of system functionality, information quality, user satisfaction, and perceived risk. Regression analyses revealed that system functionality, information quality, facilitating conditions, and user dependence were significant predictors of user satisfaction. Additionally, information quality and social influence positively influenced user dependence. Perceived risk was found to moderate the relationship between information quality and user dependence. 
Conclusions: The findings support the extended ISSM framework in the context of smart healthcare and underscore the importance of participatory system design, transparent communication regarding system risks, and institutional support. Addressing perceptual gaps between key stakeholders is essential for fostering trust, enhancing user engagement, and promoting the sustainable adoption of health IT systems in clinical settings.
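The moderation effect described above (perceived risk moderating the link between information quality and system dependence) is conventionally tested by adding a product term to a regression model. The sketch below is a minimal illustration on synthetic data; the variable scales, effect sizes, and negative direction of the interaction are assumptions for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 289  # mirrors the survey's sample size; the data are synthetic
info_quality = rng.normal(0, 1, n)
perceived_risk = rng.normal(0, 1, n)
# Simulate dependence with a negative interaction: higher perceived risk
# weakens the information-quality effect (an assumed direction).
dependence = (0.5 * info_quality - 0.3 * perceived_risk
              - 0.4 * info_quality * perceived_risk
              + rng.normal(0, 0.5, n))

# Moderated regression: y ~ x + m + x*m (predictors already mean-centered)
X = np.column_stack([np.ones(n), info_quality, perceived_risk,
                     info_quality * perceived_risk])
beta, *_ = np.linalg.lstsq(X, dependence, rcond=None)
print("interaction coefficient:", round(beta[3], 2))
```

A non-zero product-term coefficient indicates that the slope of information quality on dependence changes with the level of perceived risk, which is the statistical signature of moderation.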
Background: Emergency departments (EDs) routinely screen for fall risk, but patients are rarely notified of their results or referred to prevention resources, often due to competing clinical demands. Chatbots can be used to provide patient education and community resources in a conversational, friendly manner that does not add to clinician workload. We developed and implemented an automated intervention using our health system’s artificial intelligence (AI) chatbot, Livi, to address this gap in fall prevention across 17 EDs. Objective: The objective of this study is to outline the process of developing a tool that automatically connects older ED patients who screen high risk for falls to fall prevention resources near their homes using an AI chatbot. Methods: We worked with electronic health record (EHR) and ED operations teams to embed a process that delivers a quick response (QR) code in the After Visit Summary of high-risk patients. Scanning the QR code launches a conversation with Livi, guiding users to evidence-based, free or low-cost fall prevention resources where they live. We conducted rapid, iterative usability testing of the Livi falls chatbot with community members (n=93) during the development process. Results: Rapid iterative testing led to enhancements such as increased font size, the option for Spanish language, additional geographic locations for fall prevention resources, home modification resources, the ability to self-assess for fall risk, fall prevention tips, and the ability for patients to leave feedback on the Livi chatbot. Because all EDs in the health system use the same instance of Epic, the EHR workflow was deployed system-wide instantaneously. The use of a QR code linked with Livi also allows for rapid updating of prevention resources.
Conclusions: This scalable, EHR-integrated intervention demonstrates a novel approach to improving population health by capitalizing on existing clinical workflows and automating both risk notification and personalized resource referral for older adults without increasing clinician burden.
Background: The COVID-19 pandemic has intensified mental health issues globally, highlighting the urgent need for remote mental health monitoring. Digital phenotyping using smart devices has emerged as a promising approach, but it remains unclear which features are essential for predicting depression and anxiety. Objective: This systematic review aimed to identify the types of features collected through smartphones, Actiwatch devices, smartbands, and smartwatches, and to determine which features are essential for mental health monitoring. Methods: A systematic review was conducted following the PRISMA 2020 guidelines. Searches were performed across Web of Science, PubMed, and Scopus on February 5, 2025. Inclusion criteria comprised quantitative studies involving adults (≥19 years) using smart devices to predict depression or anxiety based on passive data collection. Studies focusing solely on smartphones or qualitative designs were excluded. Risk of bias was assessed using the Mixed Methods Appraisal Tool (MMAT) and Quality Criteria Checklist (QCC). The results were synthesized descriptively. Results: From 1,382 records, 22 studies were included. These studies were conducted in various countries and involved diverse populations, from clinical patients to community samples. The most frequently utilized features were derived from accelerometers (ACC), such as step counts and activity levels, followed by heart rate (HR) features. Sleep-related features were also important, especially in studies using smartbands and smartwatches. However, features such as peripheral capillary oxygen saturation (SpO₂) and blood volume pulse (BVP) were rarely used. Smartphone-derived features (e.g., call logs, phone usage) were underutilized in smartwatch studies, likely due to data access restrictions. Conclusions: ACC and HR-derived features are essential in digital phenotyping for mood disorder prediction.
Sleep features should be emphasized more, particularly in Actiwatch-based studies. Improving data accessibility and establishing standard reporting guidelines are crucial for advancing this field. Despite the strengths of this review, variability in feature definitions, differences in study designs, and a lack of standardized reporting hindered direct comparisons across studies, making a meta-analysis infeasible. Clinical Trial: This review was registered in the Open Science Framework.
Background: As risk prediction models based on machine learning (ML) for venous thromboembolism (VTE) in patients continue to increase, the quality and applicability of these models in practice and future research remain unknown. How ML predicts VTE and how many factors are selected have been research hotspots in VTE prediction. Objective: To systematically review the relevant literature on the predictive value of machine learning for venous thromboembolism. Methods: A comprehensive literature search was conducted across multiple databases, including PubMed, Web of Science, MEDLINE, Embase, CINAHL, and the Cochrane Library, for relevant studies on predictive models for venous thromboembolism. The novel Prediction Model Risk of Bias Assessment Tool (PROBAST) was used to assess the risk of bias of the ML models. Sensitivity, specificity, and area under the curve (AUC) were used to evaluate the performance of the prediction models and to investigate the predictive value of machine learning for venous thromboembolism. Results: Twenty-seven studies were included in the systematic review. The pooled sensitivity, specificity, and area under the curve were 0.79 (95% CI 0.78-0.80), 0.82 (95% CI 0.81-0.82), and 0.8774, respectively. The studies were found to exhibit a considerable degree of bias, primarily due to shortcomings in the handling of missing data and the reporting of the study design. Age was used more often than other factors in the prediction models. Random Forest (RF) was the superior ML model in predicting venous thromboembolism. Conclusions: Machine learning was effective in predicting venous thromboembolism in patients and may provide a reference for the development or updating of subsequent scoring systems. Clinical Trial: This systematic review and meta-analysis were conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and registered with PROSPERO (CRD420251041604).
Background: Most people living with dementia experience behavioural and psychological symptoms of dementia (BPSD), leading to poor quality of life and hospitalisations, and causing a significant burden for informal caregivers and healthcare systems, with a global lack of support to manage these symptoms at home. Telephones can potentially improve the accessibility and flexibility of long-term dementia support. This review evaluates the effectiveness of telephone interventions in managing BPSD for community-dwelling patients with dementia and their informal caregivers.
Methods: A systematic search of four databases (MEDLINE, Embase, PsycInfo, and SCOPUS) was conducted. We included studies of telephone interventions with no blended component (i.e., no other technologies or in-person elements), and with outcomes assessing the impact of these interventions on people with dementia, informal caregivers, and hospitalisations using quantitative measures.
Results: Of 4,355 studies screened, 12 met the inclusion criteria. Studies were conducted in five high-income countries, and the majority were randomised controlled trials (RCTs), with two non-RCTs and two pre-post intervention studies. Interventions included telephone coaching calls, psychosocial and educational support calls, and online platforms. Most studies showed an improvement in BPSD and BPSD-related burden, with four studies indicating significant improvements in BPSD-related caregiver burden. Nine studies reported reduced BPSD, and five out of 12 showed a statistically significant decrease in these symptoms. One study considered BPSD-related hospital admissions, reporting a statistically significant decrease in admission rates.
Conclusions: Telephone interventions delivered through psychosocial and educational calls and online platforms are promising tools for reducing BPSD-related caregiver burden. Personalised telephone interventions, including patients and informal caregivers in the treatment plan, may reduce BPSD severity and frequency. However, further research and application of the interventions in low- and middle-income countries with longer follow-up periods, including cost-effectiveness analyses, is required to establish the global generalisability of these interventions and inform future practice.
Background: Recent advances in large language models (LLMs) such as GPT-3/4 have spurred development of AI chatbots and advisory tools in medicine. These systems are posited to assist or augment physician–patient communication, potentially improving empathy, clarity, and responsiveness. However, their actual impact on communication outcomes remains uncertain. Objective: To systematically review and meta-analyze peer-reviewed studies (2020–2025) evaluating how LLM-based interventions affect physician–patient communication, including empathy, clarity, trust, and patient understanding. Methods: Following PRISMA 2020 guidelines, we searched PubMed/MEDLINE for studies published from 2020 to 2025 examining LLM or chatbot applications in clinical communication contexts. Eligible designs included randomized, observational, cross-sectional, and qualitative studies. Two reviewers independently screened titles/abstracts, assessed full texts, and extracted data on study design, population, LLM type, communication measures, and outcomes. We conducted a qualitative synthesis and random-effects meta-analysis, reporting pooled standardized mean differences (SMD) or odds ratios (OR) with 95% confidence intervals (CI). Results: From 312 records, 10 studies were included, all quantitative and predominantly cross-sectional. Populations ranged from patients with chronic conditions to healthcare professionals and laypersons. Outcomes assessed included empathy (7 studies), clarity/information quality (6), satisfaction or usefulness (4), and trust perceptions (2). In six direct comparisons of AI- versus physician-generated responses, LLMs were rated significantly higher in empathy in five studies. One large study found chatbot replies were judged empathetic in 45.1% of cases versus 4.6% for physician replies (OR ~9.8, P<.001). Similarly, ChatGPT-4 answers scored higher in empathy on a 5-point scale than human-written responses (mean 4.18 vs 2.70, P<.001).
One neurology study showed higher empathy scores (CARE scale +1.38, P<.01) for ChatGPT answers. Only one study found no significant empathy difference. LLM content was also longer and more information-rich, improving patient-perceived clarity and understanding. In one study, GPT-4 simplified pathology reports, increasing patient comprehension scores (7.98 vs 5.23/10, P<.001) and reducing consultation time by 70%. However, AI replies were sometimes less concise or less readable for low-literacy patients. In pooled analyses (4 studies, n=2,604), LLMs showed a large positive effect on empathy (SMD +1.05, 95% CI 0.45–1.65) and improved understanding (SMD +0.82, 95% CI 0.30–1.34). Patient satisfaction results were mixed. No study directly assessed long-term trust. Conclusions: Current evidence suggests LLM-based chatbots can enhance physician–patient communication by producing more empathetic, detailed, and understandable responses. These improvements may positively influence patient experience and engagement. However, LLMs may also generate overly lengthy or occasionally inaccurate advice, emphasizing the need for physician oversight. While meta-analytic findings are promising, robust randomized, controlled trials are needed to confirm benefits, assess trust outcomes, and define optimal clinical integration strategies.
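Pooled effect sizes such as the SMDs above are typically obtained with a random-effects model. The sketch below implements the standard DerSimonian-Laird estimator in plain Python; the per-study effects and variances are hypothetical, for illustration only, not the review's data:

```python
import math

def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    w = [1 / v for v in variances]                      # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study SMDs and their variances (illustration only):
smd, lo, hi = dl_pool([1.4, 0.8, 1.2, 0.7], [0.04, 0.02, 0.06, 0.03])
print(f"SMD {smd:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

When Cochran's Q does not exceed its degrees of freedom, tau² collapses to zero and the estimator reduces to the fixed-effect (inverse-variance) pooled mean.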
Background: Exergames have emerged as promising interventions for promoting physical activity and preventing type 2 diabetes (T2D), especially among older adults. Kinect-based exergames have been shown to enhance exercise adherence and support positive health outcomes. However, their high cost and reliance on specialized hardware limit widespread home-based adoption. Recent advancements in computer vision enable the use of monocular-camera-based systems, offering a potentially scalable and cost-effective alternative. Objective: This study aimed to evaluate the feasibility and user experience of monocular-camera-based exergames as a home-based physical activity intervention for older adults at risk for T2D. Methods: Forty-five community-dwelling older adults (aged 60–74) at high risk for T2D were recruited and randomized into three groups (n = 15 each): (1) Control (traditional offline exercise), (2) Kinect-based exergame, and (3) Monocular-camera-based exergame. Participants engaged in a six-week intervention, completing three 30-minute home sessions per week. Primary outcomes included exercise performance (heart rate and perceived fatigue) and intrinsic motivation; secondary outcomes included perceived enjoyment, challenge, and usability. One-way ANOVA was used for analysis. Results: Exercise performance was comparable across all groups, with no significant differences in heart rate or fatigue levels (p > 0.05). Intrinsic motivation was significantly higher in the Kinect (M = 35.13, SD = 3.20) and Monocular (M = 34.00, SD = 4.41) groups compared to the Control group (M = 26.06, SD = 1.87; p < 0.001), with no difference between the two exergame groups (p = 0.443). While most user experience measures showed no significant group differences, the Monocular group reported a higher perceived challenge (M = 3.45) than the Kinect group (M = 2.96; p = 0.012). 
Conclusions: Monocular-camera-based exergames are a feasible and effective solution for promoting physical activity among older adults at risk for T2D. They provide motivational and experiential benefits comparable to Kinect-based systems while requiring less costly and more accessible equipment. These findings support the potential of monocular systems as scalable tools for home-based chronic disease prevention. Clinical Trial: The trial was registered at ClinicalTrials.gov (NCT06950528).
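The between-group comparison described above (one-way ANOVA across the three arms) can be sketched as follows. The score arrays are simulated to roughly match the reported means and SDs for intrinsic motivation; they are illustrative only, not the study's data.

```python
# One-way ANOVA across three intervention arms (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control   = rng.normal(26.06, 1.87, 15)  # traditional offline exercise
kinect    = rng.normal(35.13, 3.20, 15)  # Kinect-based exergame
monocular = rng.normal(34.00, 4.41, 15)  # monocular-camera exergame

# Test whether mean intrinsic motivation differs across groups
f_stat, p_value = stats.f_oneway(control, kinect, monocular)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With group means this far apart relative to their SDs, the omnibus test is expected to be significant, after which pairwise post-hoc tests (as reported) locate which groups differ.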
Background: Frozen shoulder is a painful and disabling condition affecting approximately 5% of the population, often leading to prolonged impairment and incomplete recovery. While physiotherapy is the mainstay of treatment, adherence remains suboptimal. Gamification—integrating game elements into rehabilitation—has shown potential to enhance motivation and accessibility, though its application in frozen shoulder management remains underexplored. Objective: To evaluate the efficacy of a fully gamified, self-directed home rehabilitation program (HomeRehab) using a laptop-based platform in improving shoulder function, pain, quality of life, sleep, and range of motion in patients with frozen shoulder. Methods: This pilot, single-group, pretest-posttest quasi-experimental study enrolled 20 patients diagnosed with unilateral frozen shoulder. Participants used a customised version of the Rehaboo! platform incorporating physiotherapist- and surgeon-guided exercises. Outcomes measured at baseline, 6, 12, and 24 weeks included the Oxford Shoulder Score (OSS), Disabilities of the Arm, Shoulder, and Hand (DASH), EQ-5D, Pittsburgh Sleep Quality Index (PSQI), and goniometric range of motion (RoM). Statistical analysis was conducted using Friedman tests and Wilcoxon signed-rank tests with Holm-Bonferroni correction. Results: 17 patients were included in the final analysis. The mean age was 58.2 years (SD = 8.9), with 11 males and 6 females. Over the 24-week period, participants demonstrated statistically and clinically significant improvements in several domains. The mean Oxford Shoulder Score (OSS) improved from 29.2 to 16.5 (p = .010), and the mean Disabilities of the Arm, Shoulder and Hand (DASH) score decreased from 63.4 to 41.1 (p = .010). Health-related quality of life also improved, with the EQ-5D score decreasing from 7.2 to 5.5 (p = .030). Sleep quality improved, as indicated by a reduction in Pittsburgh Sleep Quality Index (PSQI) scores from 5.7 to 3.1 (p = .030). 
Shoulder range of motion improved across all planes—abduction (95° to 132°), external rotation (23° to 48°), internal rotation (32° to 49°), and forward flexion (114° to 144°)—though these changes were not statistically significant. The gains exceeded minimal clinically important differences for OSS, DASH, and EQ-5D. Conclusions: This study supports the potential of a gamified, self-led rehabilitation program to deliver meaningful improvements in function, symptoms, and quality of life for individuals with frozen shoulder. Gamification may enhance accessibility and adherence, offering a promising alternative to traditional physiotherapy. Larger, controlled trials are needed to confirm non-inferiority and long-term efficacy.
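The repeated-measures analysis described in Methods — a Friedman test across the four time points, followed by Wilcoxon signed-rank tests with Holm-Bonferroni correction — can be sketched as below. The OSS-like scores are simulated for demonstration and are not the trial's measurements.

```python
# Friedman test + Holm-Bonferroni-corrected Wilcoxon tests (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 17
baseline = rng.normal(29, 4, n)
week6    = baseline - rng.normal(4, 2, n)
week12   = baseline - rng.normal(8, 2, n)
week24   = baseline - rng.normal(12, 3, n)

# Omnibus test across all four time points
chi2, p_overall = stats.friedmanchisquare(baseline, week6, week12, week24)

# Pairwise Wilcoxon signed-rank tests vs. baseline, Holm step-down adjusted
raw_p = [stats.wilcoxon(baseline, t).pvalue for t in (week6, week12, week24)]
order = np.argsort(raw_p)
adj_p = [0.0] * len(raw_p)
running_max = 0.0
for rank, idx in enumerate(order):
    adj = min(1.0, (len(raw_p) - rank) * raw_p[idx])
    running_max = max(running_max, adj)  # enforce monotonicity
    adj_p[idx] = running_max

print(f"Friedman chi2 = {chi2:.2f}, p = {p_overall:.4f}")
print("Holm-adjusted p-values:", [f"{p:.4f}" for p in adj_p])
```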
Background: Serious games are increasingly recognized as effective tools for healthcare interventions, particularly for adolescents with behavioral and developmental needs. However, inconsistent design frameworks and limited integration of theoretical concepts challenge their scalability and impact. Understanding how these concepts are applied in serious game design is essential for enhancing their real-world impact. Objective: The objective of this systematic review is to examine the current state of the art in the use of serious gaming interventions in healthcare for adolescents with behavioral or developmental issues. The review focuses on elucidating the elements involved in how these games are designed and can contribute to learning. The review is conducted from the theoretical framework perspectives of boundary crossing, transfer, and a model of reality. Methods: A total of five databases (PubMed, Scopus, ERIC, PsycINFO, and EMBASE) were searched for relevant titles and abstracts. These databases were selected as relevant because they cover a wide range of published research in health and social science. Results: A total of 34 relevant studies were included in the review, covering a range of serious gaming artefacts, with the objective of identifying learning or development opportunities for adolescents with behavioral or developmental issues. Conclusions: This review highlights the transformative potential of serious games in healthcare, particularly for individuals with developmental and behavioral needs, by fostering skill acquisition, collaboration, and real-world application. Despite their potential, the development of serious games requires a more structured integration of theoretical frameworks to ensure scalability, replicability, and sustained impact. Future research should prioritize standardized methodologies, longitudinal evaluations, and a focus on enhanced collaboration.
Background: The medical interview remains a cornerstone of clinical training. There is growing interest in applying generative artificial intelligence (AI) in medical education, including medical interview training. However, its utility in culturally and linguistically specific contexts, including Japanese, remains underexplored. This study investigated the utility of generative AI for Japanese medical interview training. Objective: This pilot study aimed to evaluate the utility of generative AI as a tool for medical interview training by comparing its performance with that of traditional face-to-face training methods using a simulated patient. Methods: We conducted a randomized crossover pilot study involving 20 postgraduate year 1-2 physicians from a university hospital. Participants were randomly allocated into two groups. Group A began with an AI-based station on a case involving abdominal pain, followed by a traditional station with a standardized patient presenting chest pain. Group B followed the reverse order, starting with the traditional station for abdominal pain, followed by an AI-based station for the chest pain scenario. In the AI-based stations, participants interacted with a GPTs-configured platform that simulated patient behaviors. GPTs are customizable versions of ChatGPT adapted for specific purposes. The traditional stations involved face-to-face interviews with a simulated patient. Both groups used identical, standardized case scenarios to ensure uniformity. Two independent evaluators, blinded to the study conditions, assessed participants' performances using six defined metrics: patient care and communication, history taking, physical examination, accuracy and clarity of transcription, clinical reasoning, and patient management. A 6-point Likert scale was employed for scoring. Discrepancies between the evaluators were resolved through discussion.
To ensure cultural and linguistic authenticity, all interviews and evaluations were conducted in Japanese. Results: AI-based stations scored lower than traditional stations across most categories, particularly in patient care and communication (4.48 vs. 4.95, P=.009). However, AI-based stations demonstrated comparable performance in clinical reasoning, with a non-significant difference (4.43 vs. 4.85, P=.10). Conclusions: The comparable performance of generative AI in clinical reasoning highlights its potential as a complementary tool in medical interview training. One of its main advantages lies in enabling self-learning, allowing trainees to independently practice interviews without the need for simulated patients. Nonetheless, the lower scores in patient care and communication underline the importance of maintaining traditional methods that capture the nuances of human interaction. These findings support the adoption of hybrid training models that combine generative AI with conventional approaches to enhance the overall effectiveness of medical interview training in Japan. Clinical Trial: UMIN-CTR UMIN000053747; https://center6.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000061336.
Background: Patient safety is essential to the quality of care given to patients, and it remains a challenge for countries at all stages of development. There appears to be a common acceptance of the necessity of building a patient safety culture within health care organizations. Hospitals with a positive patient safety culture are transparent and fair with staff when incidents occur, learn from mistakes, and, rather than blaming individuals, look at what went wrong in the system. Health care providers are willing to report errors but, due to poor reporting systems and a culture of blame and shame, disclosure of adverse events remains a struggle. Objective: This study aimed to assess incident reporting behavior and associated factors among nurses working in public hospitals in Addis Ababa, Ethiopia, 2024. Methods: A cross-sectional, institution-based study was conducted with a total of 233 randomly selected participants drawn from six public hospitals in Addis Ababa between July 16 and September 16, 2024. A structured interviewer-administered questionnaire and an observational checklist based on previous studies were employed for data collection. Bivariate and multivariate analyses used a binary logistic regression model to determine the relationships between the dependent and independent variables, and the strength of association was expressed as adjusted odds ratios (AOR) with 95% confidence intervals (CI) at a p-value of <0.05. Results: A total of 245 study subjects were recruited, of whom 233 were interviewed, yielding a response rate of 95.8%. Of the 233 participants, 162 (69.5%) were female and 145 (62%) held a degree. The largest group of study participants reported having 6-10 years of experience in the hospital (53.5%) and in the current unit (40%).
Additionally, degree-holding nurses had 3.027 times greater odds of reporting a patient safety incident compared to diploma nurses (AOR: 3.027; 95% CI: 1.736-5.279). Nurses who reported more than 5 years of experience (31.7%) had 1.71 times greater odds of reporting safety incidents compared to nurses with less than 5 years of experience (AOR: 1.71; 95% CI: 1.236-2.379). Conclusions: The safety incident reporting culture score of participants was less than 70%. Training on patient safety and incident reporting positively affects reporting. Clear guidelines should be put in place on patient safety and incident reporting, and focus should be given to training. Clinical Trial: Safety culture, reporting, among nurses, Addis Ababa.
Background: The field of digital health has grown rapidly, offering transformative potential to improve healthcare access. However, older women face unique challenges in utilizing digital health interventions, such as socioeconomic disparities and the dual burden of chronic conditions, limiting their access to tailored services and exacerbating health inequalities. Existing research rarely focuses on older women in community settings or the factors influencing gender equality in digital health. A comprehensive understanding of their challenges in accessing and using digital health resources is lacking. Objective: To explore the factors influencing gender equality in digital health for community-dwelling older women. Methods: In this descriptive, qualitative study, 19 community-dwelling older women participated in semi-structured interviews. Both conventional and directed content analyses were used to code data and determine themes. Results: Four themes were identified. 1) Values and beliefs: the relevance of adaptive self-management. Older women often face stigma related to gynecological disorders, impacting their awareness and willingness to discuss these issues. 2) Digital solutions as a buffer between older women's health and the role of family caregivers. Digital solutions provide flexibility, reduce travel requirements, and promote accountability in care. 3) Preference for person-centered digital solutions and implementation processes. Participants prefer simplified, age-appropriate designs, particularly favoring Microsoft applets, and seek tools for self-directed use. 4) Relationship-embedded digital ecology. Online peer support and remote assistance are essential for their health, as is enhanced support for family physicians, highlighting the potential for accessible healthcare through digital platforms. Conclusions: This study underscores the conflict between self-care and caregiving among older women and shows how digital technology can help ease this tension.
By addressing the needs of women caregivers and leveraging digital tools, we can better support them in managing their health and caregiving duties.
The seemingly endless amount of information available on the internet at the touch of a few buttons has increasingly served as a resource for individuals to find health information over the last 20+ years. Google Trends data show that the number of searches for the common primary care symptoms “cough”, “sore throat”, and “stomach pain” in the United States grew by 208%, 290%, and 490%, respectively, between 2004 and 2019. However, over the same period, United States population-adjusted outpatient visits for cough and sore throat decreased by 41.5% and 40%, respectively, while stomach pain visits remained unchanged. This suggests that, on a population level, people found online health information about some common, acute symptoms reassuring or informative enough not to feel the need to seek care from a primary health care provider. With the rapid evolution and availability of more detailed and personalized information from various large language models, it is likely that the internet search habits of users will continue to grow and, with them, continue to transform interactions with the healthcare system.
Lung cancer continues to pose a global health burden, with delayed diagnosis contributing significantly to mortality. This study aimed to identify the most predictive behavioural, physiological, and psychosocial factors associated with lung cancer in a young adult population using a multivariate logistic regression framework. A dataset of 276 respondents was analysed after removing duplicates from an original sample of 309. The dependent variable was self-reported lung cancer status, while independent variables included smoking behaviour, symptoms such as fatigue and coughing, and indicators of chronic disease and psychosocial stress. Univariate and bivariate analyses were conducted prior to model development. Nine predictors demonstrated statistical significance and were retained in the final model. The model exhibited strong predictive performance, achieving an AUC of 0.9625 and Tjur’s R² of 0.566, with no evidence of multicollinearity among predictors. Fatigue, chronic disease, coughing, and swallowing difficulty emerged as the most influential risk factors, while smoking had a comparatively smaller effect size, likely due to the young age profile of participants. Peer pressure and yellow fingers were also significant, offering novel contextual insights into behavioural risk adoption. The findings support the integration of multidimensional, low-cost, self-reported indicators into lung cancer screening protocols, especially in resource-limited settings. This study provides a data-driven foundation for developing early detection models and public health interventions tailored to younger populations. Future research should incorporate longitudinal and biomarker data to enhance causal inference and predictive accuracy.
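The two performance figures reported above — AUC and Tjur's R² — can be computed from a fitted logistic model's predicted probabilities as sketched below. Tjur's R² is simply the mean predicted probability among cases minus the mean among non-cases. The data here are simulated (nine generic predictors), not the survey dataset.

```python
# AUC and Tjur's R^2 for a logistic regression (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 276
X = rng.normal(size=(n, 9))              # nine retained predictors
beta = np.linspace(0.3, 1.2, 9)          # assumed true effects
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta - 0.5))))

model = LogisticRegression(max_iter=1000).fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

auc = roc_auc_score(y, p_hat)
# Tjur's R^2: separation of predicted probabilities between outcomes
tjur_r2 = p_hat[y == 1].mean() - p_hat[y == 0].mean()
print(f"AUC = {auc:.3f}, Tjur R^2 = {tjur_r2:.3f}")
```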
Background: Artificial intelligence (AI) is reshaping various aspects of dental practice, from diagnosis to treatment planning. However, the adoption of AI technologies by dental professionals depends on their attitudes, perceptions, and knowledge, which have not been systematically reviewed across a broad spectrum of dental providers. Objective: This systematic literature review (SLR) aims to investigate current evidence on attitudes, perceptions, and knowledge of AI among dental professionals. Methods: A comprehensive search was conducted in four databases (PubMed, Scopus, ScienceDirect, and Google Scholar) up to November 6, 2024. A total of 324 papers were identified, of which 44 studies were included after rigorous screening. Studies were assessed for quality and relevance, focusing on attitudes, perceptions, and knowledge of dental professionals toward AI applications in dentistry. Results: The included papers highlighted a growing interest in AI, with the majority of studies (n=17, 38.6%) published in 2024. Geographically, the bulk of research came from India (n=16, 36.4%) and Saudi Arabia (n=7, 15.9%), indicating a significant focus in these regions. Most studies were cross-sectional (97.7%) and used non-probability sampling (75.0%). Regarding AI applications, the most common focus was on general dentistry AI applications (n=29, 65.9%). Notably, 86.4% of studies reported using validated questionnaires or interviews. Awareness and knowledge of AI varied, with significant gaps highlighted in AI training and curriculum integration. Attitudes and perceptions were generally positive, with a cautious outlook on its integration into clinical practice. Concerns were evident regarding AI's potential to replace human roles, with mixed feelings about AI’s capacity to supplant professional expertise. Conclusions: Dental professionals exhibit a cautiously positive attitude towards AI, recognizing its potential to enhance diagnostic accuracy and treatment outcomes.
However, substantial gaps in knowledge and training underscore the need for enhanced educational programs. The findings advocate for the strategic integration of AI technologies into dental curricula and practice, with a focus on ethical considerations and ongoing professional development to accommodate future technological advancements.
Background: There is an increasingly diverse range of mobile applications (apps) and digital health devices available to help patients manage their health. Despite evidence for the effectiveness of such technologies in specific care contexts, their potential has not been fully realized as adoption remains low. Such limited uptake can have direct implications for the intended benefits of these technologies. Objective: To understand what matters most to US military Veterans when deciding whether to use mobile health apps or devices (i.e., digital health technologies (DHTs)) to manage health. Methods: Longitudinal survey data were collected from a national sample of Veterans who receive care from the Veterans Health Administration (VHA). Results: Among the Veterans included in our analytic cohort (n=857), most (87.0%) reported currently using or having used ≥1 device in the past to manage their health, and most also reported using either VHA or non-VHA health apps (78.3%). Considerations most frequently endorsed as “Very Important” by Veterans when deciding whether to use DHTs included receiving secure messages from their healthcare team (73.2%), knowing their data would inform their care (56.5%), and recommendations from providers (52.6%). Conversely, considerations most frequently endorsed as “Not at All Important” included information on social media (70.5%), community organization support (66.4%), and encouragement from peers (56.7%). Conclusions: Understanding what matters most to patients when they are deciding to adopt a technology for their health can, and should, inform the development of implementation strategies and other approaches to enhance health-related technology use. Our results suggest that, for Veterans, recommendations from healthcare team members and knowing data will be used in clinical care are more important than information from social media, community sources, or peers when deciding to use DHTs.
Based on our findings, direct communication from healthcare team members to patients, either in-person or electronically, should be encouraged to promote DHT adoption and use. Clinical Trial: N/A.
Abstract
Acute promyelocytic leukemia (APL), a subtype of acute myeloid leukemia (AML), is characterized by the t(15;17)(q22;q21) translocation, resulting in the PML/RARα fusion protein. All-trans retinoic acid (ATRA) is an effective treatment for APL. Among the most severe side effects of ATRA is differentiation syndrome. While skin toxicity is common, scrotal lesions, including ulcerations, are rarely reported, and their pathogenesis remains unclear. We present a case of a 24-year-old male diagnosed with APL who developed painful scrotal ulcers on day 23 of ATRA therapy. These ulcers responded to the discontinuation of ATRA and treatment with topical corticosteroids. Discontinuing ATRA can potentially compromise the hematological response, leading most clinicians to continue ATRA in combination with steroid therapy. However, ATRA should be discontinued if steroid therapy fails. Awareness of this rare adverse effect is essential to ensure timely and appropriate therapeutic management.
Background: Recent advances in large language models (LLMs) have enabled the development of multimodal systems capable of interpreting both text and medical images. These models show promise in automating clinical tasks such as diagnostic image review. However, their real-world performance, especially in high-stakes scenarios like detecting COVID-19 pneumonia on chest X-rays (CXRs), remains underexplored. Objective: To assess the diagnostic accuracy of Gemini 2.0, a state-of-the-art multimodal LLM, in detecting COVID-19 pneumonia from CXRs and compare its performance to prior evaluations of ChatGPT-4 Turbo and ChatGPT-4o on the same dataset. Methods: We used the publicly available COVIDx CXR-4 dataset (n=20,000), equally divided between pneumonia-positive and negative cases. Each image was submitted to Gemini 2.0 via its API with a standardized diagnostic prompt. Output responses were analyzed to calculate accuracy, precision, recall, and F1-score. Results were compared with prior benchmark evaluations using ChatGPT models. Results: Gemini 2.0 achieved an overall diagnostic accuracy of 45%. Precision and recall for pneumonia-positive cases were 34% and 11%, respectively. For pneumonia-negative cases, precision was 47% and recall 79%. Compared to ChatGPT-4 Turbo (54.1%) and ChatGPT-4o (61.2%), Gemini 2.0 demonstrated inferior performance on the same dataset. Conclusions: Despite its multimodal capabilities, Gemini 2.0 underperformed compared to other LLMs in detecting COVID-19 pneumonia from CXRs, particularly in sensitivity. These findings underscore the limitations of current multimodal AI systems for clinical imaging and highlight the need for further development and validation prior to deployment in diagnostic settings. Clinical Trial: N/A
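The reported metrics can be reconstructed from a confusion matrix consistent with the stated recalls (11% of positives flagged, 79% of negatives correctly rejected) on the balanced 20,000-image set. The sketch below illustrates that derivation; it is a reconstruction for demonstration, not the study's raw model output.

```python
# Reconstructing accuracy/precision/recall from the reported recalls
# on a balanced 20,000-image set (10,000 positive, 10,000 negative).
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = np.array([1] * 10000 + [0] * 10000)
# 11% of positives predicted positive; 79% of negatives predicted negative
y_pred = np.array([1] * 1100 + [0] * 8900 + [0] * 7900 + [1] * 2100)

acc = accuracy_score(y_true, y_pred)                    # (1100+7900)/20000
prec_pos = precision_score(y_true, y_pred, pos_label=1) # 1100/(1100+2100)
rec_pos = recall_score(y_true, y_pred, pos_label=1)     # 1100/10000
print(f"accuracy={acc:.2f}, precision+={prec_pos:.2f}, recall+={rec_pos:.2f}")
```

These counts reproduce the abstract's figures exactly: 45% accuracy, 34% positive precision, 11% positive recall, and (from the same matrix) 47% negative precision and 79% negative recall.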
Background: AI-driven mobile health (mHealth) applications are emerging as a promising tool for health management, yet little is known about users' psychological perceptions and attitudes towards these technologies. Understanding these aspects is crucial for both the appropriate design and the effective use of these technologies, ensuring the psychological and physical well-being of potential end users. Objective: This study aimed to investigate the attitudes and perceptions of young adults towards a possible use of AI-driven mHealth applications, focusing on the perceived benefits and potential concerns related to their future adoption. Methods: A qualitative focus group methodology was employed. Fifteen participants (12 men, 3 women, mean age 27 years) were recruited. Data were analyzed using thematic analysis to identify key themes influencing engagement with these technologies. Results: Four main themes emerged: “Usability,” which emphasized the importance of user-friendly, personalized experiences; “Innovation and Reliability,” where participants expressed both enthusiasm and skepticism towards AI’s potential; “Affectivity and Interaction with AI,” highlighting mixed opinions on the emotional impact of AI interactions; and “Perceived Risks,” which focused on concerns regarding data privacy and the need for human supervision. These factors contributed to ambivalent attitudes towards AI-driven mHealth apps, with some participants being open to adoption, while others remained cautious. Conclusions: To foster greater engagement with AI-driven mHealth apps, developers should prioritize usability, trust, emotional support, and privacy issues, considering users’ psychological needs and expectations. The findings offer valuable insights for designing more user-oriented mHealth solutions. Further research should explore how perceptions evolve with direct experience and long-term use.
Background: Accurate and accessible measurements of inflammatory biomarkers are crucial for the diagnosis and monitoring of inflammatory diseases. The gold-standard C-reactive protein (CRP) requires venipuncture, which, despite providing high-quality samples, can cause discomfort, anxiety, and pain, particularly in vulnerable populations such as elderly patients. It is also resource intensive, unsuitable for remote or at-home use, and lacks continuous monitoring capability. These limitations restrict patient autonomy and self-management, potentially leading to poorer prognosis due to delays in assessment and medical treatment. As digital health technologies advance, there is increasing interest in leveraging digital biomarkers for remote and real-time monitoring of systemic inflammation (SI). Digital biomarkers derived from non-invasive biofluids could provide a scalable solution for tracking inflammatory status, offering a patient-centered alternative to traditional blood-based assessments. To date, however, there is no consensus on the most suitable modality for assessment or its digitization potential. Therefore, a comprehensive evaluation of the feasibility, reliability, and patient acceptability of non-invasive, digital inflammatory biomarkers is needed. Objective: Our aim is to evaluate the feasibility of various non-invasive methods to assess inflammatory markers and identify the optimal modality for predicting serum CRP levels. Methods: Inflammatory biomarkers were assessed in 20 participants (10 patients with SI, defined as a CRP level >5 mg/l, and 10 controls) using six non-invasive samples (urine, sweat, saliva, exhaled breath, core body temperature, and stool samples) alongside serum samples. Patient preferences were retrieved via a questionnaire.
Mann-Whitney U tests, Spearman’s correlation, and all-subset regression were conducted to assess the relationships between serum and non-serum biomarkers and to identify optimal predictive models for serum CRP levels. Results: CRP levels were significantly elevated in the inflammation group compared to controls in urine (median: 4.5 vs. 0.69 μg/mmol, p=0.001) and saliva (median: 4910 vs. 473 pg/ml, p=0.001). Urine and saliva CRP levels strongly correlated with serum CRP (rsp=0.886, p<0.001; rsp=0.709, p=0.0006). The multi-modal model using urine and saliva CRP predicted serum CRP levels (76.1%), outperforming single-modality models. Patients favored urine and saliva tests over blood tests. Conclusions: Urine and saliva represent promising non-invasive alternatives to traditional blood tests for assessing CRP, enabling more accessible and less invasive diagnostic and monitoring approaches.
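The group comparison and correlation analyses above can be sketched as follows: a Mann-Whitney U test between the inflammation and control groups, and Spearman's rho between a non-invasive marker and serum CRP. The values are simulated with n=10 per group (as in the study) and are not the measured data.

```python
# Mann-Whitney U test and Spearman correlation (simulated CRP data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
serum_crp = np.concatenate([
    rng.uniform(6, 60, 10),    # SI group: CRP > 5 mg/l by definition
    rng.uniform(0.1, 5, 10),   # controls
])
# A non-invasive marker tracking serum CRP with multiplicative noise
urine_crp = serum_crp * rng.uniform(0.8, 1.2, 20)

# Between-group difference in the non-invasive marker
u_stat, p_group = stats.mannwhitneyu(urine_crp[:10], urine_crp[10:])
# Rank correlation between the non-invasive marker and serum CRP
rho, p_corr = stats.spearmanr(urine_crp, serum_crp)
print(f"Mann-Whitney p = {p_group:.4f}; Spearman rho = {rho:.3f}")
```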
Background: Telemedicine is an effective and promising strategy, especially for the initial stages of a home-based therapeutic exercise program. Objective: To investigate the effects of Tai Chi and walking on cognitive function in older adults with Type 2 Diabetes Mellitus (T2DM) using wearable devices in a mobile healthcare model. Methods: The study was a randomized controlled trial (RCT) in which participants were randomized (1:1:1) to receive usual care, fitness walking, or Tai Chi exercise. All indicators were assessed at baseline and 12-week follow-up. Usual care included traditional diabetes education. Participants in the fitness walking group performed walking exercises on a treadmill under the supervision of a researcher three times a week for 12 weeks. Participants in the Tai Chi group practiced 24-form Simplified Tai Chi through live video streaming under the guidance of professors and professionals. Throughout this 12-week program, participants underwent continuous glucose monitoring (CGM) using Guardian Sensor 3 CGM sensors attached to the upper arm. All participants carried bracelets to record their heart rate, sleep parameters, and steps. The primary outcome was the Montreal Cognitive Assessment (MoCA) at 12 weeks. Secondary outcomes included other cognitive subdomain tests and blood metabolic indices. Results: After 12 weeks of intervention, the Tai Chi exercise group showed a significant improvement in MoCA scores from baseline (23.83 [17.79, 25.66] vs. 21.42 [17.11, 24.74], P=0.027). The fitness walking exercise group showed an improvement in MoCA scores (22.94 [18.05, 23.98] vs. 21.58 [17.35, 24.12], P=0.083) that did not reach statistical significance.
Conclusions: In summary, this study showed that web-based exercise therapy may help improve the effectiveness of exercise therapy on cognitive function in older adults with T2DM. Tai Chi showed significant advantages in improving cognitive function and sleep quality, while fitness walking, although also beneficial, was relatively weak in these areas. Clinical Trial: All participants signed an informed consent form, and the Institutional Review Boards of all participating institutions approved the study (2024-013-01). The study was registered on the Chinese Clinical Trial Register, ChiCTR2200057863 (19/03/2022).
Background: While digital innovation, including chatbots, offers a potentially cost-effective means to scale public health programs in low-income settings, user engagement rates remain low. Barriers to participant engagement (eg, perceived difficulty of use, busyness, low levels of digital literacy) may exacerbate inequality when adopting digital-only interventions as alternatives to in-person programs. Objective: This cross-sectional study, nested within a 2x2 clustered factorial trial that followed the Multiphase Optimization Strategy (MOST) principles, investigated the relationship between behavioral determinants (ie, human and socioeconomic characteristics that facilitate the use of digital health interventions) and caregiver intention to use a digital public health intervention, ParentText, an open-source, rule-based parenting chatbot designed to promote positive parenting, improve adolescent health, and reduce risky behaviors. Methods: Caregivers of adolescent girls (aged 10-17 years; N=1,034 caregivers) were recruited by implementation partners from a community-wide project aimed at HIV prevention in two districts of Mpumalanga, South Africa. A Digital Health Engagement Model (DHEM) was adapted from the Technology Acceptance Model, the PEN-3 model, and the Theory of Planned Behavior theoretical frameworks to investigate the relationship between behavioral determinants and the intentions of caregivers to engage with ParentText. Community facilitators administered baseline surveys to caregivers during intervention onboarding. Regression models tested associations between behavioral determinants (ie, perceived ease of use, perceived usefulness, attitude, hedonic motivation, habit, price value, and social influence) and intentions of caregivers to use the parenting chatbot. 
Interaction effects were explored to examine whether individual-level sociodemographic and psychosocial characteristics moderate associations between overall behavioral determinants and intentions to use the chatbot. Results: Caregivers reported mean scores of 2.85 (SD 0.79) and 2.90 (SD 0.72) out of a maximum of 4 for their intention to use their mobile data and to continue using ParentText in the future, respectively. Overall behavioral determinants were associated with 78% higher odds of caregivers intending to spend mobile data (OR = 1.78, 95% CI: 1.73-1.82) and 87% higher odds of intending to use ParentText in the future (OR = 1.87, 95% CI: 1.82-1.91). Moderator analysis suggested interaction effects of age, paternal absence, financial efficacy, and stress on the relationship between overall behavioral determinants and intention outcomes. Conclusions: This is the first known study to investigate the associations between overall behavioral determinants and participant intentions to use a parenting chatbot in a low-income setting. This study identifies behavioral determinants of engagement for improved delivery of digital health interventions (DHIs), considering the need to provide low-cost, scalable parenting support through digital platforms that engage parents, especially those in low-income contexts. Future research should explore methods to investigate mechanisms that regulate behavior to enhance the development of digital health interventions. Clinical Trial: Open Science Framework (OSF); https://doi.org/10.17605/OSF.IO/WFXNE
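As context for the odds ratios reported above, a minimal sketch (in Python, using illustrative numbers rather than the study's data or code) of how a logistic-regression odds ratio per one-unit score increase maps onto the log-odds coefficient and a change in predicted probability:

```python
import math

# Hypothetical illustration: in a logistic regression, the odds ratio (OR)
# is the exponential of the model coefficient, so an OR of 1.78 per
# one-unit increase in the overall behavioral-determinant score
# corresponds to a log-odds slope of ln(1.78).
or_mobile_data = 1.78
log_odds_slope = math.log(or_mobile_data)

def shifted_probability(p0: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability p0."""
    odds = p0 / (1 - p0) * odds_ratio
    return odds / (1 + odds)

# A caregiver with a 50% baseline chance of intending to spend mobile data
# would move to roughly 64% after a one-unit increase in the score.
print(round(log_odds_slope, 3))                       # ~0.577
print(round(shifted_probability(0.5, or_mobile_data), 2))  # 0.64
```

The helper function and baseline probability are illustrative assumptions; the study reports only the odds ratios and confidence intervals.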
Background: Chronic obstructive pulmonary disease (COPD) imposes a significant burden on patients and society, and the majority of COPD patients in China manage their condition at home long-term but often fail to achieve the desired outcomes. Online Nursing Consultation Services (ONCS) are an effective intervention to help patients improve their disease prognosis. In recent years, artificial intelligence (AI) technology has garnered considerable attention. Medical institutions in China are exploring and planning ONCS combined with AI, and this study aims to understand the preferences and willingness to pay for ONCS among COPD patients. Objective: The findings aim to bridge the gap between existing ONCS provisions and patients’ actual needs, promote innovative AI applications in chronic disease management, and ultimately improve COPD patients’ care experiences and health outcomes. Methods: We surveyed 224 COPD patients in Luoyang City, China, collecting their demographic information and responses to a discrete choice experiment (DCE) involving five attributes: service provider, response time, response accuracy, service content, and service cost. Results: The results revealed that COPD patients favoured ONCS provided by a combination of nurses and AI as service providers (β = 0.36), preferred faster response time (β = 3.38), higher response accuracy (β = 1.74), and chronic nursing as the service content (β = 0.92), all while expecting lower service costs. The relative importance (RI) of these attributes was distributed as 18.1%, 21.4%, 19.2%, 28.7%, and 12.6%, respectively. Specifically, participants were willing to pay an additional ¥22.3 for a shift from nurses to a combination of nurses and AI, ¥2.3 more for each minute reduction in response time, ¥1.5 more for every 1% increase in response accuracy, and ¥57.1 more for a shift from health education to chronic nursing. Conclusions: This study thoroughly investigated COPD patients' preferences for ONCS. 
The findings offer valuable insights for optimizing these services. The findings suggest that healthcare organizations should actively integrate services that combine nurses and AI in order to reduce response time, enhance accuracy, effectively support chronic disease management, and minimise service costs. Clinical Trial: LWLL-2023-09-28-01
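For readers unfamiliar with how willingness-to-pay (WTP) figures like those above are derived from a discrete choice experiment, a minimal sketch of the standard marginal WTP calculation from logit coefficients; the cost coefficient below is a hypothetical placeholder, since the abstract does not report it, and only the nurse+AI coefficient (0.36) comes from the text:

```python
# Marginal WTP from a DCE logit model: WTP for an attribute change is the
# negative ratio of the attribute coefficient to the cost coefficient.
beta_cost = -0.016     # assumed (hypothetical) marginal utility per extra yuan
beta_nurse_ai = 0.36   # utility of nurse+AI provider vs. nurses alone (from abstract)

wtp_nurse_ai = -beta_nurse_ai / beta_cost
print(f"WTP for nurse+AI provider: approximately {wtp_nurse_ai:.1f} yuan")
```

With the assumed cost coefficient this yields ¥22.5, close to the reported ¥22.3; the actual estimate depends on the fitted cost coefficient.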
Background: Resources and support to address intimate partner violence (IPV) among Latina immigrants are essential for improving health outcomes and creating safer, healthier communities. However, immigrants face significant obstacles in obtaining essential resources that provide information as well as formal and informal support. These obstacles may include language barriers, social isolation, lack of transportation, cultural differences, and stigma. Although numerous interventions aimed at supporting Latina immigrants on issues of IPV exist, there is limited information about their approach to addressing and mitigating access barriers. Objective: This paper presents the protocol for a systematic review designed to evaluate the strategies used by existing interventions to address and reduce barriers to accessing information and support related to IPV in the following domains: approachability, acceptability, availability, affordability, and appropriateness. The objectives are as follows: (1) identify intervention studies conducted with Latina immigrants who have experienced IPV that mitigate at least one of the five areas of access to information and support; (2) examine the study data (e.g., intervention design, recruitment strategies, sample sizes, setting, theoretical underpinnings, duration and content of interventions, and outcome evaluations). Methods: A mixed-methods systematic review will be conducted using the PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) checklist. The research will proceed iteratively among the different authors. The team, which includes a librarian, developed a search strategy for six databases: APA PsycInfo (EBSCOhost), CINAHL Plus with Full Text (EBSCOhost), PubMed, Scopus, Social Work Abstracts (EBSCOhost), and Sociological Abstracts (ProQuest). A literature search covering studies published from inception until the final search date of October 15, 2024, has been conducted. 
Two reviewers then individually and independently conducted dual screening of titles, abstracts, and full texts. A third reviewer resolved any disagreements between the reviewers. Data extraction will be performed by one reviewer and validated by a senior researcher. A narrative approach will be used to synthesize and report the strategies used to address and mitigate access barriers to information and support for women who have experienced IPV. Results: The search strategy and literature review were finalized in January 2025. A total of 929 references were identified after duplicates were removed; of these, 181 progressed to full-text review. The publication of the systematic review is scheduled for July 2025. Conclusions: This mixed-methods systematic review will offer a comprehensive analysis of the current state of knowledge regarding how barriers to accessing information and support are identified and mitigated in interventions for Latina immigrants, and their effects on outcomes. Clinical Trial: PROSPERO registration number 42024622171
Background: Post-stroke depression (PSD) is a prevalent complication that arises after a stroke. In recent years, a number of systematic reviews have been published on the use of moxibustion and acupuncture for PSD; however, their results have not been entirely consistent. Objective: We conducted a systematic review to assess the quality of the evidence, reporting, and methodology of systematic reviews on acupuncture and moxibustion for PSD. Methods: Systematic reviews of acupuncture and moxibustion for PSD published before August 10, 2024 were searched in eight databases: PubMed, Embase, the Cochrane Library, Web of Science, CNKI, Wanfang, VIP, and CBM. Systematic reviews and meta-analyses of randomized controlled trials of moxibustion and acupuncture for treating PSD were included. The methodological quality, reporting quality, and evidence quality were assessed using AMSTAR 2, PRISMA 2020, and GRADE, respectively. Results: A total of 24 studies were included. According to AMSTAR 2, the methodological quality of all studies was rated "low" or "critically low". According to PRISMA 2020, one study had seriously inadequate reporting quality and 21 studies had somewhat inadequate reporting quality. The quality of evidence in the included literature ranged from very low to moderate. Conclusions: The majority of the included systematic reviews interpreted the findings to suggest that acupuncture is beneficial for PSD. However, the methodological, reporting, and evidence quality of these systematic reviews should be improved. More robust evidence requires larger, multicenter, rigorously conducted randomized controlled trials as well as high-quality systematic reviews. Clinical Trial: CRD42024576753
Background: Teeth that have undergone endodontic treatment are more likely to fracture because of the considerable loss of tooth structure. Various post systems, such as prefabricated carbon fiber posts and customized glass fiber posts, have been used to restore endodontically treated teeth (ETT). However, their effectiveness in enhancing fracture resistance remains a subject of debate. Objective: To evaluate and compare the fracture resistance of endodontically treated teeth restored using 3 different post types: prefabricated carbon fiber posts, custom-made glass fiber posts, and short fiber-reinforced composite (SFRC)-relined fiber posts. Methods: A total of 30 extracted human teeth will undergo endodontic treatment and will be segregated into 3 groups based on the post type
1: Pre-fabricated carbon fiber posts
2: Customized glass fiber posts
3: SFRC-relined fiber posts
The samples will be subjected to a universal testing machine to assess their fracture resistance. Data will undergo statistical analysis using ANOVA and a post-hoc test. Results: Mean fracture resistance is expected to be highest in the SFRC-relined fiber post group, followed by the customized glass fiber post group, and lowest in the prefabricated carbon fiber post group. Statistically significant differences are anticipated among groups (p < 0.05). The SFRC-relined fiber posts are also expected to demonstrate more favorable failure modes compared with the other groups. Conclusions: The study is expected to show that SFRC-relined fiber posts provide superior fracture resistance and more favorable failure modes in comparison with prefabricated carbon fiber and custom-made glass fiber posts. This finding would highlight the potential clinical benefits of using SFRC-relined fiber posts. Clinical Trial: Since this investigation will be conducted entirely as an in vitro study, registration with the Clinical Trials Registry - India (CTRI) is not applicable and therefore not required.
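The planned one-way ANOVA across the three post groups can be sketched as follows; the fracture-load values are invented placeholders, not study data, and in practice a library routine (e.g., scipy.stats.f_oneway) plus a Tukey HSD post-hoc test would be used:

```python
from statistics import mean

# Invented fracture loads (newtons) for three hypothetical groups of 5 teeth.
groups = [
    [580, 610, 595, 620, 600],  # prefabricated carbon fiber posts
    [690, 710, 705, 695, 700],  # customized glass fiber posts
    [800, 815, 790, 810, 805],  # SFRC-relined fiber posts
]

def one_way_anova_f(samples):
    """Return the F statistic for a one-way ANOVA across the given samples."""
    all_values = [x for s in samples for x in s]
    grand_mean = mean(all_values)
    k = len(samples)                 # number of groups
    n = len(all_values)              # total sample size
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(s) * (mean(s) - grand_mean) ** 2 for s in samples)
    # Within-group sum of squares
    ss_within = sum((x - mean(s)) ** 2 for s in samples for x in s)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(groups)
print(f"F = {f_stat:.1f}")  # a large F suggests the group means differ
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-value, followed by pairwise post-hoc comparisons.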
Introduction:
Non-inferiority (NI) trial designs, which investigate whether an experimental intervention is no worse than the standard of care, have been used increasingly in recent years. The robustness of the conclusions is in part dependent on the analysis population set used for the analysis. The intention-to-treat (ITT) analysis has been thought to be anti-conservative compared with the per-protocol (PP) analysis in the NI setting.
Methods and analysis:
We aim to conduct a methodological review assessing the analysis population set used in NI trials. A comprehensive electronic search strategy will be used to identify studies indexed in the Medline, Embase, Emcare, and Cochrane Central Register of Controlled Trials (CENTRAL) databases. Studies will be included if they are non-inferiority trials published in 2024. The primary outcome is the analysis population used in the primary analysis of the trial (ITT or PP). Secondary outcomes will be the NI margin, the effect measures used, and the point estimates and corresponding confidence intervals of the analysis. Analysis will be done using descriptive statistics.
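The non-inferiority decision rule whose components (margin, point estimate, confidence interval) this review will extract can be sketched as follows; the values are hypothetical, and the convention assumed here is that a positive risk difference (experimental minus control) favors the control arm:

```python
def non_inferior(ci_upper: float, ni_margin: float) -> bool:
    """Non-inferiority is shown when the confidence bound on the harmful
    side stays strictly below the pre-specified NI margin."""
    return ci_upper < ni_margin

# Example: risk difference 1.2 percentage points, 95% CI (-0.8, 3.2), margin 5
print(non_inferior(ci_upper=3.2, ni_margin=5.0))  # non-inferiority shown
print(non_inferior(ci_upper=6.1, ni_margin=5.0))  # non-inferiority not shown
```

Whether this rule is applied to the ITT or the PP population is exactly the primary outcome the review will record, since the two populations can yield different confidence bounds and hence different conclusions.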
Discussion:
This methodological survey of NI trials will describe the analysis population set used in the primary analysis and assess factors that could be associated with each choice of analysis population.
Background: The anesthesia and critical care residency program in Morocco is a four-year, time-based training program. Our study evaluates its effectiveness through the performance of residents and the factors affecting it, based on the core competencies established by the Moroccan Board of Anesthesiology and Critical Care (MBACC). Objective: To describe anesthesia and critical care residents' performance and the factors affecting it. Methods: We conducted a single-center prospective survey in January 2024, using a self-assessment questionnaire of technical skills related to residents' practice. For each skill, we asked questions quantifying the difficulty or success rate of a given item. An overall performance composite score was calculated from the scores obtained for each skill assessed. A multivariate analysis was performed to determine the factors affecting this performance. Results: We included 66 residents. Their overall performance met MBACC requirements at the end of the curriculum (72.3 [68.5-75.7] out of a maximum score of 100), with a progression marked by a plateau between the second and third years. Multivariate analysis identified prior experience before residency, shift leadership, and the number of patients anesthetized per day as factors improving overall performance, while critical care-induced stress, shift-induced stress, and the number of shifts per week reduced performance. Conclusions: Residents' overall performance could be optimized through an introduction to critical care, notably via simulation, to reduce stress during practice and to build sufficient experience to occupy a chief position during shifts while limiting the number of weekly shifts. The formulation of recommendations requires a higher level of evidence, implying the need for an external, multicenter confirmatory study.
Background: Paramedics face frequent exposure to trauma and intense occupational stress, often under conditions of limited psychological support and ongoing stigma. Digital mental health interventions have the potential to offer accessible, confidential, and tailored support. However, their acceptability and design must be informed by the lived experiences of paramedics to ensure effectiveness. Objective: This study aimed to explore UK paramedics' experiences of trauma exposure in the workplace and their views on the design and delivery of digital mental health interventions. Methods: Semi-structured interviews were conducted with 22 UK paramedics. Participants were recruited through purposive and snowball sampling. Interviews were transcribed verbatim and analysed using reflexive thematic analysis. Ethical approval was obtained, and trauma-informed principles were applied throughout data collection and analysis. Results: Five key themes were identified: (1) It Has to Feel Easy to Use - highlighting the need for digital tools that reduce cognitive burden and are accessible during unpredictable shifts; (2) Make It Fit My Needs - calling for interventions specifically designed for paramedics, with lived-experience-informed language and delivery; (3) We Need to Talk to Each Other - describing a strong desire for peer connection while recognising barriers such as stigma and shift pressures; (4) I Need to Know It's Safe - emphasising the importance of anonymity, data privacy, and psychological safety; and (5) Support Needs to Feel Human - reinforcing the value of integrating digital tools with human connection and professional services. Participants expressed strong support for an app-based solution that offers anonymity, rapid accessibility, and flexibility, while preserving opportunities for human interaction. 
Conclusions: Paramedics face unique mental health challenges that are not adequately addressed by existing services. Digital mental health tools offer promise if they are carefully co-designed to reflect the realities of frontline work. Anonymity, usability, peer connection, and integration with existing support systems are critical to engagement. These findings offer actionable insights for the development of trauma-informed, context-sensitive digital mental health interventions for emergency service workers.
Background:
The current decade has seen much technological progress, but one of the most impactful and controversial areas has been Generative AI. Since AI was first discussed in the 1950s [1], and especially since it exploded onto the technological scene in the 2020s, it has been a source of both wonder and fear about how it might help, hinder, and otherwise permanently alter human life. Today, in 2025, AI is a household term and a daily topic of news and discussion. It is easy to imagine a future in which AI is even more ubiquitous than it already is, but two of the most talked-about areas of AI incorporation today are education and healthcare.
AI use has both supporters and detractors in education and healthcare. Proponents of AI in education tout its potential to individualize learning experiences and thereby increase student engagement; they also advocate for AI’s many supportive functions such as analyzing and managing various student data and streamlining administrative tasks [2]. Meanwhile, AI has been used in healthcare since the development of a glaucoma consultation program at Rutgers University in 1976 [1]. Since then, exponential advancements in AI system design have led to a proliferation of clinical uses. ChatGPT, for example, is being used in applications such as answering patient questions; assisting in clinical data analysis and decision making; responding to emergencies; assisting in practice management; and assisting in data management and other aspects of medical research [3].
Along with the advantages and benefits of AI's educational and healthcare applications, each presents specific challenges and raises questions. Detractors of AI in education cite the potential for students to cheat with AI. Some also warn against students leaning too heavily on AI to the diminishment of their own critical thinking abilities [2]. In both education and healthcare, questions arise about output accuracy as well as ethical issues such as data privacy and output bias, among others.
Health informatics lies at the intersection of technology and healthcare, and as health informatics educators, we realize that whether we welcome it or fear it, AI will play a large role in our students' futures. More precisely, our students will graduate into a healthcare profession that is already regularly using AI, and they will work for employers who expect them to be well prepared to utilize it in their jobs. In fact, employers already prioritize AI competencies over prior non-AI experience in job candidates' profiles [4] and believe AI skill sets are more important than many others [5]. Objective: Knowing that technology, including AI, advances vastly more quickly than official university curricula, we felt it imperative to transition some of our assignments so our students could begin using AI in a health informatics context now.
We represent two very distinct Master's in Health Informatics programs. The University of Illinois at Chicago (UIC) program is all-online and asynchronous; founded in 1999, it has been completely online since 2010. Many of our students are returning to school after time spent working, so they are older than traditional graduate students, and many already possess significant experience in healthcare, business, or technical fields. The UIC program has an overall focus on population-level social informatics and offers students courses and specialization options in data science, mobile health, and leadership.
University of San Francisco (USF) is a hybrid program. Students spend over half their time in-person on campus and the remainder of their time online. This relatively new program was founded ten years ago. Many students are recent bachelor’s degree recipients with backgrounds in public health, business, health professions, and technology. The USF program focuses on data analytics, digital health, and clinical informatics, as well as offering course options in public health informatics, clinical leadership, and nursing informatics.
In our endeavor to rapidly infuse AI into our curricula, we focused on four knowledge domains of AI Competencies for health informatics education: Essentials of AI; Applications of AI to Health Informatics; AI Transformations to Information and Knowledge; and Organizational Change and Adoption of AI Within the Healthcare Organization. We based these categories on prior work done in the development of a health data science concentration within a health informatics curriculum [6]. Within each of these AI competency categories, we listed knowledge, skills, and attitudes we believed our programs would need to provide for our students, leaving room for future ideas. Our new work builds on the previous work incorporating Gen AI skills and competencies into the same Knowledge Domains. The skills and competencies enabled us to develop Gen AI assignments using a backward design approach, beginning by laying out what we wanted our students to learn and then using that information to create assignments to help them learn it [7].
During the fall 2024 semester each program incorporated a Gen AI assignment into a course, assessing each student’s preliminary knowledge of AI, their knowledge of AI at the conclusion of the assignment, and then each student’s reflections on AI. UIC students participated in this study during BHIS 593, their Capstone experience, the culminating course in the Master’s in Health Informatics. This course requires students to research a topic of interest over the 16-week semester, delivering a paper or project at the end. The goal is for students to synthesize their learnings from their degree program and demonstrate competency as health informaticians.
USF students participated in this study during HS 633, Exploring Gen AI Ethics: Intersection of Education and Health Ecosystems, which they take approximately half-way through their master’s program. This course, a hands-on workshop, incorporates the latest AI literature and tools and focuses on Gen AI, AI ethics concepts, AI applications, use cases, frameworks, and AI policies with special emphasis on healthcare. The goal is to equip students with foundational knowledge and practical skills in Gen AI, preparing them to navigate the Silicon Valley tech ecosystem and the traditional healthcare landscape. USF's GenAI Ethics course (HS 633) was conceived not as a technical deep dive into Gen AI models and methods but as a framework‑building course that foregrounds ethics and governance. This course was offered for the first time in Fall 2024 to coincide with this research study.
Our research questions for this multi-site study were:
Did students learn (develop knowledge) about Gen AI by doing the assignments?
Did students say they developed Gen AI skills and professional attitudes by completing the assignments? Methods: Design, Setting, and Participant Recruitment
This was a multisite study assessing assignments completed by Master of Health Informatics students at the UIC and USF campuses in fall 2024. Across both sites, a total of 18 students participated in the research.
UIC
At UIC, the research was implemented in the online Health Informatics curriculum for Biomedical Health Information Sciences, in the Capstone course required to complete the MSHI. There were eleven participants from UIC (N=11). Students chose from four areas of Gen AI practice and developed their specific topics working with faculty to define the scope and deliverables for their project. Topics explored real-world questions of interest for which students developed a blueprint/prototype solution or a use case. Table 1 contains information about the four topic areas. The Institutional Review Board of the University of Illinois Chicago approved this study under the exempt research determination.
Table 1. Gen AI topics
Topics
Focusing Questions
Description
Clinical uses
How is Gen AI being used to augment provider and clinician workflows?
What is the potential for Gen AI to assist in improving health outcomes?
The information needed to make medical decisions (e.g., medical history, laboratory and imaging results, unstructured clinical notes) can be scattered across multiple records that exist in myriad formats and locations. Gen AI could be used to compile and organize this information—and put it into a format that is accessible and clinician-friendly—to accelerate and augment critical thinking. In addition, Gen AI-enabled ambient documentation could pull information from clinician conversations and generate natural-sounding notes. Technology could also be trained to identify patterns that are too subtle for a human to recognize.
Patients/ Consumers
What is the consumer or patient perspective on using Gen AI for healthcare?
In what ways can Gen AI improve the patient experience / patient engagement to manage chronic conditions?
Accurate real-time audio and text messages could be generated instantly, and in different languages, as frontline workers interact with people for health care, and social services. Gen AI also could translate documents, websites, laws, regulations, and policies. Health advisories could make essential information accessible to a diverse population. Gen AI could also play a central role in optimizing and mitigating health and safety risks by generating worksite-specific safety training that replicates real-world settings and critical scenarios.
Frontline/first responders, public health, and community social services
How might Gen AI be used to improve patient engagement (from the public health perspective)?
In what ways can Gen AI be used to streamline emergency response in the field, urgent care or the hospital emergency department?
Operational inefficiencies or limited capacity in the call center can translate to decreased customer satisfaction. Gen AI could help to create hyper-personalized experiences with customers and patients. It could also help efficiently support customers and reduce call volume handled by associates. The technology might also assist human staff in generating responses to customer questions, insurance coverage and other plan details. The customer service experience can have a direct impact on patient perception, even without any change in charged costs or appointment wait times.
Ethical Use of Gen AI
How can organizations create an ethical framework for Gen AI in healthcare (consider bias and hallucinations)? How can they use an ethical framework and still support team science and innovation?
What are the data governance policies and steps needed for quality control and validation of a Gen AI model?
Set up an experiment that identifies and examines these issues, and develop recommendations for how organizations should build these considerations into their Gen AI governance policies and into capabilities for using Gen AI to increase AI literacy in the workforce.
All students enrolled in the course were given the opportunity to participate in the research at the beginning of the semester. Participation was voluntary, and students who chose not to participate in the research were still able to complete the course.
All data was collected via the Blackboard Learning Management System. In Blackboard, students completed a pre-test to assess their baseline knowledge of Gen AI. The same questions were given as a post-test at the end of the course to assess student learning. Additionally, students completed self-reflections about skills developed and their attitudes toward using Gen AI and how they thought it might impact their health informatics work and careers. The reflections were also completed at the beginning and at the end of the semester (see Table 2).
Table 2. Student reflections
Reflection
Questions
Reflection 1
As you look to the future, to what degree do you think Gen AI will be useful to your work in health informatics? To what degree do you think it might become part of your work process?
What Gen AI skills do you think will be most valuable to you in your health informatics career? To what extent do you think Gen AI has the potential to enhance your ability to do health informatics work or enhance your productivity?
What concerns do you have about using Gen AI in your professional work? Are there aspects of Gen AI that you anticipate might be problematic? What do you see as some of the drawbacks or challenges of Gen AI use?
Reflection 2
Describe how using Gen AI for your capstone impacted your satisfaction with the work you produced. Do you think you were more or less satisfied with your results than you would have been without using Gen AI?
Did you experience disappointment or worry/anxiety about using Gen AI to augment your capstone project? Describe any way in which using Gen AI was disappointing or led to worry for you. Did this change over the course of the semester?
What impact did using Gen AI have on your critical thinking as you worked on your capstone? Did it enhance your critical thinking? Did you experience reduced critical thinking due to overreliance on AI technology?
Overall, what was the best thing (the thing you enjoyed the most) about using Gen AI for your capstone project? What was the worst thing (thing you enjoyed the least) about using Gen AI for your capstone?
Please add any additional comments you would like to make about your learning experience and the use of Gen AI.
For their projects, students completed foundational readings. They conducted additional research exploring their chosen topic and developed prompts to use Gen AI to assist in researching, brainstorming, and synthesizing information. Student projects were submitted at defined checkpoints during the semester, and faculty provided feedback to guide the projects’ development.
Knowledge was assessed using multiple choice and short answer tests. A pre-test was given at the start of the semester, and an identical post-test was completed at the end of the semester. The tests were open book and administered on a learning management system with unlimited attempts. Students were advised that these tests had no bearing on their course grade and were used only for research purposes, to establish a baseline of what students knew about Gen AI.
An important part of this study was to identify skills that students will need in the workforce. AI skills and attitudes were identified using student reflections on engagement topics exploring how they would use Gen AI in their future careers, their concerns with using Gen AI, and their satisfaction with the end product for their courses [8]. At UIC, reflections were given at the beginning and end of the semester.
USF
The USF participants for this study were seven (N=7) of the eight students who took the Gen AI Ethics course in the fall of 2024. A student absent from the initial survey distribution at the beginning of the fall 2024 semester was excluded from this study. The participants were in their second year of the master's degree program in digital health informatics. USF students were introduced in class to the UIC+USF study, and the purpose, inclusion criteria, privacy, and harm risk were explained per the IRB-approved documentation. The two components of the survey (pre-test/post-test and reflections) were explained to students. Students were told that participation was voluntary and would not affect their grades.
The initial USF test (pre-test) was distributed during in-person class time. Students were given ample time, up to one hour, to complete their surveys. Surveys identical to those used at UIC were delivered using the Google Survey application. The USF results were de-identified and remained in Google Cloud until the end of the semester. At the end of the semester, a second survey was administered via Google. This survey had two sections: the first was identical to the pre-test from the beginning of the semester, and the additional second section contained the reflections for students who completed Gen AI semester-long projects. USF's IRB approved the second test before distribution to the study participants.
The second USF survey, like the first, was distributed during in-person class time, in the last session before students presented their final semester-long Gen AI Ethics projects. Participants were given ample time, up to one hour in class, to answer and reflect on their semester. As with the first survey, the student absent from the initial study was excluded from the second survey for consistency. Once USF students completed the second survey, de-identified survey results were uploaded to UIC’s secure Box system for analysis.
The UIC Capstone experience is a culmination of learning throughout the MSHI program. In their capstone, students personalize their project by pursuing an area of interest to them. This has the advantage of incorporating real-world scenarios and of teaching critical, divergent thinking and active learning. The USF Gen AI Ethics course equips students with foundational AI knowledge that they can then bring into their work at the intersection of technology and healthcare. In both programs, assignment instructions included the use of Gen AI to support the creation of a final product.
Applying backward design principles
Our approach applied backward design principles to develop learning assignments that explore the use of Gen AI through real-world, practical examples. In this method of instructional design, an instructor determines desired results first, then identifies the evidence required to show that learners have attained those results. The instructor then develops learning activities that will provide that evidence [7]. To develop our Gen AI assignments, we first examined knowledge domains for health informatics education used in previous work by these authors to create a data science concentration within the MSHI [6] and adapted them for AI competencies:
Essentials of AI
Application of AI to Health Informatics
AI Transformations to Information and Knowledge
Organizational Change and Adoption of AI within the Healthcare Organization
A preliminary list of 33 competencies was developed based on our professional experience, recommendations in published articles, and requirements from actual job postings. Treating these competencies as the desired outcomes of students’ Gen AI experiences in our courses, we then used these categories and competencies as guidelines for working backwards to write our pre-test, post-test, and reflection questions, endeavoring to guide and assess students’ development of competency in these areas.
Main measures
We collected data about developing student competencies in Gen AI: knowledge, skills, and professional attitudes.
Research Question 1: Did students learn (develop knowledge) about Gen AI by doing the assignments?
Knowledge – Students completed pre- and post-test surveys with 23 items, each worth 1 point. Final pre-test and post-test scores were compared to see whether students demonstrated higher scores on the post-test.
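The scoring described above reduces to simple percent-correct arithmetic. A minimal sketch (the raw scores below are invented for illustration; only the 23-item, 1-point-per-item design comes from the study):

```python
# Hypothetical illustration of the pre/post knowledge scoring:
# each test has 23 items worth 1 point; cohort performance is
# summarized as the mean percent-correct score.

TOTAL_ITEMS = 23

def percent_score(points_earned, total_items=TOTAL_ITEMS):
    """Convert a raw point total to a percent score."""
    return 100.0 * points_earned / total_items

def class_mean_percent(raw_scores):
    """Mean percent-correct across a cohort's raw scores."""
    return sum(percent_score(s) for s in raw_scores) / len(raw_scores)

# Invented raw scores for a small cohort (not the study's data)
pre_raw = [17, 18, 19, 20]
post_raw = [20, 21, 22, 22]

improvement = class_mean_percent(post_raw) - class_mean_percent(pre_raw)
```

With these invented scores, the cohort mean rises by roughly 12 percentage points, mirroring the kind of pre-to-post gain reported in Table 3.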
Research Question 2: Did students say they developed Gen AI skills and professional attitudes by completing the assignments?
Skills and professional attitudes – student reflection data
Categorization of student responses was completed by two faculty from UIC and one from USF. Each student response was evaluated to identify the one primary element it contained and was then assigned to one competency category. Responses not related to the subject (e.g., "thank you," "happy I took the class") received no assignment. Differences in faculty categorization were examined together in a work session to develop a consensus. To reach consensus we used some basic rules: reflections related to ethics or accuracy were assigned to the AI Transformations to Information and Knowledge domain, and for longer student responses with multiple perspectives, raters assigned the knowledge domain they agreed was predominant.
Results
Pre/post-test - quantitative
Students completed a pre-test survey of 23 multiple choice and short answer questions. Students demonstrated an improvement in knowledge from pre-test to post-test in both courses, with UIC students improving from 81% to 93% by the end of the capstone course and USF students improving from 77% to 80% by the end of the Gen AI Ethics course.
Table 3. Knowledge assessment
Site | Pre-test | Post-test
UIC | 81% (n=10) | 93% (n=8)
USF | 77% (n=7) | 80% (n=7)
Conclusions: Proliferation of Gen AI and digital health education in health professions curricula is in its early stages and is poised to grow into a large-scale academic debate about implementation, with consensus on the need to embed elements of artificial intelligence expected from professional societies and graduate education commissions. Therefore, the need to define AI competencies and include them among educational outcomes stretches beyond informatics programs, expanding into the existing health professions [19] as well as emerging specialties such as the Master of Digital Health [20]. There are ongoing attempts to define a clear set of skills to be taught in such digital health programs, with many grassroots efforts originating from faculty experience [19]. By beginning with the competencies and using backward design to develop learning experiences based on them [7], we addressed a gap in Gen AI education in a health informatics program, tested two distinct cohorts of students in online and hybrid content delivery programs, and shared our experiences and recommendations for rapidly infusing Gen AI into curricula. These recommendations are generalizable to a wide variety of informatics and health professions education programs, addressing the growing need for academia to update programs with the latest technology developments in clinical care and the life sciences. Clinical Trial: N/A
The aging population presents a pressing challenge for healthcare systems, compelling effective strategies to address the complex needs of older adults. The Department of Veterans Affairs (VA) has embraced the Age-Friendly Health Systems (AFHS) initiative from the Institute for Healthcare Improvement (IHI) to ensure safe and high-quality care for older Veterans through its Whole Health initiative. As an Age-Friendly Health System, healthcare providers consistently utilize the evidence-based "4Ms": What Matters, Medication, Mentation, and Mobility, to deliver comprehensive care for older adults in all care settings. This manuscript explores the potential of artificial intelligence (AI) in enhancing the evidence-based implementation of the Age-Friendly Health Systems (AFHS) 4Ms framework to provide optimal care for older adults. By leveraging AI technologies, such as natural language processing, machine learning, and data analytics, this manuscript delves into the opportunities and challenges in utilizing AI to support the 4Ms domains – what matters, medication, mentation, and mobility. Furthermore, it discusses the potential benefits of integrating AI-driven decision support systems and predictive analytics to personalize care, reduce polypharmacy and potentially inappropriate medications, enhance cognitive and mood assessments, and better identify mobility issues and interventions. By examining the intersection of AI and age-friendly care, this manuscript contributes to the existing literature by highlighting the transformative potential of AI in improving outcomes and the experiences for older adults across diverse healthcare settings.
Background: Artificial intelligence (AI) has the potential to optimize neurological nursing care by enhancing caring support, improving patient monitoring, enabling early intervention, and personalizing care for patients with neurological conditions. Objective: This narrative review aimed to analyze studies on the convergence of AI and neurological nursing care. Methods: Relevant databases, including PubMed, Scopus, ScienceDirect, and Google Scholar, were searched from 2015 to 2024; 15 studies were finally selected from a total of 733 retrieved studies, and outcomes were extracted. Studies were selected based on their relevance to AI applications in diagnostic support, patient monitoring, treatment planning, and advances in AI ethics, with a focus on neuroscience nursing practice. Results: The review identified key domains where AI can support nurses: (1) AI enhances diagnostic support through advanced imaging and data analysis techniques, (2) AI-driven monitoring tools facilitate early intervention by predicting adverse events, (3) AI models aid in personalized care, optimizing treatment plans for patients with neurodegenerative conditions, and (4) challenges include technological limitations, ethical concerns, and a need for nurse education. Conclusions: Although AI is improving nursing practice in neurological fields, successful integration requires addressing barriers such as infrastructure limitations, data privacy issues, and workforce readiness. Clinical Trial: Not Applicable
Background: Pre-capillary pulmonary hypertension (PH) is a progressive, incurable disease marked by high morbidity, frequent emergency department visits, and persistently poor survival despite targeted therapies. Web-based symptom monitoring programs offer a promising, non-invasive means to support self-management and enable early detection of decompensation. Objective: We aimed to evaluate the effects of a web-based symptom monitoring program on reducing symptom interference, enhancing physical activity, improving key physiological measures (echocardiographic parameters and NT-proBNP), and decreasing emergency department readmissions compared with usual care in patients with pre-capillary PH. Methods: This parallel-group, single-blind, randomized controlled trial recruited patients with pulmonary arterial hypertension (PAH) and chronic thromboembolic pulmonary hypertension (CTEPH), both forms of pre-capillary PH confirmed by right heart catheterization (mean pulmonary arterial pressure >20 mmHg), from one cardiology outpatient department in northern Taiwan. Participants (N=51) were randomized into an intervention group (n=26) or a control group (n=25). The intervention group received a 9-month symptom monitoring program delivered via a web-based application; the control group received usual standard care. Outcome measurements included changes from baseline in symptom interference and the 6-minute walk test (6MWT), and changes in readmissions to the emergency department (ED). Data were collected at baseline (enrollment) and at 3, 6, and 9 months following enrollment. Data were analyzed using generalized estimating equations (GEEs). Results: The mean age of participants was 59.6 years (SD=13.6). Most were diagnosed with connective tissue disease-associated PAH (39.2%), with a mean duration of PH since diagnosis of 3.38 years (SD=2.55). There were no significant differences in characteristics between groups.
Compared with the control group, the intervention resulted in a greater reduction in participants’ symptom interference; the distance traveled on the 6MWT was greater for the intervention group, with an average improvement of 9.8 meters every three months (β=9.81, p=0.03). Additionally, the likelihood of ED readmission decreased over the 9-month study period (β=-1.03; OR=0.35, p=0.04); GEE analysis indicated readmissions were reduced by 65%. Conclusions: A web-based symptom monitoring program is a feasible intervention for reducing symptom interference and improving physical activity in patients with pre-capillary PH, while also reducing rates of ED readmission. Clinical Trial: ClinicalTrials.gov; No. NCT05908019
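The reported odds ratio is simply the exponentiated GEE coefficient. A quick check with Python's standard library (the coefficient is taken from the abstract; the small difference from the reported OR of 0.35 is rounding):

```python
import math

# GEE coefficient for ED readmission reported in the abstract
beta = -1.03

# The odds ratio is the exponentiated coefficient
odds_ratio = math.exp(beta)  # roughly 0.357

# Approximate percent reduction in the odds of readmission,
# consistent with the ~65% reduction reported
percent_reduction = (1 - odds_ratio) * 100
```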
Background: Disease name recognition is a fundamental task in clinical natural language processing (NLP), enabling the extraction of critical patient information from electronic health records (EHRs). While recent advances in large language models (LLMs) have shown promise, most evaluations have focused on English, and little is known about their robustness in low-resource languages such as Japanese. In particular, whether these models can perform reliably on previously unseen in-hospital data, which differs in writing style and clinical cases from training data, has not been thoroughly investigated. Objective: This study evaluates the robustness of fine-tuned LLMs for disease name recognition in Japanese clinical notes, with a particular focus on their performance on in-hospital data that was unseen during training. Methods: We used two corpora for this study: (1) a publicly available set of Japanese case reports, denoted CR, and (2) a newly constructed corpus of progress notes, denoted PN, written by ten physicians to capture the stylistic variation of in-hospital clinical notes. To reflect real-world deployment scenarios, we first fine-tuned models on CR. Specifically, we compared an LLM with a baseline masked language model (MLM). The models were then evaluated under two conditions: (1) on CR, representing the in-domain (ID) setting with the same document type as in training, and (2) on PN, representing the out-of-domain (OOD) setting with a different document type. Robustness was assessed by calculating the performance gap, i.e., the performance drop from the ID to the OOD setting.  Results: The LLM demonstrated greater robustness, with a smaller performance gap in F1 scores (ID–OOD = −8.6) compared to the MLM baseline (ID–OOD = −13.9). This indicates more stable performance across ID and OOD settings, highlighting the effectiveness of fine-tuned LLMs for reliable use in diverse clinical settings.
Conclusions: Fine-tuned LLMs demonstrate superior robustness for disease name recognition in Japanese clinical notes with a smaller performance gap. These findings highlight the potential of LLMs as reliable tools for clinical NLP in low-resource language settings and support their deployment in real-world healthcare applications where documentation diversity is inevitable. Clinical Trial: None
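The robustness metric described in this abstract is just the difference between out-of-domain and in-domain scores. A minimal sketch (the absolute F1 values below are hypothetical; only the gaps of −8.6 and −13.9 come from the abstract):

```python
def performance_gap(id_f1, ood_f1):
    """Robustness gap: OOD score minus ID score (more negative = larger drop)."""
    return ood_f1 - id_f1

# Hypothetical absolute F1 scores chosen so the gaps match those reported
llm_gap = performance_gap(id_f1=80.0, ood_f1=71.4)  # about -8.6
mlm_gap = performance_gap(id_f1=82.0, ood_f1=68.1)  # about -13.9

# A smaller (less negative) gap indicates more stable cross-domain performance
more_robust = "LLM" if llm_gap > mlm_gap else "MLM"
```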
Background: The ApoB rs676210 polymorphism has been associated with altered lipid metabolism and cardiovascular risk in various populations; however, data from Vietnamese populations remain limited. Objective: This study aimed to investigate the association of the ApoB rs676210 variant with lipid profiles among Vietnamese individuals newly diagnosed with elevated LDL-C. Methods: This cross-sectional study enrolled 69 Vietnamese adults with newly diagnosed elevated LDL-C (≥130 mg/dL). Genotyping for ApoB rs676210 was performed using allele-specific real-time PCR. Lipid parameters, including LDL-C, HDL-C, non-HDL-C, and ApoB, were measured and compared across genotypes and alleles. The data were analyzed in the R environment. Results: Participants with the AA genotype exhibited significantly higher LDL-C (5.19 ± 0.95 mmol/L vs. 4.37 ± 0.97 mmol/L, P<.001) and non-HDL-C (5.94 ± 1.08 mmol/L vs. 5.31 ± 1.22 mmol/L, P=.03), lower HDL-C (1.26 ± 0.31 mmol/L vs. 1.44 ± 0.39 mmol/L, P=.03), and higher ApoB levels (149.5 ± 26.3 mg/dL vs. 136.92 ± 15.21 mg/dL, P=.02) compared with GA/GG carriers. Allele-based analysis revealed that carriers of the A allele also had higher LDL-C, non-HDL-C, HDL-C, and ApoB levels than G allele carriers (all P<.05). Conclusions: The ApoB rs676210 polymorphism is associated with significant variations in lipid profiles among Vietnamese patients with elevated LDL-C, potentially serving as a genetic marker for lipid disorder screening and individualized management.
Background: People experiencing gambling problems often struggle to adhere to their intention to reduce time and money spent gambling. While many techniques exist to reduce gambling harm, consistently applying them across settings and at the right time remains challenging. Providing personalised, real-time support could enhance behaviour change efforts. Objective: This study evaluated a Just-In-Time Adaptive Intervention (JITAI) to help individuals adhere to gambling limits. Drawing on the Health Action Process Approach and Self-Determination Theory, the primary aim was to assess the effect of action and coping planning versus no intervention on adherence to expenditure limits. The primary proximal outcome was goal adherence, defined as unplanned expenditure (≥10% over planned expenditure per day). Secondary outcomes included intention strength, goal self-efficacy, and urge self-efficacy, all measured continuously. Methods: We conducted a fully automated and blinded micro-randomised trial (MRT) with 50:50 randomisation and a 6-month within-group follow-up. Participants were recruited online; eligibility included residing in Australia, enabling notifications, and seeking gambling support. The Gambling Habit Hacker smartphone app delivered tailored behaviour change techniques, including goal setting, action and coping planning, and self-monitoring. The MRT randomised 174 participants to test whether the app provided in-the-moment support for adhering to limits. Participants set personal expenditure goals and completed three Ecological Momentary Assessments (EMAs) daily for 28 days, tracking adherence, intention strength, self-efficacy, and high-risk situations. At each EMA, participants needing support were micro-randomised to receive action/coping planning with support or a control condition involving selection of a self-enactable strategy without support. Results: Of 238 enrolled participants, 174 completed at least one EMA. 
Most were male (68%) and reported moderate or mild gambling severity (52%). An intervention was delivered at least once to most participants (n=140, 80%). Receiving an intervention did not increase the probability of adherence compared to no intervention. In contrast, supplementary analyses in which EMA findings were collapsed across each day revealed that the intervention was associated with lower rates of unplanned gambling expenditure compared with the control condition. Within-group follow-up showed a large reduction in monthly expenditure (from $2,700 to just over $260) and gambling frequency (from 8–9 to 1–2 sessions) at six months. Significant improvements with small-to-large effect sizes were also observed at post-treatment and maintained at follow-up for gambling severity (dz = -0.91), self-efficacy (dz = -0.42), psychological distress (dz = -0.52), and well-being (dz = 0.70). Conclusions: Gambling Habit Hacker showed strong overall effects over time but no significant difference in adherence between intervention and control conditions. Given the strong effect over time, future studies should evaluate an optimised version of the app in a randomised controlled trial. Clinical Trial: This trial has been registered with the Australian New Zealand Clinical Trials Registry (ACTRN12622000497707) and was approved by the Deakin University Human Research Ethics Committee (2020-304).
Background: Wearable activity monitors offer clinicians and researchers accessible, scalable, and cost-effective tools for continuous remote monitoring of functional status. These technologies can complement traditional clinical outcome measures by providing detailed, minute-by-minute remotely collected data on a wide array of biometrics that include, as examples, physical activity and heart rate. There is significant potential for the use of these devices in rehabilitation after stroke if individuals will wear and use the devices; however, the acceptance of these devices by persons with stroke is not well understood. Objective: In this study, we investigated the participant-reported acceptance of a commercially available, wrist-worn wearable activity monitor (the Fitbit Inspire 2) for remote monitoring of physical activity and heart rate in persons with stroke. We also assessed relationships between reported acceptance and adherence to wearing the device. Methods: Sixty-five participants with stroke wore a Fitbit Inspire 2 for three months, at which point we assessed the acceptance of wearing the device using the Technology Acceptance Questionnaire (TAQ), inclusive of its seven dimensions: Perceived Usefulness (PU), Perceived Ease of Use (PEOU), Equipment Characteristics (EC), Privacy Concern (PC), Perceived Risk (PR), Facilitating Conditions (FC), and Subjective Norm (SN). We then performed Spearman’s correlations to assess relationships between acceptance and adherence to device wear, which we calculated as both the percentage of daily wear time and the percentage of valid days the device was worn during the three weeks preceding TAQ administration. Results: Most participants reported generally agreeable responses with high overall total TAQ scores across all seven dimensions that indicated strong acceptance of the device; “Agree” was the median response to 29 of the 31 TAQ statements.
Participants generally found the device beneficial for their health, efficient for monitoring, easy to use and don/doff, and unintrusive to daily life. However, participant responses on the TAQ did not show significant positive correlations with measures of actual device wear time (all p>0.05). Conclusions: This study demonstrates generally high self-reported acceptance of the Fitbit Inspire 2 among persons with stroke. Participants reported general agreement across all seven TAQ dimensions, with minimal concerns directly attributable to post-stroke motor impairment (e.g., donning and doffing the device, using the device independently). However, the high self-reported acceptance scores did not correlate positively with measures of real-world device wear. Accordingly, it should not be assumed that persons with stroke will adhere to wearing these devices simply because they report high acceptability.
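The acceptance-versus-adherence analysis here is a rank correlation. A self-contained sketch of Spearman's rho using only the standard library (the TAQ scores and wear-time percentages below are invented for illustration; the study itself found no significant correlations):

```python
def _ranks(xs):
    """Assign ranks (1-based), averaging ranks across tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented example: per-participant mean TAQ score vs. percent daily wear time
taq_scores = [4.2, 3.8, 4.5, 4.0, 3.5]
wear_pct = [62, 80, 55, 91, 47]
rho = spearman_rho(taq_scores, wear_pct)
```

A rho near zero, as in this study's wear-time analysis, means the rank ordering of acceptance scores tells you little about the rank ordering of actual device use.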
Background: Hepatitis B Virus (HBV) remains a major public health concern, particularly among vulnerable populations such as pregnant women. Objective: This study aimed to investigate the correlation between HBV prevalence and vaccination coverage among pregnant women in Bauchi State, Nigeria. Methods: A cross-sectional survey of 150 participants was conducted to assess socio-demographic factors, vaccination status, and health history related to HBV.
Results: The results showed that 57.5% of participants reported being vaccinated, with only 25.5% completing the full vaccination series. HBV prevalence was recorded at 21.38%, while 13.7% of participants were unsure of their HBV status. Chi-square analysis revealed no significant correlation between vaccination coverage and HBV diagnosis (p = 0.586). Logistic regression identified occupation and prior testing as significant factors influencing vaccination completion (p < 0.05). Key barriers to vaccination included lack of awareness (17.0%) and forgetfulness (14.4%). Despite moderate vaccination coverage, a substantial portion of participants had not completed the full vaccination series, undermining efforts to achieve full immunization and control HBV spread. Conclusions: The findings highlight the need for improved public health interventions, including educational campaigns to raise awareness and increase vaccination completion rates among pregnant women in the region. Further research is needed to explore additional barriers to vaccination and optimize strategies for preventing HBV transmission in high-risk populations.
Background: First responders play crucial roles in protecting citizens and communities from various hazards. Due to the high-stress nature of their work, first responders experience significant mental health issues. Existing mental health interventions, despite their benefits, do not target cognitive processing of traumatic events, such as memory and emotion. Objective: As a novel attempt using immersive virtual reality, the current work aims to examine the effects of semantically irrelevant virtual reality (SIVR) content on the retrieval of an adverse event memory and its associated emotion. Methods: A total of 107 participants were recruited for the experiment and randomly assigned to one of three groups: Control, Comparison, and Intervention. In Stage 1, participants in all groups watched a short video of a house fire. In Stage 2, the Control Group remained seated without any task, the Comparison Group read a text paragraph about the Egyptian Ocean as semantically irrelevant follow-up information, and the Intervention Group watched a 360° VR video of the Egyptian Ocean. The Positive and Negative Affect Schedule was administered after each of the two stages. In Stage 3, memory accuracy for the house fire video was assessed using a forced recognition test of 15 pairs of true and fake images, the latter generated by AI software. Results: One-way ANOVA revealed no difference in memory accuracy among the three groups. However, repeated measures ANOVA found that the SIVR experience significantly boosted positive emotion in Intervention Group participants and reduced negative feelings in participants of all groups. Conclusions: Our findings suggest that SIVR serves as a quick and affordable way to address psychological reactions after watching a traumatic event. Future research is required to establish the memory suppression effect of SIVR content.
Background: Tailoring is an important strategy for improving the uptake and efficacy of medical information and guidance provided through eHealth interventions. Given the rapid expansion of eHealth, understanding the design rationale of such tailored interventions is vital for the further development of, and research into, eHealth interventions aimed at improving health and healthy behaviour. Objective: This systematic review examines the use of health literacy concepts through tailoring strategies in digital health (eHealth) interventions aimed at improving health, and how these elements inform the overall design rationale. Methods: A systematic search of PubMed, PsycINFO, Web of Science, and ACM databases yielded 31 eligible randomized trials that focused on adult health improvement through eHealth interventions. Eligible studies compared tailored versus non-tailored eHealth interventions for adults, excluding non-English papers and those addressing solely readability or targeting populations with accessibility barriers. Data extraction focused on study characteristics, health literacy components, tailoring methods, and design rationales, with study quality evaluated using QuADS by independent reviewers. Results: Most interventions applied both cognitive and social health literacy concepts and predominantly used content matching as a tailoring strategy. Of the studies using content matching, most used one or more supporting theories as well as end-user data to inform the content matching. While choices for individual intervention components were mostly explicated, detailed descriptions of the design process were scarce, with only a few studies articulating an underlying narrative that integrated the most important chosen components.
Conclusions: While tailored eHealth interventions demonstrate promise in enhancing health literacy, and the overall trial design of the interventions was of good quality, inconsistent documentation of design rationales impedes replicability and the broader application of the eHealth concepts used. This calls for more detailed reporting of the design choices of the intervention in efficacy studies, so that reported outcomes can be connected more easily to choices made in the design of the eHealth intervention. Clinical Trial: The review was conducted in accordance with PRISMA guidelines, registered with PROSPERO (225731), and primarily funded through internal resources at UMC Groningen.
Background: The persistence of Enterococcus faecalis is a significant issue in endodontic therapy, frequently leading to treatment failures. Its capacity to survive under extreme conditions, penetrate dentinal tubules, and create resistant biofilms makes conventional antibacterial methods insufficient. Therefore, enhancing the efficacy of intracanal irrigants through advanced activation methods has become crucial for successful root canal disinfection. Despite the proven antimicrobial activity of metronidazole and chlorhexidine, the resilience of Enterococcus faecalis necessitates integrating activation techniques that improve irrigant penetration and disrupt biofilms. Laser and sonic activation methods show promise in enhancing the antibacterial performance of irrigants. However, few studies have compared their effects on metronidazole, chlorhexidine, and saline. Hence, there is a need to evaluate the antibacterial efficacy of these irrigants when used in conjunction with laser and sonic activation techniques. Objective: To compare the antibacterial efficacy of metronidazole, chlorhexidine, and normal saline as intracanal irrigants when activated by laser and sonic techniques against Enterococcus faecalis in extracted human mandibular premolars. Methods: Ninety freshly extracted single-rooted mandibular premolars will be obtained, decoronated, and biomechanically prepared using the ProTaper Universal rotary system up to F2. Canals will be inoculated with Enterococcus faecalis and incubated for 7 days to allow biofilm formation. The teeth will then be randomly divided into three main groups (metronidazole, chlorhexidine, and saline), each subdivided by activation method (laser or sonic). Irrigation protocols will be standardized, and pre- and post-irrigation bacterial samples will be collected on paper points and cultured on Brain Heart Infusion (BHI) agar. Colony-forming units (CFUs) will be counted to evaluate antibacterial efficacy.
Results: Combining laser activation with metronidazole and chlorhexidine is anticipated to demonstrate superior antibacterial efficacy against E. faecalis compared with sonic activation and saline. The study is expected to reveal the most effective irrigant-activation combination for enhanced disinfection. Conclusions: This study aims to provide evidence-based insights into optimizing root canal disinfection protocols by evaluating and comparing the synergistic effects of antimicrobial agents and advanced activation techniques. The findings could contribute significantly to improving endodontic treatment outcomes and reducing the incidence of persistent infections caused by E. faecalis. Clinical Trial: This study does not require registration in the Clinical Trials Registry of India (CTRI) as it is a laboratory-based in vitro experimental study involving extracted human teeth, without any interventions on living human participants or clinical outcomes.
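The CFU-based outcome measure above is typically summarised as a percent reduction between pre- and post-irrigation counts. A minimal sketch under that assumption; the helper name `cfu_percent_reduction` and the counts are invented for illustration.

```python
# Hedged sketch: percent reduction in colony-forming units between the
# pre- and post-irrigation samples described above. Counts are invented.

def cfu_percent_reduction(cfu_pre, cfu_post):
    """Percent reduction in CFU counts after irrigation."""
    if cfu_pre <= 0:
        raise ValueError("pre-irrigation count must be positive")
    return 100.0 * (cfu_pre - cfu_post) / cfu_pre

# e.g. 2,000,000 CFU/mL before irrigation, 50,000 after -> 97.5% reduction
print(cfu_percent_reduction(2_000_000, 50_000))  # -> 97.5
```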
Background: Bladder cancer is a disease with complex perturbations in gene networks that is heterogeneous in terms of histology, mutations, and prognosis. Advances in high-throughput sequencing technologies, genome-wide association studies, and bioinformatics methods have revealed greater insights into the pathogenesis of complex diseases. Network biology-based approaches have been used to demonstrate the complex physical or functional interactions between molecules, which can point to potential drug targets. Objective: There is a need to better understand gene networks and protein-protein interactions (PPI) specific to urothelial carcinoma. Methods: We performed a multi-sample PPI study comparing two urothelial carcinoma architectures: papillary and non-papillary. We used a novel PPI analysis tool, Proteinarium, to identify clusters of patients with shared PPI networks in each architecture. This tool analyzes patients' PPI networks from any genomic data, including Next Generation Sequencing (NGS), and visualizes them in clusters based on network similarity. Results: We observed distinct networks for the papillary and non-papillary groups. Proteins unique to papillary urothelial carcinoma detected in two separate datasets included UBA52, RPS27A, UBR4, CUL1, UBE2K, and CDC5L. Proteins found in the non-papillary urothelial carcinoma-specific PPI network were GNB1, UBC, RHOA, FPR2, GNGT1, PIK3CA, PIK3CG, HSP90AA1, SLC11A1, CCT7, ARHGEF1, PAK1, PAK2, PSMA7, and TRIO. Conclusions: We identified distinct PPI networks specific to papillary and non-papillary urothelial carcinomas, representing unique molecular entities. Clinical Trial: N/A
Background: Despite widespread COVID-19 vaccination, breakthrough infections remain a public health concern, with transmission risks potentially linked to community behaviors and age-specific preventive practices. While mask-wearing and social distancing are well-established mitigation strategies, their adoption patterns across age groups, particularly among vaccinated individuals, are poorly understood. Objective: This study focuses on understanding breakthrough infections among vaccinated individuals, high-risk behaviors, and socioeconomic determinants of COVID-19 susceptibility to guide effective public health interventions. Methods: A 31-question voluntary survey was distributed using convenience sampling through the Qualtrics survey platform. A log-binomial regression model was used to estimate the relative risk (RR) measuring the association between testing COVID-19 positive and the different activities. Results: Among vaccinated individuals, those who tested positive were 11.103 times more likely to have gone to a restaurant or bar than those who tested negative (p=0.010). There was a significant difference in practicing social distancing and mask-wearing between age groups (p=0.015), with 100% of participants above 70 years old practicing them, followed by 96.8% of those aged 18-29 years. The study found lower infection rates in these two age groups compared with the other age groups. Moreover, the 18-29 age group demonstrated notable associations with practicing social distancing and mask-wearing in various settings. Conclusions: Compliance with social distancing and mask-wearing was higher among older and younger participants, and non-compliance was associated with a higher positivity rate. Activities such as going to a restaurant or bar were significantly associated with testing COVID-19 positive among vaccinated individuals.
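The relative risk reported above can be illustrated with a crude, unadjusted 2x2-table calculation. This is only a sketch of the quantity being estimated: the study itself fitted a log-binomial regression model, and the counts and the `relative_risk` function below are invented for illustration.

```python
# Hedged sketch: crude (unadjusted) relative risk from a 2x2 table.
# The study's RR of 11.103 came from a log-binomial regression model;
# this illustrates only the underlying quantity. Counts are invented.

def relative_risk(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """RR = risk among the exposed / risk among the unexposed."""
    risk_exposed = exposed_pos / (exposed_pos + exposed_neg)
    risk_unexposed = unexposed_pos / (unexposed_pos + unexposed_neg)
    return risk_exposed / risk_unexposed

# e.g. 50/100 restaurant-goers tested positive vs 25/100 who did not go
print(relative_risk(50, 50, 25, 75))  # -> 2.0
```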
Background: Internet hospitals and Internet + nursing services have recently emerged as new medical and nursing care models, respectively. Both use Internet-based information platforms and combine online applications with offline services to provide appropriate care. The rapid growth in the number of Internet hospitals in China has given rise to the Internet hospital + home nursing (IHHN) service model. Research on this new model is limited, and the effectiveness of its implementation remains to be clarified. Objective: We sought to examine the effectiveness of IHHN model implementation by investigating service workload, patients’ satisfaction, and nurses’ perceptions to provide a strategic reference for IHHN development. Methods: Data from patients who received IHHN services were collected from a hospital database. We analyzed the frequency of patients’ applications and the timeliness of IHHN using descriptive statistics and Chi-squared tests. We used a frequency table to assess the classification of patients’ illnesses, service items, and geographic distribution. The geographical distribution of patients was visualized using spatial mapping techniques. Finally, we compared the cost of transferring patients to hospital by ambulance for nursing services with that of IHHN using a simulation technique and t-test. Patients’ satisfaction in two time periods was compared using a Mann-Whitney U-test. Nurses’ perceptions regarding IHHN were examined using a questionnaire survey. Results: Medical records from 2,459 IHHN patients were examined. Most IHHN patients were over 60 years old (86.21%). The number of IHHN applications differed significantly between age groups (χ² = 29.86, P < 0.01). Oncological patients were the most common type of IHHN users (19.80%). Intravenous blood collection was the most common service item (66.07%). IHHN patients were mainly from six regions around the physical hospital (86.17%). All patients were served within 2 days of their appointment. The waiting time varied significantly with appointment time (χ² = 290.88, P < 0.01). The costs of routine and specialized IHHN services were lower than the cost of transporting patients by ambulance to hospital (t = 53.63, P < 0.001; t = 22.98, P < 0.001). However, there was no significant difference between the costs of IHHN long-distance services and transferring patients to hospital (t = 3.08, P = 0.77). Patient satisfaction was consistently high in both time periods, with no significant difference (Mann-Whitney U = 5090149.00, P = 0.38). Nurses’ perceptions were positive. Conclusions: IHHN appears to be an effective approach for providing convenient, accessible, and economical home nursing, addressing shortcomings of online medical and nursing services such as the difficulties that elderly and mobility-impaired patients face in accessing care. IHHN achieved high patient satisfaction and positive nurse perceptions. As population aging progresses, IHHN services may expand. However, it is necessary to optimize services to ensure the safety of patients and nurses, and to provide corresponding policy support.
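The chi-squared comparisons reported above rest on Pearson's statistic for a contingency table, which can be sketched in pure Python. The table values below are invented, and the p-value step (comparing the statistic against a chi-squared distribution) is omitted for brevity.

```python
# Hedged sketch: Pearson's chi-squared statistic for a contingency table,
# the test family behind the age-group comparisons above. Table values
# are invented; the p-value lookup is omitted.

def chi_squared_statistic(table):
    """Pearson chi-squared statistic for a list-of-rows contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# 2x2 example: applications by age group vs. service type
print(round(chi_squared_statistic([[30, 10], [20, 40]]), 3))  # -> 16.667
```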
Background: Increasing adherence to physical activity (PA) guidelines could prevent chronic disease morbidity and mortality, save considerable healthcare costs, and reduce health disparities. We previously established the efficacy and cost-effectiveness of a web-based PA intervention for Latina women, which increased PA, but few participants met PA guidelines and long-term maintenance was not examined. A new version with enhanced intervention features was found to outperform the original intervention in long-term guideline adherence. Objective: To determine the costs and cost-effectiveness of the enhanced multi-technology PA intervention vs. the original web-based intervention in increasing minutes of activity and adherence to guidelines. Methods: Latina adults (N=195) were randomly assigned to receive a Spanish-language, individually tailored, web-based PA intervention (Original) or the same intervention plus additional phone calls and interactive text messaging (Enhanced). PA was measured at baseline, 12 months (end of active intervention), and 24 months (end of tapered maintenance) using self-report (7-Day Physical Activity Recall Interview) and ActiGraph accelerometers. Costs were estimated from a payer perspective and included all features needed to deliver the intervention, including staff, materials, and technology. Cost-effectiveness was calculated as the cost per additional minute of PA added over the intervention, and as the incremental cost-effectiveness ratio per additional person meeting guidelines. Results: At 12 months, the costs of delivering the interventions were $16/person/month and $13/person/month in the Enhanced and Original arms, respectively. These costs fell to $14 and $8 at 24 months. At 12 months, each additional minute of self-reported activity in the Enhanced group cost $0.09 vs. $0.11 in Original ($0.19 vs. $0.16 for ActiGraph), with incremental costs of $0.05 per additional minute in Enhanced beyond Original. At the end of maintenance (24 months), costs per additional minute fell to $0.06 and $0.05 ($0.12 vs. $0.10 for ActiGraph), with incremental costs of $0.08 per additional minute in Enhanced ($0.20 for ActiGraph). Costs of meeting PA guidelines at 12 months were $705 vs. $503 in Enhanced vs. Original, and increased to $812 and $601 at 24 months. The ICER for meeting guidelines at 24 months was $1837 (95% CI $730.89-$2673.89) per additional person in the Enhanced vs. Original arm. Conclusions: As expected, the Enhanced intervention was more expensive but yielded better long-term maintenance of activity. Both conditions were low cost relative to other medical interventions. The Enhanced intervention may be preferable in high-risk populations, where more investment in meeting guidelines could yield greater cost savings. Clinical Trial: NCT03491592
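The ICER reported above is the incremental cost divided by the incremental effect of the new arm over the comparator. A minimal sketch; the per-arm figures and the `icer` helper below are invented for illustration, not the study's actual totals.

```python
# Hedged sketch: incremental cost-effectiveness ratio (ICER), the quantity
# reported above for Enhanced vs. Original. Inputs are invented.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra cost per additional unit of effect for the new intervention."""
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return (cost_new - cost_old) / delta_effect

# e.g. the new arm costs $40,000 more and helps 20 more people meet guidelines
print(icer(100_000, 60_000, 50, 30))  # -> 2000.0
```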
In the NHS, as in other health systems, it is generally agreed that difficulties in achieving digital transformation lie less in problems with the technical (hardware and software) aspects of digital solutions than in the “soft” system issues relating to institutional context, organisational complexity, and what are broadly described as “human factors”. A range of approaches has been explored within digital health research to better understand and address the complex series of factors that have given rise to the implementation gap. Focusing on the need to deploy digital health technologies to support the “shift left” agenda (from hospital to community, from sickness to prevention, and from analogue to digital), this paper explores how a systems engineering approach could provide the cross-disciplinary, holistic framework required to address what could be described as a very messy problem. Our framework combines methods such as Digital Twins to simulate complex care pathways with Living Labs that enable interdisciplinary collaboration, co-design, and iterative pilot testing. When combined, these methods could help align interests, integrate end-user needs, embed design for successful implementation, and iteratively adapt and improve digital health technologies, as well as offering an evaluation strategy that emphasizes safety, effectiveness, and cost-efficiency.
Background: Personas, fictional profiles representing user segments, play an important role in human-centered design, ensuring tools are tailored to the needs of users. Although public health organizations often develop information systems to promote population health, human-centered design methods and personas are generally underutilized in public health informatics projects. Objective: This study presents a novel, mixed-methods approach to developing data-driven personas for use in public health information system design, leveraging two statewide surveys conducted in Washington (WA) State. The aim is to produce realistic, representative, and actionable personas that reflect the diversity of a state population and support user-centered design in public health initiatives. Methods: Quantitative (cluster analysis) and qualitative (thematic review and quote extraction) methods were applied to two statewide survey datasets: 1) a statewide Knowledge, Attitudes, and Practices (KAP) survey (N=1,103) which employed random, address-based sampling, and 2) a subset of the KAP respondents (N=143) which included more targeted questions on opinions and preferences related to public health information systems. Characteristics examined included demographics, technological readiness, opinions about public health policies, and experience using online health tools. Results: K-prototype clustering resulted in five clusters. These five clusters were studied using both quantitative and qualitative analysis of key factors of the WA State population to build 13 personas. Each persona represents a different population demographic, varying levels of technological readiness and attitudes toward public health policies, and differing experiences with online health tools. Persona descriptions are further elucidated with a short profile and 2-3 quotes. 
Conclusions: This study offers a scalable and adaptable framework for persona development in public health, demonstrating how existing datasets can be transformed into effective design tools. Through a mixed-methods approach, personas that reflect the diverse needs, preferences, and behaviors of WA State residents were created. These personas can enhance the design, development, and evaluation of public health information systems by centering user experience. Persona development and the methods described here can be used in future public health informatics projects to assist in formative research, guide design and development, inform usability testing, and shape communication strategies. By bridging the gap between large-scale data and user-centered design, this approach provides a practical model for making public health technologies more aligned with community needs.
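The k-prototype clustering mentioned in the Results combines numeric and categorical survey fields in a single dissimilarity measure: squared Euclidean distance on numeric fields plus a weighted mismatch count on categorical ones. A minimal sketch of that measure; the respondent fields and the weight `gamma` are invented, and the study presumably used a library implementation rather than this hand-rolled version.

```python
# Hedged sketch: the mixed-type dissimilarity at the heart of k-prototypes
# clustering. Field values and gamma are invented for illustration.

def kprototypes_distance(a_num, a_cat, b_num, b_cat, gamma=1.0):
    """Dissimilarity between two mixed numeric/categorical records."""
    numeric_part = sum((x - y) ** 2 for x, y in zip(a_num, b_num))
    categorical_part = sum(x != y for x, y in zip(a_cat, b_cat))
    return numeric_part + gamma * categorical_part

# Respondent A: age 34, tech-readiness 4, urban, supports policy
# Respondent B: age 37, tech-readiness 2, rural, supports policy
print(kprototypes_distance([34, 4], ["urban", "yes"],
                           [37, 2], ["rural", "yes"]))  # -> 14.0
```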
Background: Social media groups (SMGs) enable individuals with rare diseases to connect with one another and access instant support and advice. Accelerated diagnosis of genetic neurodevelopmental disorders (GNDs) over the last decade has driven rapid expansion of gene-specific SMG membership. Limited literature exists regarding parental use of SMGs in the context of managing their child’s GND. Objective: The objective of our study was to determine and describe how parents use social media and the internet in the context of their child’s GND. Methods: We undertook a mixed methods study within a cohort of children with GNDs (GenROC). A total of 351 parents provided quantifiable survey responses regarding their use of social media. We also interviewed 17 parents to understand how they use SMGs and to explore their views on the data held within these groups. Results: Our survey found that 92% of parents use SMGs related to their child’s genetic disorder, and of these almost all are on Facebook. Most SMGs are closed, international, have more than 200 members, are specific to the GND, and are associated with a corresponding charity or foundation. Most parents could not recall what they had consented to when joining the group with respect to the use of their posted data. Most parents trust the data that are shared but acknowledged their anecdotal nature. Parents found the most valuable element of the SMG to be shared lived experience with other families. Interview data from 17 parents were coded and analysed thematically. Four main themes were identified: 1) SMGs for support and shared lived experience; 2) possible harms from participation in SMGs; 3) SMG composition, demographics, and dynamics; and 4) usefulness and use of data shared within the groups. Conclusions: This mixed methods study shows the evolving landscape of SMG use in neurodevelopmental disorders, highlights its benefits and downsides, and is widely applicable to all parent SMGs for specific niche medical conditions. Utilising the strength of these groups in a more collaborative approach in the future could prove useful to clinicians, families, and researchers alike.
Background: Firearm violence injury is captured via structured data codes that best reflect acute bodily injury. There are no structured data codes to describe secondary exposure (e.g., witnessing a shooting, being threatened with a firearm, or losing a loved one to gun violence or firearm injury), even though such exposure is associated with many long- and short-term health impacts. Clinical chart notes from Electronic Health Records (EHRs) often contain data not otherwise captured in structured data fields and can be categorized using natural language processing (NLP). Objective: The study protocol described here outlines the steps being taken to develop an NLP text classifier to determine exposure to firearm violence from ambulatory primary care and behavioral health EHR clinical progress notes for persons aged ≥5 years. Methods: We describe the process for arriving at a novel NLP lexicon, clinical progress note selection, the steps for text classifier training and selection, and the evaluation of model performance. We also describe the involvement of a stakeholder advisory committee in the development of the lexicon, and how the lexicon addresses biases inherent in NLP text classifiers. Results: We describe the development of an NLP lexicon with the input of a stakeholder advisory committee, the evaluation of the text classifier, and future plans for its utilization. Conclusions: This work describes the development of a novel NLP text classifier to identify exposure to firearm violence in ambulatory primary care and behavioral health clinical progress notes. Clinical Trial: This is an IRB-exempt, non-interventional data study and was not registered on clinicaltrials.gov
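A lexicon-driven approach of the kind described can be sketched as a simple keyword matcher over note text. The terms below are invented stand-ins for the study's stakeholder-developed lexicon, and a production classifier would add negation handling, context windows, and a trained model on top of such matching.

```python
# Hedged sketch: minimal lexicon-based flagging of firearm-violence exposure
# in free-text notes. LEXICON terms are invented stand-ins, not the study's
# actual lexicon; real pipelines also handle negation and context.
import re

LEXICON = ["shooting", "gunshot", "shot at", "firearm", "gun violence"]

def flag_note(note_text):
    """Return the lexicon terms found in a clinical note (case-insensitive)."""
    hits = []
    for term in LEXICON:
        if re.search(r"\b" + re.escape(term) + r"\b", note_text, re.IGNORECASE):
            hits.append(term)
    return hits

note = "Patient reports witnessing a shooting near home; no firearm in household."
print(flag_note(note))  # -> ['shooting', 'firearm']
```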
Background: Empty Nose Syndrome (ENS) is a debilitating condition that can occur after partial or total turbinectomy, leading to impaired nasal airflow sensation, breathing difficulties, and sleep disturbances. While ENS is often diagnosed using the ENS6Q questionnaire, its precise causes remain unclear. Some patients with significant turbinate loss develop minor ENS symptoms, whereas others experience severe symptoms after minor mucosal cauterization. Understanding the structural and aerodynamic factors contributing to ENS is crucial for improving diagnosis and prevention. Objective: This study aims to identify correlations between the ENS6Q score and key anatomical and aerodynamic parameters obtained from computational fluid dynamics (CFD) simulations in ENS patients. Methods: We reconstructed patient-specific nasal cavity models from computed tomography (CT) scans and performed CFD simulations. The analysis focused on five key parameters: the remaining turbinate volume, total mucosal surface area, nasal resistance, average cross-sectional area, and airflow imbalance between the two nasal cavities. These parameters were then compared to ENS6Q scores. Results: Preliminary findings suggest that a lower remaining turbinate volume and a reduced mucosal surface area are associated with higher ENS6Q scores. Additionally, significant airflow asymmetry between the two nasal cavities appears to correlate with more severe symptoms. Furthermore, our data indicate that individuals with larger nasal cavities and greater preoperative mucosal surface area tend to be more resilient to turbinectomy. For an equivalent amount of turbinate resection, patients with initially smaller nasal cavities, and thus less mucosal surface area, experience more severe ENS symptoms. Conclusions: By quantifying the anatomical and aerodynamic characteristics of ENS patients, this study provides new insights into the structural factors contributing to ENS severity.
These findings may help refine diagnostic criteria and guide surgical approaches to minimize ENS risk.
Background: Sickle Cell Anemia (SCA) represents a major health concern among the tribal population of India, with frequent acute pain crises significantly compromising the quality of life of affected individuals. As an inherited disorder, the condition has no definitive cure. Hydroxyurea remains the primary therapeutic option and is typically used for lifelong management, though it is associated with several side effects. Therefore, considering the urgent need for an accessible, safe, and effective alternative for the long-term management of the condition, the present study was planned to evaluate the potential of Ayurveda in managing pain crises in conjunction with conventional standard care. Objective: The study aims to evaluate the efficacy of Ayurvedic formulations in preventing acute pain crises in SCA and improving the quality of life of affected individuals. Methods: This is a randomized, active-controlled, open-label clinical trial. Patients diagnosed with SCA are enrolled according to the selection criteria. The study group is administered Ayurvedic interventions (Dadimadi Ghrita and Ayush-RP) along with standard care, while the control group receives standard care only. The intervention is given for a period of 8 months. Participants are evaluated on the 30th, 60th, 105th, 150th, 195th, and 240th days to assess changes in pain crisis frequency, haemoglobin levels, and quality of life. Results: The study was initiated on 5th September 2023. Of the 1518 screened participants, 1371 enrolled; as of 22nd January 2025, 791 participants have completed the study, 498 are continuing, and 82 have dropped out.
Conclusions: The study is expected to demonstrate the efficacy and safety of Ayurvedic interventions as an integrated approach in managing SCA by reduction in the frequency of pain crises, aiming to improve the overall quality of life for the patients. Clinical Trial: CTRI/2023/04/052141
Background: Menopause symptoms are common but often inadequately addressed by primary care clinicians due to limited time and resources for discussions. Mobile health applications can play a crucial role in symptom identification and management, yet many existing menopause-focused apps lack evidence-based content and medical expertise. Objective: To describe the design and methodology of a randomized controlled trial (RCT) protocol to evaluate the effectiveness of the emmii mobile app in improving menopause-related knowledge and shared decision-making compared with a traditional menopause education pamphlet. Methods: This RCT will recruit women aged 45–55 years with upcoming primary care appointments at Mayo Clinic within 3 weeks of the date of initial outreach. Eligible participants must be English-speaking, able to provide informed consent, and report a Menopause Rating Scale (MRS) score ≥5, which indicates that they are experiencing significant menopause-related symptoms. Eligible participants will be randomized to have access to either the emmii app (intervention, n=200) or an evidence-based menopause education pamphlet (control, n=200). The emmii app was developed with direct input from primary care clinicians certified by The Menopause Society and offers symptom tracking, protocol-based personalized treatment recommendations, and a discussion guide to support communication between patients and their primary care clinicians. Outcomes will include a post-visit survey sent to participants and their primary care clinicians within 3 days of the appointment, and assessment of patient knowledge, clinical treatment plans, and both patient and clinician experience. The study will also compare prescribing rates of hormonal and nonhormonal therapies for menopause symptoms between the emmii intervention and control groups to assess the app’s influence on treatment patterns. Data will be analyzed using descriptive statistics, Chi-square tests, Wilcoxon rank sum tests, and multivariable modeling. Results: Data collection is scheduled to begin in April 2025. Conclusions: This protocol outlines the design and methodology of an RCT that aims to assess the impact of the emmii app in facilitating menopause care through primary care clinician-patient communication and shared decision-making. Clinical Trial: NCT06919887
Background: Transitioning from preclinical to clinical training is a critical milestone of ‘becoming and being’ in a medical student's journey. Despite simulation-based learning, real-world clinical exposure remains indispensable in shaping professional identity. The clinical learning environment (CLE) is a complex interplay of social, cultural, and organizational factors that influence students' development as future healthcare professionals. Objective: This study explores medical students' reflections on their first clinical placement in General Practice (GP), aiming to understand their experiences, challenges, and the CLE's role in their learning and professional growth. Methods: We analysed reflections from fourth-year medical students following their initial GP placement. A qualitative descriptive (QD) approach grounded in naturalism was employed to describe our participants' transitioning encounters in clear, everyday language, ensuring their experiences were presented in their own words, without bias. Content thematic analysis was conducted to identify key themes related to their experiences. Results: Reflective writing offered a rich window into how students thought, felt, and acted during their GP rotations, revealing a fractured epistemological landscape. Students expected that 'knowing' from the classroom would translate to 'doing' in the clinic, but the reality brought emotional overwhelm and a sense of failure and shame, ultimately leading to self-doubt, avoidance, and withdrawal. Students recounted transformative experiences shaped by patient interactions, moments of uncertainty, and the shift in professional roles. While emotional and cognitive challenges were common, rare encounters with supportive mentorship played a crucial role in the students’ development. Conclusions: Identity dissonance reveals why reflection is pivotal in bridging the transition from preclinical to clinical practice, not merely as an assessment tool. 
We propose treating reflection not only as a personal insight but also as a communal one: a developmental artifact. Shared reflection circles, narrative-listening reflective sessions, and post-reflection dialogues can ease the journey of ‘becoming and being’.
Background: With the availability of newer therapies, the duration of therapy (DoT) shortens with each successive line of treatment in patients with multiple myeloma (MM) in Japan. Objective: To identify factors that shorten DoT in patients with MM using machine learning (ML) procedures applied to the Medical Data Vision (MDV) database. Methods: This nationwide, retrospective, observational, real-world cohort study was conducted using anonymized patient data from the MDV claims database from 2003-2022. Patients (≥18 years) with transplant-ineligible newly diagnosed MM (continued first-line [1L] therapy) or relapsed/refractory MM (continued 2L/3L therapy) were included. To identify important predictive factors, an explainable deep-learning model was created using 647 extracted variables (continuous, binary, and nominal categorical) from the MDV database, and the extracted data were used to train ML algorithms to build point-wise linear (PWL) models for predicting DoT. The predictive performance of the PWL model was compared with that of elastic net (regularized logistic regression) and XGBoost (boosted trees) models, assessed by area under the curve (AUC), and evaluated by 10-fold double cross-validation. A clustering analysis (k-means method) of 4,848 individual samples was performed to understand the relationship between each sample and DoT (3, 6, and 12 months). The characteristics of the clusters and the features of samples belonging to each cluster during and after treatment were studied using correlation analysis. Results: Overall, 2,762 patients (4,848 individual samples) were evaluated; mean age was
69.6 years; 52.5% were male. The AUC scores of the PWL model for predicting DoT at 3, 6, and 12 months were 0.61, 0.64, and 0.66, respectively. Based on the similarity of regression-model coefficients, samples were categorized into two clusters (A and B) at a DoT of 3 months and three clusters (A, B, and C) at 6 and 12 months. Cluster B vs cluster A (at 3 months) and cluster C vs clusters A and B (at 6 and 12 months) had a significantly (P<0.01) higher pre-treatment Charlson Comorbidity Index, as well as a lower median prediction probability. At 3 months in cluster B and at 6 and 12 months in cluster C, the use of immunomodulatory drugs (IMiDs) in MM treatment was significantly higher in patients who met the predicted DoT at each threshold versus those who did not. Additionally, use of aspirin was significantly higher in cluster B and cluster C at 3 and 6 months, respectively. Conclusions: Applying ML techniques using the PWL model effectively identified trends associated with the treatment and characteristics of Japanese patients with MM whose DoT was shortened. The study demonstrated that patients’ disease status and management-related factors, including use of IMiDs and thromboprophylaxis management, may be associated with DoT length.
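The clustering step this abstract describes (k-means over per-sample model coefficients) can be sketched in plain Python. This is an illustrative toy only: the synthetic coefficient vectors and the simple deterministic initialization are assumptions, not details from the MDV study.

```python
def kmeans(points, k, n_iter=100):
    """Cluster coefficient vectors with Lloyd's algorithm (pure-Python sketch)."""
    def dist2(p, q):
        # squared Euclidean distance between two vectors
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # deterministic init: k points evenly spaced through the dataset
    centroids = [points[i * (len(points) - 1) // max(k - 1, 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        # move each centroid to the mean of its assigned samples
        new = []
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            new.append([sum(c) / len(members) for c in zip(*members)] if members else centroids[j])
        if new == centroids:  # converged
            break
        centroids = new
    return labels

# synthetic coefficient vectors: two well-separated groups
import random
random.seed(1)
group_a = [[random.gauss(0.0, 0.1) for _ in range(4)] for _ in range(50)]
group_b = [[random.gauss(2.0, 0.1) for _ in range(4)] for _ in range(50)]
labels = kmeans(group_a + group_b, k=2)
```

In the study itself, cluster characteristics (e.g., the Charlson Comorbidity Index) would then be compared across the resulting groups.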
Background: Stress is not just commonly discussed but an integral part of modern life, significantly affecting mental and physical health. While significant advancements have been made in measuring physical fitness through wearable devices, the detection and measurement of mental stress remain in their early stages. Objective: The objective of this paper is to review recent studies of wearable-based stress detection in naturalistic settings, with a specific focus on characterizing machine learning frameworks inspired by the model card approach. Methods: This review was conducted using the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist. A total of 319 articles were identified through searches in the PubMed, MEDLINE, ScienceDirect, IEEE, ACM, and Web of Science databases. Studies were considered eligible if they collected data from healthy adults in naturalistic settings using wearable devices and employed machine learning models for stress detection. Results: A total of 34 articles met the eligibility criteria, including 11 conference papers, 22 journal articles, and 1 preprint published between 2017 and 2024. From these studies, we analyzed key machine learning modeling decisions such as problem formulation, ground truth determination, and machine learning algorithms. Additionally, we examined the major contributions of each study, focusing on the challenges they addressed and the solutions they proposed. Conclusions: This scoping review highlights recent trends in machine learning models for stress detection and measurement using wearable signals. It underscores the need for improved standardization in datasets, problem formulation, and reporting practices, as well as the importance of addressing critical challenges associated with data collection in real-world settings. 
We hope this review will support and strengthen ongoing research efforts, promote knowledge sharing, and foster collaboration among researchers, ultimately advancing the field as a community. Clinical Trial: NA
Background: The use of smartphones and interest in mobile health (mHealth) have grown in recent years, with physical activity apps demonstrating potential to facilitate behaviour change. However, there remains limited understanding of what specifically motivates children to engage meaningfully with these tools. Objective: This qualitative formative study aimed to determine children's perceptions of a bespoke physical activity app (Bestlife), exploring the app’s appeal, functionality, and potential to support behaviour change among children aged 8–13. Methods: Citizen scientists (n=68) were asked to download and explore the Bestlife app 1-2 weeks before the research session, completing a booklet to capture their and their families' experiences. Thirteen focus groups were then conducted across five schools to explore children's perceptions of the app in depth. Qualitative data were analysed inductively and deductively: an initial inductive analysis identified emerging themes, which were then mapped onto a framework of feasibility, usability, acceptability, and behaviour change. Results: The study identified key factors influencing the feasibility, acceptability, usability, and behaviour change potential of the Bestlife app among children. Feasibility was hindered by the parental email requirement during registration, which limited autonomy for older children. Acceptability was driven by gamified features, proportional rewards, and avatar customisation, though participants requested more personalisation to promote cultural inclusion and dynamic updates linked to seasonal themes. Usability findings showed the interface was intuitive, with features promoting social interaction and competition enhancing engagement. However, younger users experienced navigational challenges, underscoring the need for clearer guidance. 
The app effectively incorporated behaviour change techniques, including goal-setting, self-monitoring, and social collaboration, but required adjustments, such as reducing the frequency of emotional tracking prompts. Conclusions: The Bestlife app shows potential as an mHealth intervention for promoting physical activity in children. Enhancing cultural representation, simplifying onboarding processes, and refining engagement strategies could strengthen both uptake and sustained use. These findings highlight the importance of integrating user feedback into the iterative design process to optimise digital health tools for young populations. Further longitudinal research is recommended to evaluate longer-term engagement with the app, its impact on physical activity levels, and behaviour change sustainability.
Background: Long-term disease status and susceptibility to disease recurrence lead to an increasing disease burden in patients with non-Hodgkin's lymphoma (NHL). Although the adverse influence of frailty on physical symptoms has been repeatedly reported, little attention has been paid to NHL patients, and the very limited existing studies are mostly cross-sectional in nature. Our protocol provides detailed methods to explore the trajectory types of and risk factors for frailty in NHL patients, offering a panorama of how frailty affects NHL patients over time. Objective: The research aims to explore frailty trajectories and their influencing factors. It could offer healthcare professionals dynamic insights into frailty progression and facilitate the early identification of and intervention in high-risk populations through systematic screening of contributing factors, thereby preventing the onset of frailty. Methods: This longitudinal mixed-methods study will recruit 240 patients newly diagnosed with NHL from five large public hospitals in China. Quantitative data will be collected at three time points: before chemotherapy, during the third cycle of chemotherapy, and at the end of chemotherapy. We will use validated questionnaires (e.g., the Tilburg Frailty Indicator) to gather information on sociodemographic characteristics, frailty, cognition, physical condition, health literacy, anxiety, and nutrition. Qualitative data will be collected via semi-structured interviews and observations at the end of chemotherapy. The growth mixture model and logistic regression analysis will be used to analyse the quantitative data, and the diachronic analysis method and the directed content analysis method will be used to analyse the qualitative data. Both types of data will be analysed in parallel and separately. Finally, we will integrate the data sets to identify areas of confirmation, complementation, or discordance. 
Results: The research protocol and informed consent form were approved by the Medical Ethics Committee of the First Affiliated Hospital of Henan University of Science and Technology (2024-03-K171). Participant recruitment began in September 2024. As of April 2025, data collection for T0 (prechemotherapy) had been completed, with a total of 270 patients enrolled in the study. At T1 (the third cycle of chemotherapy), follow-up assessments have been conducted for 157 participants. To date, 8 patients have been lost to follow-up: 4 deaths, 2 refusals to continue participation, and 2 transfers to other medical facilities. Additionally, data collection at T2 (end of chemotherapy) has been finalized for 78 patients. Data analysis is scheduled to begin in October 2025, with results anticipated in January 2026. Conclusions: As a pilot trial, the research could offer healthcare professionals dynamic insights into frailty progression and facilitate the early identification of and intervention in high-risk populations through systematic screening of contributing factors, thereby preventing the onset of frailty. Clinical Trial: ChiCTR2500097921
Background: Regular physical activity is a crucial modifiable lifestyle factor that reduces the risk of recurrent events after stroke or transient ischemic attack (TIA). Mobile health (mHealth) has emerged as a promising approach for providing long-term support for physical activity. However, little is known about how individuals post-stroke or TIA adhere to and engage with mHealth interventions. Objective: This study aimed to: (1) describe adherence to supervised sessions in an mHealth intervention targeting physical activity, (2) describe engagement with self-managed mHealth support for physical activity during and after the intervention, (3) compare characteristics of participants with high and low adherence and engagement, and (4) examine whether high adherence and engagement were associated with maintained physical activity after completion of the intervention and at a 12-month follow-up. Methods: In this study, a secondary analysis of data from the experimental arm of a feasibility randomized controlled trial was conducted. The experimental group received a 6-month mHealth version of the i-REBOUND program, which included supervised mHealth support for physical activity and behavior change, followed by a 6-month post-intervention period with access to self-managed mHealth support. Adherence outcomes included attendance at supervised exercise and counseling sessions, while engagement outcomes measured weekly interactions with self-managed mHealth support during and after the intervention. Participants’ level of physical activity (steps per day) was measured using accelerometers at baseline and at 6 and 12 months post-baseline. Logistic regression analysis examined the associations between high adherence and engagement during the intervention and post-intervention period and maintained physical activity (i.e., >7000 steps/day) across the 12-month study period. 
Results: Of the 57 participants enrolled (67% female, average age 71 years), 96% had mild stroke symptoms, and 51 (89%) completed the intervention. Adherence to supervised mHealth support was high (supervised exercise sessions: 79%, counseling: 98%), while engagement with self-managed mHealth support was high during the intervention (83%) but declined post-intervention (38%). A larger proportion of females (77%) demonstrated high adherence to the intervention compared to males (23%, P = .043). High adherence (≥80%) during the intervention was associated with maintained physical activity between baseline and the 6-month follow-up (odds ratio: 5.50, P = .015), while high engagement (≥80%) during the post-intervention period was associated with maintained physical activity between the 6- and 12-month follow-ups (odds ratio: 4.12, P = .043). Conclusions: Supervised mHealth support was well received with high adherence, while modules for self-management of physical activity faced challenges in engaging participants. Future research should focus on co-creating self-managed mHealth support with individuals post-stroke or TIA to better understand and address their support needs for long-term engagement in physical activity. Clinical Trial: NCT0511195
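As a side note on the effect sizes this abstract reports: with a single binary predictor, the odds ratio from a logistic regression reduces to the cross-product ratio of a 2x2 table. A minimal sketch, using hypothetical counts (not the study's data) chosen so the result happens to match an OR of 5.50:

```python
def odds_ratio(exp_yes, exp_no, unexp_yes, unexp_no):
    """Odds of the outcome (e.g., maintaining >7000 steps/day) in the exposed
    group (e.g., >=80% adherence) divided by the odds in the unexposed group."""
    return (exp_yes / exp_no) / (unexp_yes / unexp_no)

# hypothetical 2x2 counts: 20 of 30 high-adherence participants maintained
# activity, versus 8 of 30 low-adherence participants
or_high_adherence = odds_ratio(20, 10, 8, 22)
```

In a fitted logistic regression with one binary covariate, exp(coefficient) equals this same quantity, which is why such models are commonly summarized as odds ratios.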
Background: Older adults with sarcopenia often engage in therapeutic exercises to improve muscle thickness, balance confidence, activities of daily living (ADL), and quality of life (QOL). However, conventional face-to-face group exercise programs are typically standardized and may not adequately address individual needs, limiting their effectiveness. Objective: This study aimed to evaluate the effects of a Mixed Reality–based Physical Therapy platform (Mr.PT) compared with Conventional Physical Activity (CPA) programs on quadriceps muscle thickness, balance confidence, independence in ADLs, and quality of life in older adults with sarcopenia. Methods: In this preliminary randomized controlled trial, 30 older adults with sarcopenia were randomly assigned to either the Mr.PT group or the CPA group. Both groups participated in 30-minute sessions, five times per week, for four weeks. Primary outcomes included quadriceps muscle thickness. Secondary outcomes included the Activities-specific Balance Confidence (ABC) Scale, Katz Index of Independence in Activities of Daily Living (KIADL), and the 12-Item Short-Form Survey (SF-12). Outcomes were assessed at baseline and after the intervention. Analysis of variance (ANOVA) was used to assess time and group differences. Results: ANOVA demonstrated significant time effects on muscle thickness, ABC scores, KIADL scores, and SF-12 scores (p < 0.05). 
Post hoc analyses revealed that participants in the Mr.PT group achieved greater improvements in quadriceps muscle thickness and SF-12 scores compared to the CPA group. Conclusions: This preliminary study suggests that a Mixed Reality–based physical therapy platform may offer enhanced benefits for improving muscle thickness and quality of life among older adults with sarcopenia. Further large-scale trials are warranted to confirm these findings and to optimize intervention protocols. Clinical Trial: KCT0010241
Background: Helicobacter pylori commonly colonizes the mucus layer of the stomach and can lead to peptic ulcer disease and chronic gastritis. Toxins, such as the cytotoxin-associated gene A (CagA) product, are the primary virulence factors of this bacterium. Artemisia annua extract has shown a variety of biological activities against Gram-negative bacteria. Objective: Helicobacter pylori is a Gram-negative, spiral-shaped, microaerophilic bacterium. About half of the world's population is infected with it, making it a major source of illness and mortality and a burden on health care systems throughout the globe. Eradication of H. pylori requires complex treatment with a variety of antibiotics and stomach acid inhibitors, which frequently results in side effects such as nausea, drug resistance, and recurrence. Because of their wide range of applications and low toxicity, natural compounds are becoming increasingly popular. Methods: The in vitro effectiveness of Artemisia annua against H. pylori was examined using the broth microdilution and agar diffusion techniques. A molecular technique was used to detect the cagA gene, which is associated with peptic ulcers. Results: The different concentrations of A. annua used in this study inhibited the growth of H. pylori more effectively than the positive control, ampicillin. Conclusions: These results indicate that the use of different concentrations of A. annua extract was significantly more effective against H. pylori.
Background: Electronic Health Records (EHRs), including datasets like MIMIC-IV, often lack explicit links between medications and diagnoses, complicating clinical decision-making and research efforts. Even when such links are present, diagnosis lists can be incomplete or inaccurate, particularly during early patient visits when diagnostic uncertainty is high. Discharge summaries, documented at the end of patient care, may offer more detailed explanations of patient visits, potentially aiding in inferring the most likely accurate diagnoses for prescribed medications, especially if we can exploit Large Language Models (LLMs). LLMs have shown promise in processing unstructured medical text, but systematic evaluations are necessary to determine their effectiveness in extracting meaningful medication-diagnosis relationships. Objective: This study explores the use of LLMs to predict implicitly mentioned diagnoses from clinical notes and link them to corresponding medications. We evaluate their effectiveness and investigate strategies to improve prediction performance. Specifically, we examine two research questions: (1) Does majority voting across diverse LLM configurations enhance diagnostic prediction accuracy compared to the best single-model configuration? (2) How sensitive is the diagnostic prediction accuracy of majority voting to the LLM's hyperparameters, including temperature, top-p, and clinical note summary length? Methods: A new dataset of 240 expert-annotated medication-diagnosis pairs from 20 MIMIC-IV clinical notes was created to evaluate predictive accuracy, as no such dataset previously existed. We hypothesized that combining deterministic, balanced, and exploratory configurations could enhance prediction performance. Key hyperparameters (temperature, top-p, and summary length) were systematically varied. Two levels of summarization, short and long, were tested to assess the impact of context length. 
Using GPT-3.5 Turbo, 18 configurations were generated, and random subsets of five were selected, resulting in 8,568 test cases. Majority voting was applied to select the most frequent diagnosis. Performance was evaluated using accuracy scores, comparing majority voting with the best single-model configuration and analyzing the hyperparameters that contributed to the highest accuracy. Results: Majority voting achieved 75% accuracy, outperforming the best single configuration (66%). No single parameter setting consistently excelled; instead, combining diverse configurations aligned with deterministic, balanced, and exploratory strategies yielded better performance. Shorter summaries (2000 tokens) generally improved accuracy. Longer summaries (4000 tokens) were effective only with deterministic settings. Conclusions: Majority voting across LLM configurations enhances diagnostic prediction accuracy in EHRs, demonstrating the potential of ensemble methods for improving medication-diagnosis associations. By leveraging diverse configurations, this approach mitigates model biases and improves robustness in predictive analytics. Future work should explore scalability with larger datasets, additional LLM architectures, and broader clinical applications to refine its effectiveness in real-world settings.
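The ensemble step this abstract describes, taking the most frequent diagnosis across several LLM configurations, can be sketched in a few lines. The example predictions below are hypothetical, not taken from the study's dataset:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the diagnosis predicted most often across configurations.
    Counter.most_common breaks ties by first insertion order (Python 3.7+),
    so the earliest-seen diagnosis wins a tie."""
    return Counter(predictions).most_common(1)[0][0]

# hypothetical outputs from five sampled LLM configurations for one medication
preds = ["type 2 diabetes", "hypertension", "type 2 diabetes",
         "type 2 diabetes", "hyperlipidemia"]
consensus = majority_vote(preds)
```

In practice, normalizing the free-text diagnoses (casing, synonyms) before counting matters, since an LLM may phrase the same diagnosis differently across configurations.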
Background: Across populations, risky drinking has been demonstrated to increase HIV risk behaviors. This is of special concern for sexually minoritized cisgender men and transgender (SMMT) young adults (aged 18-34), who report greater incidence of hazardous drinking (as defined by AUDIT-C criteria) and HIV compared to their heterosexual and/or cisgender peers. Objective: This study examined alcohol perceptions, patterns of use, and the role that anti-LGBTQ+ (lesbian, gay, bisexual, transgender, queer) policies and discrimination played in alcohol risk behaviors for SMMT individuals. Results were used to inform development of an alcohol reduction intervention for this population. Methods: A qualitative study was conducted with data collected via four focus groups and one in-depth interview among young adult SMMT individuals in the United States from April to June 2023 (n=22). Participants were grouped according to SMMT identity: cisgender men, transgender men, transgender women, and nonbinary individuals. Transcripts were analyzed using codebook thematic analysis. Results: Alcohol use was described as a way to navigate belonging, social connection, and identity expression within LGBTQ+ contexts. Alcohol was viewed as a mainstay of LGBTQ+ spaces, with many using it as a social lubricant and coping mechanism for LGBTQ+ related stress, as well as for relaxation and having fun. Drinking intensity was often tied to an individual’s comfort with their evolving SMMT identity, with drinking being higher in earlier stages of exploration. The consequences of drinking discussed by participants included impaired decision-making and negative effects on mental and physical health. Anti-LGBTQ+ laws and policies were seen as contributing to the further stigmatization of SMMT individuals, with hazardous alcohol use serving as a means of escape and coping. 
Conclusions: Alcohol use among SMMT is an important aspect of negotiating identity within different social settings and coping with stigma. Findings have valuable implications for tailoring alcohol reduction interventions for SMMT young adults as they encounter stressors in real-time.
Background: Health information exchange (HIE) supports clinical decision-making in emergency medicine settings. Despite evidence and policies that encourage adoption of HIE, usage by clinicians is limited. Moreover, few studies examine usage of HIE years after adoption by hospitals or clinics. Objective: To examine perceptions and usage of a mature, operational HIE system by emergency department (ED) clinicians years after its implementation. Methods: We interviewed 21 clinicians in various roles (e.g., attending physician, nurse practitioner) across multiple health systems that participate in a statewide HIE network. We asked questions about their use of the HIE system and the factors that facilitate or inhibit use. Analysis of interview transcripts was guided by a theoretical framework derived from information systems theories describing individual perceptions of and usage behavior towards HIE systems. Results: A total of 26 factors across 6 domains were identified by respondents. All respondents recognized the value of HIE for medical decision-making in the ED, and access to information via the HIE was preferred over traditional methods of calling other facilities or waiting for faxed records. Ease of use, particularly single sign-on (SSO) functionality, was recognized as a key facilitator of routine use, enabling clinician access via a single click from their EHR directly into the patient’s HIE record. Access to integrated data and advanced search features supported clinical decision-making. Limited training and poor system usability were identified as barriers to use. Conclusions: Achieving widespread adoption and use of HIE systems globally will require a focused effort to address multiple individual perception and behavioral factors. Researchers, HIE organizational leaders, and policymakers alike should leverage these factors to achieve the goals of HIE and interoperability. Clinical Trial: N/A
Background: Vaccine hesitancy hinders the management of preventable illnesses. Currently, there are gaps in public health research on vaccine hesitancy among Muslim Americans. Objective: We aimed to understand the extent of vaccine hesitancy among Muslim Americans and the factors underlying health care decision-making regarding vaccination. Methods: Participants were recruited through Facebook group posts. Seventy-three participants completed the online Qualtrics survey, of whom sixty-three met the inclusion criteria. Participants’ responses were collapsed into the following belief scores: political leaning, religiosity, trust in public institutions, and vaccine hesitancy. Results: Participants who were older, more highly educated, employed, unmarried, or identified with the Sunni sect were less vaccine hesitant. More than a third of participants (36.5%) were more likely to accept a vaccine if it had no reported safety issues. Participants were more likely to be hesitant about vaccines with safety concerns or poor efficacy. Conclusions: Results both align with and contradict previous studies conducted in Muslim-majority and religiously heterogeneous countries. This study found an association between Islamic sect and attitudes towards vaccines. Follow-up studies are necessary to gauge a larger, more diverse population of Muslim Americans. Based on this study’s findings, healthcare professionals can better promote vaccines by addressing their patients’ trust in public institutions.
Background: This paper re-imagines a world of abundance in the treatment of chronic diseases. As a proof of concept, it investigates the application of local Large Language Models (local-LLMs) based on Graph-based Retrieval-Augmented Generation (GraphRAG) for managing Gestational Diabetes Mellitus (GDM). Objective: The research seeks new insights into optimizing GDM treatment through a knowledge graph architecture, contributing to a deeper understanding of how artificial intelligence can extend medical expertise to underserved populations globally. Methods: The study employs an agile, prototyping approach utilizing GraphRAG to enhance knowledge graphs by integrating retrieval-based and generative artificial intelligence techniques. Training data were drawn from academic papers published between January 2000 and May 2024, retrieved using the Semantic Scholar API, and analyzed by mapping complex associations within GDM management to create a comprehensive knowledge graph architecture. Results: Empirical results indicate that the GraphRAG-based proof of concept outperforms baseline LLMs such as ChatGPT, Claude, and BioMistral across key evaluation metrics. Specifically, GraphRAG achieves superior accuracy with BLEU scores of 0.99, Jaccard similarity of 0.98, and BERT scores of 0.98, offering significant implications for personalized medical insights that enhance diagnostic accuracy and treatment efficacy. Conclusions: This research offers a novel perspective on applying GraphRAG-enabled LLM technologies to GDM management, providing valuable insights that extend current understanding of AI applications in healthcare. The study’s findings contribute to advancing the feasibility of GenAI for proactive GDM treatment and extending medical expertise to underserved populations globally. Clinical Trial: Not Applicable. 
Since the primary research objective was to establish the feasibility of a GraphRAG local-LLM PoC, neither human subjects nor actual patient datasets were used.
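The surface-overlap metrics reported above (BLEU, Jaccard similarity) both compare a generated answer against a reference at the token level. As a minimal sketch of the Jaccard component only (the reference/candidate pair below is invented for illustration, not data from the study):

```python
def jaccard_similarity(reference: str, candidate: str) -> float:
    """Jaccard similarity between the token sets of two answers."""
    ref_tokens = set(reference.lower().split())
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens and not cand_tokens:
        return 1.0
    return len(ref_tokens & cand_tokens) / len(ref_tokens | cand_tokens)

# Toy GDM-style answer pair: 6 shared tokens, 9 tokens in the union -> 2/3
score = jaccard_similarity(
    "monitor fasting glucose daily during pregnancy",
    "monitor fasting glucose daily during pregnancy and after meals",
)
```

A set-based score like this ignores word order, which is why it is usually reported alongside order-sensitive metrics such as BLEU.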
Background: Heterogeneity in outcome selection and measurement methods has been noted in previous studies examining physical restraint minimization in adult intensive care units (ICUs). This variability undermines meaningful evidence synthesis, including systematic reviews and meta-analyses, thereby limiting the development of evidence-based clinical approaches to minimize physical restraint use and improve patient outcomes. Objective: This protocol outlines the methods for developing an international consensus on priority core outcomes, along with standardized measurement approaches for these outcomes, in studies focused on minimizing physical restraint use in adult ICUs. Methods: We will follow the guidelines outlined in the Core Outcome Measures in Effectiveness Trials Handbook. At the outset, representatives from key stakeholder groups—including former ICU survivors/family members, ICU clinicians, and researchers—are involved in designing this protocol to enhance its relevance and applicability. Drawing on our previous work, including a scoping review of studies on physical restraint minimization that we will update, and interviews with family members about restraint use and minimization in the ICU, we will compile a comprehensive list of potential outcomes for stakeholders to use in the two-round Delphi process. In the first round, stakeholders will rank the identified outcomes using the Grading of Recommendations Assessment, Development and Evaluations (GRADE) scale. In the second round, they will be provided with a summary of the results from Round 1 for rescoring and further refinement. A series of consensus meetings using the modified nominal group technique will be held with representatives from our stakeholder groups to finalize the core outcome set, followed by another meeting to establish standardized measurement methods for the agreed-upon outcomes. 
Results: We are in the process of finalizing the REB application for this protocol, which will be submitted shortly. The anticipated project start date is June/July 2025, with completion expected by October 2026. Conclusions: This study will be the first to establish both a core outcome set and a core measurement set for minimizing physical restraint use in adult ICUs. By standardizing outcomes and measurement methods, it will enhance comparability across future research and contribute to improved patient care in ICUs. Clinical Trial: This protocol is registered in the Core Outcome Measures in Effectiveness Trials (COMET) Initiative database: https://cometinitiative.org/Studies/Details/3368
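The first-round Delphi summarization described in the protocol can be sketched as follows. The GRADE 1-9 importance scale is from the protocol; the "consensus in" rule used here (at least 70% of stakeholders rating 7-9 and fewer than 15% rating 1-3) is a common COMET-style convention and is an assumption of this sketch, not a criterion stated by the authors:

```python
def summarize_outcome(ratings):
    """Share of 'critical' (7-9) and 'unimportant' (1-3) ratings for one outcome."""
    n = len(ratings)
    critical = sum(1 for r in ratings if 7 <= r <= 9) / n
    unimportant = sum(1 for r in ratings if 1 <= r <= 3) / n
    return critical, unimportant

def consensus_in(ratings):
    """Assumed rule: >=70% rate the outcome 7-9 AND <15% rate it 1-3."""
    critical, unimportant = summarize_outcome(ratings)
    return critical >= 0.70 and unimportant < 0.15

# Toy Round-1 ratings for one candidate outcome (10 stakeholders):
votes = [9, 8, 7, 9, 6, 8, 7, 9, 8, 2]
reached = consensus_in(votes)
```

Outcomes that fail this rule would be carried into Round 2 with the group summary attached for rescoring.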
Background: The burden of paralytic ileus (PI) in the intensive care unit (ICU) remains high, and the Charlson Comorbidity Index (CCI) is strongly associated with the prognosis of several acute and chronic diseases. However, there is no literature on the clinical value of CCI as a prognostic assessment tool for critically ill patients with PI in the ICU. Objective: The aim of this study was to investigate the relationship between CCI and clinical prognosis in critically ill patients with PI. Methods: In this study, data from the Medical Information Mart for Intensive Care IV (MIMIC-IV, version 2.2) database were used to determine the optimal cutoff value of CCI for predicting mortality in patients with PI using receiver operating characteristic (ROC) curves, and the relationship between CCI and mortality was evaluated using Cox regression and restricted cubic spline analysis. A machine learning (ML) prediction model was then constructed to predict hospital mortality by combining CCI and other clinical characteristics. Results: The study included 863 patients with PI (median age 65.4 years [interquartile range 54.6-75.5 years], 66.6% male). The ROC curve identified an optimal cut-off value of 4.5 for CCI. Multivariate Cox regression analysis showed that, compared to the lowest CCI quartile, elevated CCI levels were significantly associated with hospital (Q4: HR 2.447, 95% CI 1.210-4.951), 28-day (Q4: HR 3.891, 95% CI 1.956-7.740), and 90-day (Q4: HR 3.994, 95% CI 2.224-7.173) all-cause mortality; however, the association with ICU mortality (Q4: HR 1.892, 95% CI 0.653-5.480) was weak. Among the 11 ML models, the LightGBM model performed best, with internal validation results showing an area under the curve of 0.811, a G-mean of 0.670, and an F1 score of 0.895. 
Conclusions: The CCI is an important predictor of hospital, 28-day, and 90-day all-cause mortality in critically ill patients with PI, and the optimal threshold is 4.5. ML models including the CCI show high accuracy in predicting hospital mortality, and the CCI occupies an important position in the model. This suggests that the CCI helps to identify high-risk patients, supports clinical decision making, and improves prognosis. Clinical Trial: NO
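An "optimal" ROC cutoff such as the CCI threshold of 4.5 is typically the point maximizing the Youden index (sensitivity + specificity − 1); a reported value like 4.5 usually corresponds to the midpoint between adjacent observed scores. A minimal sketch of the Youden search on invented data (this returns the observed score, not the midpoint, and is not the authors' code):

```python
def youden_cutoff(scores, labels):
    """Threshold (classify positive when score >= t) maximizing sens + spec - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Toy CCI values with mortality labels (1 = died), invented for illustration:
cci = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]
died = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
cutoff = youden_cutoff(cci, died)
```

On this toy data, classifying at CCI ≥ 5 gives the best sensitivity/specificity trade-off, analogous to how the study's 4.5 cutoff separates CCI ≤ 4 from CCI ≥ 5.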
Background: Mobile health (mHealth) technologies, including smartphone health apps and wearable trackers, are increasingly used to promote health behaviors. However, their impact on physical and mental well-being remains complex, with both benefits and potential unintended negative consequences. Objective: This study aimed to examine the relationship between mHealth use (i.e., health app, wearable tracker) and two health outcomes (body mass index (BMI) and emotional distress), as well as the mediating roles of healthy eating, sleep, and physical activity based on a representative sample. Methods: We analyzed data from a nationally representative sample of U.S. adults aged 33–43 (N = 1,931). Chi-square tests and one-way ANOVA were used to compare demographic differences between mHealth users and non-users. A path model examined the relationship between mHealth use (i.e., smartphone health apps, wearable trackers) and health outcomes (i.e., BMI, emotional distress), with lifestyle factors (i.e., healthy eating, physical activity, sleep) as mediators. Mediation analyses tested indirect effects through these lifestyle factors. Results: mHealth users are more likely to be female, married, have higher levels of education and income, and have health insurance. The primary use of mHealth is the management of physical activity. The use of health apps positively correlates with the use of wearable trackers (β = .408, p < .001). Surprisingly, health app use predicts greater BMI (β = .058, p = .019). However, the use of health apps, as well as wearable trackers, predicts more healthy eating (βhealth_app = .097, p < .001; βwearable = .081, p < .001) and physical activity (βhealth_app = .125, p < .001; βwearable = .105, p < .001), both of which link to lower BMI (βhealthy_eating = -.075, p = .001; βphysical_activity = -.147, p < .001). 
For emotional distress, wearable tracker use directly predicts lower emotional distress (β = -.089, p < .001); a path also mediated by healthy eating (β = -.120, p < .001) and physical activity (β = -.077, p = .001). Although health app use does not predict emotional distress directly, the mediated path via healthy eating and physical activity remains significant. Notably, the use of wearable trackers, not that of health apps, connects with reduced sleep hours (β = -.077, p = .001), which in turn correlates with higher BMI (β = -.109, p < .001) and greater emotional distress (β = -.137, p < .001). Conclusions: mHealth technologies can promote healthier behaviors, but their impact depends on users taking the initiative toward sustained lifestyle changes. While wearable trackers may aid in mental well-being, their association with reduced sleep warrants further investigation.
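The mediated paths above follow product-of-coefficients logic: the indirect effect of app use on BMI through a mediator is the path coefficient into the mediator times the mediator's coefficient into BMI. A worked sketch using the standardized coefficients reported in the Results (significance testing of the products, e.g. by bootstrap, is omitted here):

```python
# Standardized path coefficients from the abstract's Results section:
a_eating = 0.097      # health app use -> healthy eating
b_eating = -0.075     # healthy eating -> BMI
a_activity = 0.125    # health app use -> physical activity
b_activity = -0.147   # physical activity -> BMI

# Product-of-coefficients indirect effects of app use on BMI:
indirect_eating = a_eating * b_eating        # ~ -0.0073
indirect_activity = a_activity * b_activity  # ~ -0.0184
total_indirect = indirect_eating + indirect_activity
```

Both indirect effects are negative, which is how app use can predict lower BMI through lifestyle mediators even while its direct association with BMI is positive.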
Background: Social media platforms are increasingly used for both sharing and seeking of health-related information online. TikTok in particular has become one of the most widely used social networking platforms over the last few years. One health-related topic trending on TikTok recently is Attention Deficit/Hyperactivity Disorder (ADHD). However, the accuracy of health-related information on TikTok remains a significant concern. Misleading information on ADHD on TikTok can increase stigmatization and lead to false “self-diagnosis”, pathologizing normal behavior and driving overuse of care. Objective: This study aims to investigate the occurrence of misleading information in TikTok videos about ADHD and to explore the extent of potential self-diagnosis among viewers based on an in-depth analysis of the video comments. Methods: We scraped data from the 124 most liked ADHD-related TikTok videos uploaded between March 2022 and November 2023 using commercial scraping software. We categorised videos based on the usefulness of their content as "misleading", "personal experience" or "useful" and used the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V) to evaluate video quality regarding understandability and actionability.
By purposive sampling we selected six videos and analyzed the content of 100 randomly selected user comments per video to understand the extent of self-identification with ADHD-behaviour among the viewers.
All qualitative analyses were carried out independently by at least two authors; disagreements were resolved by discussion. Using SPSS 27, we calculated the interrater reliability between the raters and descriptive statistics for video and creator characteristics. We used one-way ANOVA to compare the usefulness of the videos. Results: We assessed 51% of the videos as misleading, 30% as personal experience, and 19% as useful. The PEMAT-A/V scores for understandability and actionability were 79.5% and 5.1%, respectively, with the highest scores observed for useful videos (92.3% for understandability, 8.3% for actionability).
Viewers resonated with the ADHD-related behaviours depicted in the videos in 36.7% of the comments and with ADHD itself in 5.3%. The self-attribution of behavioural patterns varied significantly depending on the usefulness of the videos, with personal experience videos showing the most comments on self-attribution of behavioural patterns (102/600, 17% of comments, P<.001). For the self-attribution of ADHD, we found no significant difference depending on the usefulness of the videos (P=.359). Conclusions: A high proportion of ADHD-related TikTok videos are misleading, and a high percentage of viewers seem to self-identify with the symptoms and behaviours presented. Self-identification is most common in videos on personal experiences but also occurs in misleading videos, potentially increasing misdiagnosis. This highlights the need to critically evaluate health information on social media and for healthcare professionals to address misconceptions arising from these platforms.
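The interrater-reliability step in the Methods (at least two independent raters per video, computed in SPSS 27) is commonly quantified with Cohen's kappa for two raters over nominal categories. A self-contained sketch with invented ratings; the abstract does not state which reliability coefficient was used, so kappa here is an assumption:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters labeling the same items (nominal scale)."""
    assert len(r1) == len(r2)
    n = len(r1)
    observed = sum(1 for a, b in zip(r1, r2) if a == b) / n
    categories = set(r1) | set(r2)
    # chance agreement: product of each rater's marginal category proportions
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Toy usefulness labels for six videos from two hypothetical raters:
rater1 = ["misleading", "useful", "misleading", "personal", "useful", "misleading"]
rater2 = ["misleading", "useful", "personal", "personal", "useful", "misleading"]
kappa = cohen_kappa(rater1, rater2)
```

Kappa corrects raw percent agreement for agreement expected by chance, which matters when one category (here, "misleading") dominates.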
Background: Virtual reality (VR) is increasingly applied in rehabilitation training. Flow experience, a critical factor for enhancing user engagement and training efficacy, exhibits age-related differences that are essential for designing elderly-friendly rehabilitation tasks. However, current VR rehabilitation systems often overlook age-related subjective experience disparities, leading to insufficient engagement among older adults. Objective: This study aims to explore differences in flow experience between younger and older adults during identical VR rehabilitation tasks and provide empirical evidence for designing personalized elderly rehabilitation programs. Methods: We recruited 21 older adults (mean age: 63.00 ± 6.64 years, 10 males) and 19 younger adults (mean age: 24.68 ± 1.16 years, 9 males). Participants performed the "Space Pop" task in Kinect Adventures (simulating limb coordination training) using VR. Flow experience was measured using the Chinese version of the Flow State Scale-2 (CFSS-2). Group differences were analyzed via Wilcoxon rank-sum tests. Results: Older adults exhibited significantly lower overall flow experience than younger adults (p < 0.001, d = 1.45), with significant differences in the dimensions of "challenge-skill balance" (p < 0.001), "clear goals" (p = 0.044), "sense of control" (p < 0.001), and "loss of self-consciousness" (p = 0.048). Other dimensions (e.g., concentration, time transformation) showed no statistical differences. Conclusions: Age significantly impacts flow experience in VR rehabilitation tasks. Tailoring designs through dynamic difficulty adjustment, intuitive goal cues, and reduced motor demands can enhance older adults’ control, immersion, and active participation, thereby improving health outcomes.
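The between-group effect size reported above (d = 1.45) is Cohen's d. A sketch of its pooled-SD form on invented flow-score data; the abstract does not state which d variant was computed, so the pooled-variance formula and the sample values are assumptions:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation (sample variances, ddof=1)."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Invented mean flow scores (1-5 scale) for illustration only:
younger = [4.1, 4.3, 4.0, 4.2]
older = [3.2, 3.5, 3.1, 3.4]
d = cohens_d(younger, older)
```

By convention d around 0.8 is already "large", so the study's d = 1.45 indicates a substantial age gap in flow experience.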
Background: Patient safety remains a global priority, with preventable adverse events—often caused by communication failures among healthcare professionals—posing serious risks. Interprofessional education (IPE) is a promising strategy to improve collaboration and communication, thereby enhancing care quality and patient outcomes. While IPE has been widely studied in student populations, limited evidence exists regarding its implementation and effectiveness for licensed rehabilitation professionals such as physical therapists (PTs), occupational therapists (OTs), and speech-language pathologists (SLPs). Objective: This scoping review aimed to comprehensively map the implementation, content, and effects of IPE targeting groups that include licensed PTs, OTs, and SLPs. Methods: This scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines. Searches were performed using PubMed, Web of Science, CINAHL, MEDLINE, and ERIC databases, targeting studies published up to March 2024. The study population consisted of licensed PTs, OTs, and SLPs. Regarding concept, we targeted studies in which IPE was provided to groups with at least one licensed PT, OT, or SLP. Regarding context, we included studies reporting the effects of IPE in clinical settings. Controlled vocabulary (e.g., MeSH) for terms such as IPE, PT, OT, and SLP was used to develop the search strategy. Eight reviewers extracted data and identified eligible studies. Results: Of the 3,389 records identified, eight were included. Mapping revealed that IPE implementation primarily involved lectures, discussions, and team-based practices. The content covered theories and concepts, treatment, and workplace problem-solving. 
Regarding effects, the results demonstrated that IPE improved role understanding, collaboration skills, knowledge, and confidence in the long term. However, simulation training did not improve interprofessional attitudes or network expansion. Conclusions: IPE targeting licensed PTs, OTs, and SLPs was structured in a way that combined multiple implementation methods to enable comprehensive learning, with the content adjusted to meet participant needs. Future studies should consider systematic reviews and meta-analyses to identify recommended combinations of IPE implementation and content. Clinical Trial: Not applicable.
Background: Short video platforms have become important channels for psoriasis-related health information dissemination, yet their content quality remains understudied. Objective: This study aimed to assess the quality and content of psoriasis-related videos on Bilibili and TikTok. Methods: The top 100 relevant videos on each of the two platforms were retrieved in February 2025. After screening, video features were recorded, and content and quality were assessed using the modified DISCERN (mDISCERN), Video Information and Quality Index (VIQI), Global Quality Score (GQS), and Journal of the American Medical Association (JAMA) benchmark criteria. Results: A total of 173 psoriasis-related videos from Bilibili (n=85) and TikTok (n=88) were included in this study. The median video length was 447 seconds (Bilibili) and 55 seconds (TikTok). On both platforms, treatment was the most popular video topic and doctor monologue was the most common presentation format; Bilibili demonstrated a broader range of topics and more diverse presentation formats. Video uploaders were mainly self-media (Bilibili) and doctors (TikTok), with TikTok authors exhibiting the highest certification rate (89.47%). Videos from professional and certified uploaders showed superior quality as assessed by the mDISCERN, GQS, VIQI, and JAMA tools. Spearman correlation analysis showed no significant correlation between video quality and viewer interaction. Conclusions: The number of psoriasis-related videos on both platforms is large, but the quality of both needs to be improved. It is recommended that cross-platform collaborative optimization be implemented to enhance the content output of professional and certified creators and to strengthen the scientific rigor and accessibility of the psoriasis information ecosystem.
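The Spearman analysis above is a rank correlation: quality scores and engagement counts are each converted to ranks (average ranks for ties), then Pearson-correlated. A self-contained sketch on five invented quality/like pairs, not the study's data:

```python
def _ranks(values):
    """1-based ranks, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

quality = [3, 1, 4, 2, 5]            # toy GQS-style scores
likes = [100, 50, 90, 120, 300]      # toy engagement counts
rho = spearman_rho(quality, likes)
```

Because it works on ranks, rho is robust to the heavy right skew typical of like counts, which is why it is preferred over Pearson's r for engagement data.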
Background: Wearable sensor technologies, such as inertial measurement units (IMUs), smartwatches, and multi-sensor systems, have emerged as valuable tools in clinical and real-world health monitoring. These devices allow continuous, non-invasive tracking of gait, mobility, and functional health across a variety of populations. However, significant challenges remain, including variability in sensor placement, data processing methodologies, and insufficient validation in real-world settings. Objective: This systematic review aims to evaluate recent literature on the clinical and research applications of wearable sensors. Specifically, it investigates how these technologies are used to assess mobility, predict disease risk, and support rehabilitation. It also identifies limitations and proposes future research directions. Methods: The review was conducted according to PRISMA guidelines. A comprehensive search of PubMed, Scopus, and Web of Science databases was performed for studies published in the past ten years. Inclusion criteria focused on studies using wearable sensors in clinical or real-world environments. A total of 30 eligible studies were identified for qualitative synthesis. Data extracted included study design, population characteristics, sensor type and placement, machine learning algorithms, and clinical outcomes. Results: Among the reviewed studies, observational designs were the most common (43.3%), followed by experimental studies (26.7%) and randomized controlled trials (10%). IMU-based sensors were used in 66.7% of studies, with wrist-worn devices being the most common placement (43.3%). Machine learning techniques were frequently applied, with random forest (20%) and deep learning (16.7%) models predominating. Clinical applications spanned Parkinson’s disease, stroke, multiple sclerosis, and frailty, with several studies reporting high predictive accuracy for fall risk and mobility decline (AUROC up to 0.919, p < 0.05). 
Conclusions: Wearable sensors demonstrate strong potential for enhancing mobility monitoring, disease risk assessment, and rehabilitation tracking in both clinical and real-world settings. However, challenges remain in standardizing sensor protocols and data analysis. Future research should focus on large-scale, longitudinal studies, harmonized machine learning pipelines, and integration with cloud-based health systems to improve scalability and clinical translation.
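Many of the reviewed pipelines first reduce raw accelerometer windows to summary features before a classifier (e.g. random forest) is applied. A toy sketch of that feature-extraction step; the window values, the three features chosen, and their names are invented for illustration and are not taken from any reviewed study:

```python
def window_features(accel):
    """Mean, variance, and zero-crossing count for one acceleration window."""
    n = len(accel)
    mean = sum(accel) / n
    var = sum((a - mean) ** 2 for a in accel) / n
    centered = [a - mean for a in accel]
    # sign changes of the centered signal crudely track oscillation (e.g. steps)
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    return mean, var, crossings

# One 6-sample window of vertical acceleration (arbitrary units):
features = window_features([0.1, 0.5, -0.3, 0.4, -0.2, 0.3])
```

Feature vectors like this, computed per sliding window, are what heterogeneous sensor placements and sampling rates make hard to harmonize across studies.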
Background: Implementing new technologies in healthcare settings is often a complex and challenging process. Virtual reality (VR) has demonstrated promising results in terms of feasibility, acceptability, and effectiveness across various health conditions. However, little research has been done on patients’ acceptance of VR technology in psychiatric care. Objective: This study aimed to explore patients’ experiences of being offered the use of a virtual calm room when feeling anxious or worried in a psychiatric inpatient setting. Methods: A mixed-methods design was employed, with a qualitative → quantitative (QUAL → QUAN) approach. Data were gathered through individual interviews (n = 10) and a three-item rating scale (n = 59). The qualitative findings were then validated within a larger population using the quantitative data. Results: The majority of participants reported being satisfied with the option of using VR. Their initial impressions of the virtual calm room were that it seemed like a creative and stimulating environment that could potentially have a positive impact on them. They expected the VR experience to enhance their feelings of relaxation and concentration. The participants highlighted human interaction as a particularly valuable aspect to consider when implementing VR, emphasizing its role in enhancing the overall experience and ensuring a sense of connection and support throughout the process. Participants reported no significant difficulties in using the VR technology. They expressed high willingness to use the virtual calm room again in future and viewed the method as modern and innovative. Conclusions: The qualitative findings highlighted patients’ openness to innovative methods for enhancing their engagement in the psychiatric inpatient setting. Patients expressed a desire for increased availability of the virtual calm room. 
However, maintaining a balance between innovative technologies and human support is crucial for the successful implementation of such methods. Quantitative results demonstrated high acceptance of the option of using the virtual calm room, with no significant difficulties reported.
Background: Mental health conditions account for significant distress, burden, and societal costs. Despite efforts to implement evidence-based practices, access to high quality mental health treatment in general practice remains limited, and clinical outcomes sub-optimal. Measurement-based care (MBC) is a transtheoretical and transdiagnostic strategy that has the potential, when implemented effectively, to improve the quality of care. Digital tools can also support clinicians by alleviating administrative tasks and providing in-the-moment performance data and clinical decision support. In this study, we examine the clinical outcomes of a technology-enabled psychotherapy practice, where clinicians are supported by a suite of innovations including an MBC platform, clinical decision support tools, and tools designed to alleviate administrative burden. Objective: The current study examines client retention and depression and anxiety outcomes within a technology-enabled psychotherapy practice. Methods: This retrospective cohort study examines 2,984 adults who initiated mental health treatment with Two Chairs, a hybrid, technology-enabled behavioral health provider, between January 1 and June 30, 2024. Rates of reliable change, recovery, remission, and magnitude and trajectory of symptom change in depression and anxiety symptoms were assessed using the PHQ-9 and GAD-7. Results: The population demonstrated high rates of retention in care (89.9%), as well as high rates of MBC survey completion (96.3%). From baseline to the 12th session, patients showed significant symptom improvements in depression and anxiety, achieving high rates of reliable improvement (65.8%) and recovery (53.2%). Aggregate clinical outcomes continued to improve up to the point of termination. Pre- to post-treatment effect sizes were large (all Cohen’s d’s > 0.9). 
Conclusions: This study demonstrates how technology-enabled measurement-based care and clinical decision support systems may drive high quality patient outcomes in mental health. Implications for healthcare costs and value-based payment models are discussed.
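"Reliable improvement" rates like the 65.8% above conventionally rest on the Jacobson-Truax reliable change index (RCI): a pre-post change counts as reliable only if it exceeds what measurement error alone could produce. A sketch for a PHQ-9-style score; the baseline SD and test-retest reliability below are illustrative assumptions, not the parameters the study used:

```python
import math

def reliable_change(pre, post, sd_baseline=5.9, reliability=0.86):
    """Jacobson-Truax RCI: True if improvement exceeds ~95% measurement noise.

    sd_baseline and reliability are placeholder values for illustration.
    """
    se_measure = sd_baseline * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_measure
    rci = (pre - post) / se_diff  # positive = improvement on PHQ-9
    return rci > 1.96

# A drop from 18 to 8 clears the noise threshold; a 1-point drop does not:
improved = reliable_change(pre=18, post=8)
```

Recovery is then typically defined as reliable change plus crossing a clinical cut score, which is why the recovery rate (53.2%) is lower than the reliable-improvement rate.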
Background: Functional cognitive disorder (FCD) is a common and disabling condition for which accessible and evidence-based treatments are urgently needed. Objective: We describe the planning and development stages of a new self-help mobile app intervention for FCD. Methods: The UK Medical Research Council's Complex Interventions framework was followed. Theory- and user-centered approaches were adopted to develop Mementum – a 6-week programme rooted in cognitive behavioural techniques and complementary principles (mindfulness, education, cognitive rehabilitation). Results: A scoping review is presented. Thematic analysis of patient interviews identified 6 themes and 13 subthemes, including user needs, management strategies, opportunity and motivation. The treatment model places attentional dysregulation as the core symptom generator, around which treatment content was developed. Provisional insights from a focus group suggested the intervention is acceptable and credible. Facilitators and barriers were uncovered and addressed. The programme includes 7 modules, videos, patient stories, FAQs, and signposting to existing resources. Interactive features include a weekly memory diary and symptom checklist, tailoring of contents to symptoms, and homework tasks. Conclusions: We used a systematic approach to develop a novel digital health intervention for FCD. Collaborations with key stakeholders enabled intervention development and optimisation. Feasibility, acceptability, and efficacy testing of this intervention are underway. Clinical Trial: N/A
Background: Successful research and MedTech collaborations depend on six key components: talent and workforce development, innovative solutions, robust research infrastructure, regulatory compliance, patient-centered care, and rigorous evaluation.
Institutional leaders frequently navigate multiple professional identities, simultaneously serving as educators, researchers, clinicians, and innovators. In doing so, they create bridges between academic rigor and practical application that accelerate the translation of research into meaningful solutions. Institutions and organizations may also need to broaden their identities.
The contemporary landscape presents significant challenges as institutions balance the pursuit of academic excellence with the need for rapid responsiveness to technological and commercial innovation. Traditional research processes, while ensuring quality, often impede the pace of advancement necessary in today's rapidly evolving environment. This tension necessitates structural reforms across multiple dimensions of institutional operation.
To cultivate a thriving research and innovation ecosystem, several essential components must be established. First, institutions require agile research infrastructure with cutting-edge laboratories and collaboration spaces, specialized equipment, and certified research professionals specifically trained in device development and regulatory compliance. Robust clinical management platforms can expedite trials and streamline data extraction for publication and dissemination. Objective: The Orange County (OC) Impact Conference, held in November 2024, convened 180 key stakeholders from the life sciences, technology, medical device, and healthcare sectors. CHOC Research, in collaboration with University Lab Partners (ULP) and the University of California, Irvine, provided this platform for leaders, decision-makers, and experts to discuss the intersection of innovation in research, healthcare, biotechnology, and data science. Methods: We convened a multidisciplinary symposium (180 participants) to examine advancements in life sciences and medical device research development. The structured forum incorporated moderated panel discussions and a keynote speaker. Participants represented diverse stakeholder categories including research scientists, clinicians, investors and financiers, and executive research and healthcare leadership. The event design facilitated both structured knowledge exchange and strategic networking opportunities aimed at identifying implementation pathways to enhance clinical impact. Results: The 2024 OC Impact Conference Proceedings outline a strategy for healthcare innovation, demonstrating how targeted collaboration between patients, families, researchers, clinicians, engineers, data scientists, and industry is reshaping the healthcare innovation ecosystem. 
This integrated approach ensures every stakeholder's voice contributes to meaningful advancement, guiding resource allocation and partnership development across the life science and medical device sectors. Our findings demonstrate that success requires moving beyond traditional approaches to patient-driven research priorities, augmented design principles for medical device development, and direct engagement between innovators, research participants, industry and healthcare centers throughout the research development cycle. Conclusions: The insights gained through participation in the OC Impact Conference contribute to the ongoing discourse in these fields, emphasizing collaborative efforts to enhance pediatric and adult healthcare outcomes. Clinical Trial: N/A
Background: Acromegaly is an endocrine disease that often leads to delayed diagnosis due to its insidious onset. It is a rare disorder caused by pituitary adenomas, which results in excessive secretion of growth hormone. This triggers abnormal growth of soft tissues, bones, and cartilage, altering the appearance and limbs of patients. It not only affects the appearance and quality of life but also has a negative impact on mental health. In addition, acromegaly can cause health problems such as compression symptoms related to pituitary tumors, diabetes, hypertension, cardiovascular and cerebrovascular risks, respiratory diseases, and colorectal cancer, threatening health and survival. Objective: The aim of this study is to combine the morphological findings of the disease's facial features with deep learning methods to perform rapid recognition of acromegaly through natural images of the face. By leveraging lightweight mobile devices such as smartphones, the approach is designed to both assist patients in early self-surveillance and enhance clinicians' capabilities in improving early detection rates. Methods: A hybrid training approach using natural photographs of the face and computed tomography (CT) 3D facial reconstruction data was used, including 53 clinical acromegaly patients (47 natural pictures, 6 CT 3D reconstruction models; 24 females and 29 males) and 55 healthy controls (45 natural pictures, 10 CT 3D reconstruction models; 25 females and 30 males). The CT data were added to enhance the sample size of the dataset and the depth of facial feature information; further, a framework called Facial Attention ResNet (FARNet) was designed and implemented, combining the ResNet50 architecture with an attention masking mechanism that focuses on key facial regions affected by the disease to improve the accuracy and robustness of classification training. 
Results: Comparative classification experiments were conducted across different deep convolutional neural network (DCNN) architectures. The proposed FARNet reached an accuracy of 95.82%, significantly better than traditional DCNN models such as ResNet34, ResNet50, VGG16, DenseNet121, and InceptionV3. Compared with human visual diagnosis on the test set, the CT-hybrid-trained FARNet achieved a recognition accuracy of 94.44% on natural facial pictures, higher than the best accuracy achieved by endocrinologists (88.89%). Compared with the existing literature on facial prediction of acromegaly, this method achieved the best performance for East Asian face shapes using fewer samples. Conclusions: Existing acromegaly facial recognition algorithms rely on natural facial photos and face issues such as limited facial feature detail, scarce data, and patients' psychological barriers. By introducing the attention mechanism and CT hybrid training to expand the data samples and facial detail, the performance of the acromegaly facial recognition model is significantly enhanced. This study not only provides an effective auxiliary tool for the rapid diagnosis of acromegaly but also offers an important early-warning indicator for screening pituitary growth hormone adenomas. Additionally, using CT imaging data accumulated by hospitals over the years to address the challenge of collecting high-quality facial imaging data from patients provides new insights for other medical applications requiring precise facial recognition. Future research will explore the feasibility of training facial models entirely on CT data to further mitigate the limited depth of detail, limited data volume, and patient privacy concerns associated with traditional natural facial photographs.
Based on this, mobile-device-based facial photo recognition applications will be developed, enabling the technology's wider use in clinical and daily scenarios.
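The attention-masking idea described in the abstract above — weighting feature-map regions toward disease-salient facial areas before classification — can be illustrated with a minimal sketch. This is not FARNet itself (which operates on ResNet50 feature maps with learned masks); the tiny 2×2 "feature map", the hand-picked mask logits, and the residual form out = features · sigmoid(mask) + features are all illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_mask(features, mask_logits):
    """Apply a spatial attention mask with a residual connection:
    out[i][j] = features[i][j] * sigmoid(mask_logits[i][j]) + features[i][j].
    Regions with large positive logits are emphasized; large negative
    logits leave only the residual (identity) path."""
    out = []
    for frow, mrow in zip(features, mask_logits):
        out.append([f * sigmoid(m) + f for f, m in zip(frow, mrow)])
    return out

# Toy 2x2 "feature map"; positive logits attend to a region, negative suppress it.
features = [[1.0, 2.0], [3.0, 4.0]]
mask_logits = [[10.0, -10.0], [0.0, 10.0]]
out = attention_mask(features, mask_logits)
```

The residual path keeps suppressed regions from vanishing entirely, a common design choice in attention modules so that masking can only re-weight, not destroy, backbone features.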
Background: Telehealth is a promising approach to managing chronic conditions like type 2 diabetes mellitus (T2DM), providing more access and convenience. Asian Americans, who are 40% more likely to be diagnosed with diabetes than non-Hispanic Whites, remain underrepresented in telehealth utilization.
While general barriers such as digital literacy and provider engagement have been documented, few studies focus on the cultural and technological challenges specific to Asian American communities. Language barriers, limited access to technology, and a preference for in-person care may also impact adoption.
This study addresses these gaps by examining the barriers to telehealth use among Asian Americans with T2DM, using the Unified Theory of Acceptance and Use of Technology (UTAUT) to explore how social, cultural, and technological factors influence adoption. Objective: This study examines cultural and technological barriers affecting telehealth adoption among Asian Americans with T2DM. Methods: A qualitative case study approach was employed, utilizing semi-structured interviews with Asian American individuals in Missouri. Thematic analysis was used to identify key barriers. Results: Four major barriers emerged: (1) Language and cultural barriers—limited availability of translated materials and interpreters; (2) Digital literacy and access—older adults and individuals with low technological exposure struggled with telehealth platforms; (3) Limited provider recommendations—healthcare providers did not actively endorse telehealth, reducing patient awareness; and (4) Technology and infrastructure disparities—low-income participants faced challenges with access to broadband and telehealth-compatible devices. Conclusions: Addressing cultural and technological barriers is crucial to increasing telehealth adoption among Asian Americans with T2DM. Culturally tailored interventions, provider engagement, and digital literacy programs should be prioritized. Policy efforts must focus on expanding broadband access and providing multilingual telehealth resources.
Background: Excessive infant crying affects approximately 20% of families, often resulting in parental distress, anxiety, and strained relationships. Despite its prevalence, many parents, particularly mothers, report feeling misunderstood and unsupported. Objective: This study aimed to examine mothers' perceptions of understanding and support from partners, their social environment, and healthcare professionals. Methods: A mixed-methods design was employed, integrating both quantitative and qualitative data. A total of 432 mothers participated in the study. Quantitative assessments compared perceived levels of understanding and support from three groups: partners, the social environment, and healthcare professionals. Qualitative data explored specific forms of support mothers found meaningful. Results: Quantitative findings indicated that healthcare professionals were rated lowest in both understanding and support, with 50.6% of mothers reporting little or no understanding, and 47.1% reporting little or no support. In contrast, partners were perceived as the most supportive and understanding group. Qualitative analysis highlighted essential support forms, including emotional reassurance, avoiding maternal blame, practical assistance, and open communication. Conclusions: The study reveals a gap in perceived support from healthcare professionals and emphasizes the vital role of partners and the social environment. Based on qualitative insights, a Maternal Support Framework is proposed to guide holistic, family-centered interventions, with the goal of enhancing parental and infant well-being.
Background: Isolated premature thelarche (IPT) is characterized by early breast development in girls under 8 years old without other signs of puberty. Zhibai Dihuang Ointment Prescription, a traditional Chinese medicine (TCM) formulation, has been proposed as an alternative treatment. Objective: This study aims to explore parents' perceptions of using this intervention for treating IPT in their children. Methods: Semi-structured individual interviews were conducted online with 14 parents of children diagnosed with IPT who had been treated with Zhibai Dihuang Ointment Prescription for over six months. Participants were recruited through purposive sampling. Interviews were audio-recorded, transcribed verbatim, and analyzed using template analysis. NVivo 12 software was used to facilitate data analysis. Results: Three main themes emerged: (1) facilitators of Zhibai Dihuang Ointment Prescription for IPT, (2) barriers to Zhibai Dihuang Ointment Prescription for IPT, and (3) parental demands on Zhibai Dihuang Ointment Prescription for IPT. Facilitators included (a) positive impacts on children and parents and (b) good acceptance among children and parents. Barriers included (a) limitations of the use of Zhibai Dihuang Ointment Prescription and (b) limitations in medical resources. Parental demands focused on (a) improvements in medication experience and (b) improvements in hospitals' medical services. Conclusions: Zhibai Dihuang Ointment Prescription positively impacts children's development and family well-being. However, challenges such as bitter taste, long treatment periods, and occasional side effects affect adherence. Improved healthcare access and patient-centered approaches are needed. Future quantitative and qualitative research is needed to evaluate its effects and understand patient experiences.
Background: Mobile health (mHealth) technologies show promise in addressing suboptimal anticoagulation adherence among venous thromboembolism (VTE) patients. Objective: To evaluate the impact of a mobile VTE application (mVTEA) on thromboprophylaxis adherence in patients with VTE or at moderate-to-high risk of VTE. Methods: This single-center pilot study enrolled 88 patients at the Chinese PLA General Hospital (August–December 2023). Participants used mVTEA for automated medication reminders and self-management. Adherence was assessed using the Morisky Medication Adherence Scale-8 (MMAS-8) and Beliefs about Medicines Questionnaire-Specific (BMQ-Specific). Real-time adherence data were analyzed at 1 month (Trial registration: ChiCTR2200063206). Results: Among 45 completers (age 60.8±15.2 years; 35.6% female), baseline adherence was suboptimal (good: 28.9%; moderate/poor: 71.1%). Primary non-adherence drivers included forgetfulness (Q2: 0.69±0.47) and premature discontinuation (Q6: 0.78±0.42). BMQ-Specific revealed higher necessity than concern scores (17.58±3.12 vs. 14.58±3.34, p<0.001). At 1-month follow-up, 100% achieved perfect adherence, with 80% completing mVTEA check-ins. Patients utilizing check-in features demonstrated superior necessity-concern differentials (NCD>0: 80.6% vs. 0%, p<0.001). No adverse events occurred. Conclusions: mVTEA significantly improved short-term anticoagulation adherence through behavioral nudges and real-time monitoring. Individualized patient education may further optimize outcomes. Clinical Trial: ChiCTR2200063206
Background: Febrile seizures, although typically benign, can cause significant emotional distress for parents. Their diverse etiological risk factors underscore the need for further research. Ecological Momentary Assessment (EMA) offers a cost-effective and timely method for real-time data collection. The FeverApp, an EMA-based registry for fever management, enables parents to document febrile seizures as they occur. Objective: This study systematically evaluates febrile seizure records from the FeverApp registry to assess their characteristics and explore the clinical implications of the findings. By providing real-world data on seizure management, this research demonstrates the potential of app-based EMA in pediatric care. Additionally, it offers insights for targeted interventions and improved febrile seizure management. Methods: Parents' descriptions of 226 seizures in 161 children were qualitatively analysed. Group differences in quantitative data were assessed through matched-pair sampling, comparing 114 children. Statistical methods were tailored to the nature of the respective variables, which included prevalence, age, gender, health and febrile history, fever management, temperature, well-being, and parental confidence. Results: Qualitative analyses provided detailed descriptions of seizure symptoms, seizure duration, and seizure management practices. Additionally, the data revealed a high rate of emergency consultations related to febrile seizures. However, there was underreporting of febrile seizures within the FeverApp, with a reported incidence of only 0.4% among febrile children. In a matched sample controlled for gender and age, significant differences were observed between febrile children with and without febrile seizures in several parameters, including maximum recorded temperature (P < .001), prevalence of chronic diseases (P = .004), parental confidence (P = .014), and frequency of emergency consultations (P < .001).
Conclusions: This study offers valuable insights into the characteristics, temporal dynamics, management strategies, and parental responses to febrile seizures in children. Despite the limitation of potential underreporting in an EMA-based registry, the findings highlight the critical importance of parental education and support in managing febrile seizures. Enhancing these areas has the potential to reduce unnecessary medical consultations and improve the overall care of affected children. Furthermore, integrating improvements in the FeverApp's education and documentation system regarding febrile seizures could facilitate better management and support future research efforts. Clinical Trial: DRKS00016591
Background: The impact of Pass/Fail or Tiered Grade assessment for exams in undergraduate medical education causes much debate, while there is little data to inform decision making. The increasing number of medical schools that have transitioned to Pass/Fail assessment has raised concerns about medical students' academic performance. In 2018, the undergraduate medical curriculum reform at the Faculty of Medicine, Aalborg University changed some exams from Pass/Fail to Tiered Grade and vice versa for other exams. These changes provide an opportunity to evaluate the different assessment forms. Objective: To evaluate medical students' academic performance at the final licensing exam in relation to exam grading principle. Methods: This single-centre cohort study at the Aalborg University Medical School, North Denmark Region, assessed the change from 2-digit Tiered Grade to Pass/Fail evaluation, and vice versa, of undergraduate medical students' exams after the 4th- and 5th-year clinical training modules from Autumn 2015 through Spring 2023. The primary outcomes were the number of students failing clinical training exams and the final licensing exam grades. Results: Of the total of 7,634 exams, 7,164 4th- and 5th-year clinical training exams were included in the comparisons, of which 3,047 (42.5%) were Pass/Fail exams and 4,117 (57.5%) were Tiered Grade exams. The frequency of students failing exams was 3.3% (n=101/3,047) for Pass/Fail and 1.97% (81/4,117) for Tiered Grade exams (p<0.001). This difference levelled out when the near-fail tiered grade was counted as a fail. Tiered Grade exams did not differ between semesters (p=0.99) nor show a time trend at the 4th year (p=0.66). The final licensing exam grades were unaltered (p=0.47). Conclusions: Contrary to our expectation, Pass/Fail exams exhibited a higher fail rate compared to Tiered Grade exams without lowering final academic performance.
These results suggest that a shift from tiered grading to Pass/Fail assessment redirects the focus from rewarding high performance to ensuring standards are maintained among underperforming students.
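The reported fail-rate difference (101/3,047 vs. 81/4,117, p<0.001) can be checked with a standard two-proportion z-test. The abstract does not state which test the authors used, so the pooled-variance z-test below is an assumption, shown only to verify that the figures are consistent with the reported significance:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled variance.
    Returns the z statistic and its two-sided p-value (normal tail via erfc)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided tail
    return z, p_value

# Fail counts/denominators from the abstract: Pass/Fail vs. Tiered Grade exams.
z, p = two_proportion_z(101, 3047, 81, 4117)
# z comes out near 3.6, with p well below .001, consistent with the report.
```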
Background: Parents of premature infants often face challenges in transitioning from hospital to home, requiring reliable and accessible information to support their caregiving. In the Netherlands, a post-discharge, responsive parenting intervention (TOP program) is standard care for very preterm infants and their parents. However, parents still indicate unmet information needs. Mobile health (mHealth) interventions have the potential to supplement post-discharge education and empower parents by providing tailored, evidence-based information. Objective: The primary objectives of this study were: (1) to develop an information app (e-TOP) for parents of preterm infants and (2) to evaluate its usability. Methods: An exploratory two-phase mixed-method design was employed. In Phase 1, the app was developed through iterative focus group discussions with parents of premature infants and TOP interventionists (pediatric physiotherapists). The content of the app was developed through co-creation by professionals and was subsequently refined and adapted to improve its practical use. In Phase 2, parents of a preterm infant who participated in the TOP program received access to the e-TOP app for six months. During and after the six-month period, usability was assessed with a range of quantitative and qualitative measurements, including think-aloud sessions, online analytics, questionnaires (including the System Usability Scale, SUS), and semi-structured interviews. Results: The collaborative approach with end-users and experts during development led to a fully functional e-TOP app. Expert review and content validation ensured that information was accurate, accessible, and relevant for parents. For the usability testing, 58 families (116 participants) were recruited, and 69 participants actively used the app. The cumulative median e-TOP usage per participant over 26 weeks was 39 minutes (IQR 8.8–53.0). The median number of actions was 64.0 (IQR 33.5–88.0).
The e-TOP app received a median SUS score of 75 (IQR 67.5–80.0), indicating good usability. Participants rated their overall satisfaction with the app with a median of 7.0 (IQR 7.0–8.0) out of 10. While the app was perceived as useful for finding information on prematurity-specific topics, engagement declined over time. The interviews highlighted a need for improved navigation (e.g., a search function), expanded content (e.g., practical exercises, sensory processing), and more interactive features (e.g., chat support, parental community forums). Conclusions: The e-TOP app is a valuable digital resource that can supplement post-discharge care by providing tailored, evidence-based, and accessible information for parents of premature infants. While usability scores were high, engagement trends and participant feedback suggested the need for enhanced retention strategies, such as push notifications, a timeline-based navigation structure, interactive tools, and additional (practical) content. Clinical Trial: ISRCTN65709138
Background: Nursing is a stressful and threatening occupation, and nurses face different stressors and risks during their professional practice, such as the coronavirus disease 2019 (COVID-19) pandemic. Therefore, they need strong self-care ability to improve and protect their health. Spiritual self-care (SSC) is one of the main aspects of self-care. However, there is limited scientific evidence about the process of SSC formation among nurses. Objective: The present study will be conducted to develop the theory of Nurses' SSC in Biological Events and strategies to improve nurses' SSC in biological events. Methods: This multi-methods study will be conducted using the grounded theory method, a systematic scoping review, and the Delphi technique. Participants will be hospital nurses with experience in patient care provision during the COVID-19 pandemic and will be selected through purposeful and theoretical sampling. Data will be collected through semi-structured interviews and will be analyzed using Corbin and Strauss's method. Then, a systematic scoping review will be conducted to determine the experiences of nurses in other countries. The findings of the grounded theory study and the systematic scoping review will be provided to a panel of experts, and their comments will be gathered to develop the best strategies to improve nurses' SSC in the form of a policy brief. Results: The findings of this study will be the theory of Nurses' SSC in Biological Events and a policy brief containing strategies to improve nurses' SSC in biological events. Conclusions: The findings of this study can be used in nursing practice, education, and research to improve nurses' SSC in biological events.
Background: The proliferation of social media has reshaped healthcare decision-making, enabling patients to access diverse information sources beyond traditional referrals. In maxillofacial surgery, where trust and expertise are critical, the interplay between digital platforms and conventional networks remains underexplored, particularly in non-Western settings like Iran. Understanding how patients navigate these channels offers insights into evolving healthcare behaviors and informs strategies to enhance patient-centered care. Objective: This study aimed to evaluate the influence of social media platforms (Google and Instagram) compared to personal recommendations on maxillofacial surgeon selection among Iranian patients, assessing decision-making factors, trust perceptions, and concerns about information accuracy. Methods: A cross-sectional survey was conducted with 384 patients at maxillofacial surgery clinics in Isfahan, Iran, in autumn 2023. Data on demographics, pathways to surgeon selection, social media use (Google and Instagram), decision-making factors, trust perceptions and concerns about information accuracy were collected via structured questionnaires. Descriptive statistics and one-sample t-tests assessed the impact and reliability of digital platforms. Results: Personal recommendations dominated surgeon selection (62.2%), far surpassing Google (19.5%) and Instagram (2.9%). While 41.7% and 31.0% of patients used Google and Instagram, respectively, their impact on decision-making was significantly below average (p < 0.001). Patient-generated content (e.g., reviews: 37.5% for Google, 40.9% for Instagram) and professional credentials (30.2% for Google) were pivotal in decision-making and trust, yet moderate concerns about accuracy underscored skepticism toward digital sources. The majority of participants were female (60.7%), aged 21–30 (30.5%), employed (41.4%), and without prior surgery (53.4%). 
Conclusions: Social media plays a supplementary rather than primary role in maxillofacial surgeon selection in Iran, with traditional networks retaining primacy. The reliance on credible patient feedback and credentials highlights the need for verified online content to enhance trust. These findings, contrasting with higher digital reliance in aesthetic surgery contexts, suggest cultural and procedural influences on platform use, advocating for strategies to bridge digital credibility gaps in healthcare decision-making.
Background: Osteoarthritis (OA) is a chronic degenerative joint condition and is the 15th major cause of disability worldwide. Family physicians play a significant role in managing these patients; their up-to-date knowledge is essential for evidence-based management. Objective: This study assesses family physicians' knowledge, attitude, and practice toward OA management. Furthermore, it explores knowledge gaps and discrepancies in practice and compares them with similar studies in the Arabian Peninsula. Methods: We conducted a cross-sectional online survey at Primary Healthcare Corporation (PHCC), Qatar. We sent a targeted online survey link via PHCC intranet email to 724 family physicians working across twenty-eight health centers in Qatar. Results: A total of 100 family physicians responded to the survey. Of these, 75 (75%) were male, 59 (59%) were consultants, and the average age of respondents was 48 years (SD 7.1). Overall knowledge of family physicians was 76.7%, with a positive attitude and good practice. A substantial majority, 78 (78%), acknowledged that OA adversely affects patients' mental well-being, leading to anxiety and concern. Seventy-five (75%) of the participants believed they had adequate training to manage OA, and 88 (88%) frequently recommended non-pharmacological management approaches, particularly weight loss. Oral non-steroidal anti-inflammatory drugs (NSAIDs) were offered most of the time by general practitioners (75%) compared with specialists (16.7%) (P=.019). Notably, female physicians exhibited significantly higher utilization of pharmacological treatments, including topical capsicum (P=.013), topical NSAIDs (P=.048), and oral NSAIDs (P=.049), and of non-pharmacological treatment such as thermotherapy (P=.011). Conclusions: Overall, this study found that PHCC family physicians' knowledge, attitude, and practice in managing OA were good.
However, targeted educational interventions are required, along with professional development programs, to promote evidence-based practices and address gender disparities in prescribing. Future research is necessary to delve deeper into the factors that contribute to the existing gaps in prescribing behavior between male and female physicians. Enhancing OA management further can lead to better patient outcomes and improved quality of care.
Background: Effective management of cardiometabolic conditions requires sustained positive nutrition habits, often hindered by complex and individualized barriers. Direct human management is simply not scalable, while deterministic automated approaches to nutrition coaching may lack the personalization needed to address these diverse challenges. Objective: We report the development and validation of a novel large language model (LLM)-powered agentic workflow designed to provide personalized nutrition coaching by directly identifying and mitigating patient-specific barriers. Methods: We used behavioral science principles to create a comprehensive workflow that maps nutrition-related barriers to corresponding evidence-based strategies. First, a specialized LLM agent intentionally probes for and identifies root causes of a patient's dietary struggles. Subsequently, a separate LLM agent delivers tailored tactics designed to overcome those specific barriers. We conducted a user study with individuals with cardiometabolic conditions (n=16) to inform our workflow design and then validated our approach through an additional user study (n=6). We also conducted a large-scale simulation study, grounded in real patient vignettes and expert-validated metrics, in which human experts evaluated the system's performance across multiple scenarios and domains. Results: In our user study, the system accurately identified barriers and provided personalized guidance. Five out of 6 participants agreed that the LLM agent helped them recognize obstacles preventing them from being healthier, and all participants strongly agreed that the advice felt personalized to their situation. In our simulation study, experts agreed that the LLM agent accurately identified primary barriers in more than 90% of cases. Additionally, experts determined that the workflow delivered personalized and actionable tactics empathetically, with average ratings of 4.17-4.79 on a 5-point Likert scale.
Conclusions: Our findings demonstrate the potential of this LLM-powered agentic workflow to improve nutrition coaching by providing personalized, scalable, and behaviorally-informed interventions. Clinical Trial: NA
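The two-agent structure described in the abstract — one agent probing for a root-cause barrier, a second mapping it to an evidence-based tactic — can be outlined as a plain-Python skeleton. Everything here is an illustrative assumption: the barrier taxonomy, the tactic texts, and the keyword triage in `identify_barrier` (which stands in for the first LLM agent's conversational probing) are placeholders, not the authors' actual mapping:

```python
# Hypothetical barrier -> tactic table; the real system derives both via LLM agents
# grounded in behavioral science, not a fixed lookup.
TACTICS = {
    "time_scarcity": "Suggest 10-minute batch-prep recipes for the week.",
    "cost": "Recommend seasonal produce and frozen-vegetable swaps.",
    "low_motivation": "Set one small, measurable goal and schedule a check-in.",
}

def identify_barrier(patient_text: str) -> str:
    """Stand-in for the first agent: crude keyword triage of the patient's message."""
    text = patient_text.lower()
    if "time" in text or "busy" in text:
        return "time_scarcity"
    if "expensive" in text or "afford" in text:
        return "cost"
    return "low_motivation"

def coach(patient_text: str) -> dict:
    """Second stage: map the identified barrier to a tailored tactic."""
    barrier = identify_barrier(patient_text)
    return {"barrier": barrier, "tactic": TACTICS[barrier]}

plan = coach("I'm too busy to cook most nights.")
```

Separating identification from tactic delivery, as the workflow does, lets each stage be evaluated independently — which is how the simulation study could score barrier-identification accuracy apart from tactic quality.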
Background: The global incidence of spinal cord injury (SCI) is between 10 and 80 new cases per million people each year, with more traumatic injuries occurring than non-traumatic. This equates to between 250,000 and 500,000 injuries worldwide, per year. In the UK it is estimated that 4400 people per year sustain a SCI. People with tetraplegia report upper limb function as their highest priority for improvement after SCI. Using immersive virtual reality (VR) headsets, physical rehabilitation exercises can be completed in engaging digital environments. Immersive VR therefore has the potential to increase the amount of therapy undertaken, leading to improvements in arm and hand function. There is little evidence supporting immersive VR-based exercise in SCI, especially during acute rehabilitation. This study recruited people with tetraplegia and therapists to establish the design direction for a VR-based upper limb exercise platform. In spinal cord injury research, co-design of new interventions is not a widely adopted approach, yet people with SCI want to contribute their expert knowledge of their experiences of SCI. Objective: To explore the lived experiences of people with tetraplegia and specialist SCI therapists related to acute upper limb rehabilitation and to co-design immersive virtual reality-based upper limb activities. Methods: Seven focus groups were conducted online using Microsoft Teams: four with people with tetraplegia (n = 15, age range 36-65 years) and three with occupational therapists and physiotherapists specialising in spinal cord injury rehabilitation (n = 11). Participants were asked to discuss their experiences and expertise regarding acute SCI upper limb rehabilitation and their opinions on the use of VR for upper limb rehabilitation. The transcripts were analysed using content analysis, enabling design characteristics of a VR-based upper limb exercise intervention to be proposed.
Results: The study identified five major themes describing the clinical features, treatment, and recovery of spinal cord injured people during the acute stage of SCI, and suggestions for the design of a VR intervention in treating the upper limbs following SCI. The results highlighted what motivates people with SCI to participate in therapy and how these motivators could be encouraged and maintained using VR. These findings can be used to design accessible VR applications for use by people with SCI and their therapists. They can also contribute to the better understanding of the advantages of using VR as an adjunct to upper limb rehabilitation, as well as features of VR-based interventions to avoid. Conclusions: The themes identified in this study allow the elicitation of software requirements for a bespoke immersive VR platform for upper limb rehabilitation following spinal cord injury. Additionally, participants used their expertise to suggest factors that would enable the development of a usable and effective intervention as well as identifying potential pitfalls and software features to avoid during the intervention development.
Background: Sub-Saharan Africa (SSA) has the highest global burden of under-five child mortality, with congenital heart disease (CHD) being a major contributor. Despite advancements in high-income countries, CHD-related mortality remains unchanged in SSA due to limited diagnostic capacity and centralized healthcare. While pulse oximetry supports early detection, confirmation of diagnosis often relies on echocardiography, a procedure hindered by a shortage of specialized personnel. Artificial intelligence (AI) offers a promising solution to address this diagnostic gap. Objective: This study aims to develop an AI-assisted echocardiography system that will enable non-expert operators, such as nurses, midwives, and medical doctors, to perform basic cardiac ultrasound sweeps on neonates with suspected CHD and extract accurate cardiac images that can be sent to a remote paediatric cardiologist for interpretation. Methods: The study will follow a two-phase approach to develop a deep learning model for real-time cardiac view detection in neonatal echocardiography, using data from St Padre Pio Hospital in Cameroon and the Red Cross War Memorial Children's Hospital in South Africa, ensuring demographic diversity. Phase one will pretrain the model on retrospective data from ~500 neonates (0–28 days old). Phase two will fine-tune it using prospective data from 1,000 neonates, which includes background elements absent in the retrospective set, enabling adaptation to local clinical environments. The datasets will include short and continuous echocardiographic video clips covering ten standard cardiac views, as defined by the American Society of Echocardiography. The model architecture will leverage convolutional neural networks (CNNs) and convolutional Long Short-Term Memory (convLSTM) layers, inspired by the interleaved visual memory framework, which combines fast and slow feature extractors through a shared temporal memory mechanism.
Videos will be preprocessed, annotated with predefined cardiac view codes using Labelbox, and used to train the model in TensorFlow and PyTorch. Reinforcement learning will guide the dynamic use of feature extractors during training. Iterative refinement, supported by clinical input, will ensure the model effectively distinguishes correct from incorrect views in real time, enhancing usability in resource-limited settings. Results: Retrospective data collection for the project began in September 2024, and since then, data from 308 babies have been collected and labelled. In parallel, the initial model framework has been developed and training initiated using a small portion of the labelled data. The project is currently in the intensive execution phase, with all objectives running in parallel and final results expected within 14 months. Conclusions: The AI-assisted echocardiography model developed in this project holds promise for improving early CHD diagnosis and care in SSA and other low-resource settings.
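[Editor's note] A real-time view detector as described above ultimately emits a per-frame view label; a common companion step (an assumption here, not specified in the protocol) is to stabilize frame-level predictions with a sliding-window majority vote before showing feedback to the operator, so momentary misclassifications do not flicker the on-screen guidance. A minimal stdlib sketch:

```python
from collections import Counter, deque

def smooth_view_labels(frame_labels, window=5):
    """Stabilize per-frame view predictions with a sliding-window
    majority vote. `frame_labels` is an iterable of view codes
    (hypothetical codes below, e.g. "A4C" for apical four-chamber);
    returns one smoothed label per input frame."""
    buf = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        buf.append(label)
        # most_common(1) yields [(label, count)] for the current window
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

# A lone outlier ("A2C") inside a run of apical four-chamber frames
# is voted away once neighboring frames are taken into account.
print(smooth_view_labels(["A4C", "A4C", "A2C", "A4C", "A4C"]))
# → ['A4C', 'A4C', 'A4C', 'A4C', 'A4C']
```

The window size trades responsiveness (small windows) against stability (large windows); the value 5 is illustrative only.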
Background: Nigeria faces severe economic losses ($14 billion annually) and high youth unemployment (33.3%) due to persistent skills gaps, exacerbated by sectoral disparities (e.g., 68% ICT shortages vs. 63% agricultural deficits) and systemic inequities in education and vocational access. Despite growing HRM interventions, empirical evidence on their efficacy remains limited, necessitating a comprehensive review to guide policy. Objective: This study analyzes Nigeria’s sector-specific skills gaps, evaluates the effectiveness of HRM interventions (apprenticeships, digital upskilling, PPPs), and proposes actionable frameworks to align workforce development with labor market demands. Methods: A narrative review of peer-reviewed literature (2015–2023), institutional reports (World Bank, PwC, NBS), and case studies (e.g., Andela’s model) was conducted. Data were synthesized to compare regional benchmarks (Kenya’s TVET, South Africa’s HRM reforms) and Nigeria’s performance (talent readiness score: 42/100). Results: Key findings include: (1) vocational training (60% readiness) outperforms tertiary education (40%); (2) apprenticeships and PPPs show high impact (30% job placement increase); (3) urban-rural and gender disparities persist (women are 30% less likely to access training). Private-sector models demonstrate scalability but require policy support. Conclusions: Nigeria’s skills crisis demands urgent, context-sensitive interventions. Blended strategies (e.g., industry-aligned curricula, gender-inclusive vocational programs) could unlock 5% annual GDP growth. Priorities include: (1) national skills councils to standardize certifications; (2) tax incentives for employer-led training; (3) digital infrastructure for rural upskilling. Closing Nigeria’s skills gaps would mitigate economic losses, reduce inequality, and enhance global competitiveness, transforming its youth bulge into a sustainable demographic dividend.
Background: The usability of medical device software (SW) significantly influences the clinical utility and effectiveness of treatment. Importantly, user experience (UX) with cognitive rehabilitation SW can directly affect treatment adherence and outcomes. Objective: To evaluate the usability of tablet-based cognitive rehabilitation SW and identify areas for improvement of the user-centered UX/user interface (UI). Methods: A formative evaluation utilizing cognitive walkthrough and the system usability scale (SUS) was conducted with occupational therapists, the primary users of the SW. User errors that may potentially occur during system navigation and interface issues were analyzed, whereas the usability level was quantitatively evaluated based on the SUS. Results: We identified the following key areas for improvement: adjustment of keypad size and position, addition of information before cognitive development evaluation, enhancement of the evaluation completion method, improvement of user manual accessibility, and refinement of the hint button and difficulty level adjustment interface. The mean SUS score was 73.5 points (B- grade), indicating an overall “Acceptable” usability level. Furthermore, the need for improvement was also identified in some UX/UI elements (e.g., utility, complexity, integration, unity, and satisfaction). To address these issues, this study proposed measures for enhancing the UX/UI, including improving UI intuitiveness, optimizing the evaluation process, and improving user manual accessibility. Conclusions: The proposed measures for identifying usability issues in digital rehabilitation SW and improving the UX may contribute to optimizing the UX of medical device SW and increase the prospect of clinical adoption.
Background: Polymorphisms of a protein or protein family are divergences of amino acid and nucleotide sequences that provide useful information on the divergent evolution of proteins. RNA viruses, such as hepatitis C virus (HCV), influenza virus, and SARS-CoV-2, are notorious for their ability to evolve rapidly under selection in novel environments. The high mutation rate of RNA viruses can generate enormous genetic diversity to facilitate viral adaptation. For HCV, the vast diversity of sequences is mainly due to its error-prone RNA polymerase. RNA viruses therefore offer a unique opportunity for the experimental study of molecular evolution. Objective: In this paper, I analyzed the polymorphisms of the HCV enzyme NS5B, for which sequence variation among most isolates has been characterized and protein structures of the catalytic domain are also known. Protein structure acts as a general constraint on the evolution of viral proteins. One widely recognized structural constraint explaining evolutionary variation among sites is the relative solvent accessibility (RSA) of residues in the folded protein. RSA, which measures the extent to which amino acid side chains are exposed on the surface of the protein or buried within the protein structure, has been shown to predict site-wise evolution in eukaryotes, bacteria, and some viral proteins. However, to what extent RSA can be used more generally to explain protein adaptation in other viruses, and in different proteins of any given virus, remains an open question. Methods: Multiple protein and nucleotide sequences were collected from sequence databases. Statistical analyses were used to relate polymorphisms to structural characteristics, and structure-function relationships were studied for this protein. Results: I found that protein sequence polymorphisms are correlated with residue solvent accessibility.
Apart from polymorphism, I found that conservation at every level among sites is universal for this protein. I also found that purifying selection at different levels was strong in shaping the polymorphism and conservation of this protein. Conclusions: Despite the high mutation rate owing to its error-prone RNA polymerase, there is still considerable conservation of the virus-encoded NS5B protein (and its other encoded proteins, data not shown) among all genotypes found worldwide, and strong conservation within every genotype and subtype, showing that selection against deleterious mutations is strong and that this purifying selection is the predominant form of selection at the molecular level. Simmonds proposed that there are constraints on RNA virus evolution at the level of virus RNA secondary structure; my study of the factors shaping the virus's 'ecological niche' in the human liver, including variation in viral proteins, likewise shows strong constraints on sequence change in this virus's encoded proteins at the levels of protein structure, function, and stability.
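[Editor's note] The reported correlation between site-wise polymorphism and relative solvent accessibility is the kind of relationship usually checked with a rank correlation, which is robust to the skewed distributions typical of both quantities. A minimal sketch (the per-site values below are hypothetical illustrations, not data from this study):

```python
def rank(values):
    """Average 1-based ranks, with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-site values: polymorphism (e.g. entropy) vs. RSA.
entropy = [0.05, 0.10, 0.40, 0.60, 0.90]
rsa     = [0.02, 0.15, 0.30, 0.55, 0.80]
print(round(spearman(entropy, rsa), 3))  # monotone increasing pair
```

In practice one would use scipy.stats.spearmanr on real per-site entropy and DSSP-derived RSA values; the hand-rolled version above only makes the computation explicit.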
Background: Students pursuing health professional education are preparing for professions in which they will interact with people in vulnerable life situations. Therefore, it is crucial that these students develop knowledge and skills in interaction, communication, and guidance during their education. In the context of health professions, this can be described as developing therapeutic competencies. We developed a learning resource that utilizes virtual learning environments, 360-degree video, and VR technology, allowing students to explore, observe, and practice therapeutic conversations in a safe setting. Objective: This study explores the use of a virtual learning resource in the development of therapeutic competence among students training to become health care professionals. Methods: The study was set up using an approach inspired by action research, in the sense that researchers were closely involved in the development and testing process. An important prerequisite was to facilitate systematic development and improvement based on students' experiences with using the learning resource.
The testing of the learning resource was conducted with two different test groups, recruited via study program leaders at a faculty of health science. A total of twelve students participated. The students were interviewed in focus groups. Results: The results indicated that students experienced increased engagement and learning outcomes compared with traditional teaching methods. They reported that the interactive approach provided a deeper understanding of complex topics, such as legislative frameworks and therapeutic practice, and that the resource promoted the development of practical skills. Conclusions: The study concludes that VR technology can be valuable in healthcare education, helping to prepare students for challenges in professional practice.
Background: Digital phenotyping refers to the objective measurement of human behavior via devices such as smartphones or smartwatches and constitutes a promising advancement in personalized medicine. Digital phenotypes derived from heart rate, mobility, or sleep schedule data have been utilized in psychiatry either to diagnose individuals with psychotic disorders or to predict relapse as a binary outcome. Machine learning models so far have achieved predictive accuracies that are statistically significant but not large enough for clinical application. This could hinge on broad clinical definitions, which encompass heterogeneous ensembles of symptoms and signs, thus hindering accurate classification. The five-factor model for the Positive and Negative Syndrome Scale (PANSS), which entails five independently varying dimensions, is thought to better capture symptom variability. Utilizing the specific definitions of this refined clinical taxonomy in combination with digital phenotypes could yield more precise results. Objective: The present study aims to investigate potential links between digital phenotypes and each dimension of the five-factor PANSS model. We also assess whether clinical, demographic, and medication variables confound these relations. Methods: In the E-prevention study, heart rate, accelerometer, gyroscope, and sleep schedule data were continuously collected via smartwatch for up to 24 months in 38 patients with psychotic spectrum disorders. Obtaining the mean and standard deviation for each patient-month resulted in a database of more than 740 monthly data points. A linear mixed model analysis was used to ascertain connections between monthly aggregated heart rate and mobility features and the five symptom dimension scores of the PANSS, obtained during monthly clinical interviews.
Results: The positive symptom dimension was associated with increased sympathetic and decreased parasympathetic tone, while the negative dimension was mainly connected to decreased mobility during wakefulness. For the excitement/hostility and depression/anxiety dimensions, we report an increase in motor activity during sleep, while only excitement/hostility was related to increased sympathetic heart activation and decreased sleep. The cognitive/disorganization dimension was related to decreased variability in sympathetic activation during wakefulness. Conclusions: This study provides evidence that biological changes assessed by continuous measurement of digital phenotypes could be characteristic of specific symptom clusters rather than entire diagnostic categories of psychotic disorders. These results support the use of digital phenotypes not only as a means of remote patient monitoring but also as concrete targets for biomarker research in psychotic disorders.
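[Editor's note] The aggregation step described in the Methods — one mean and one standard deviation per patient-month for each sensor feature — can be sketched with the standard library alone. The record layout below is a hypothetical simplification of a smartwatch export, not the E-prevention study's actual schema:

```python
import statistics
from collections import defaultdict

def monthly_features(records):
    """Collapse raw samples into per-patient-month summaries.
    `records` is an iterable of (patient_id, "YYYY-MM", value)
    tuples for a single feature (e.g. heart rate); returns
    {(patient_id, month): (mean, sd)} — one row per patient-month,
    the unit of analysis fed into the linear mixed model."""
    grouped = defaultdict(list)
    for patient, month, value in records:
        grouped[(patient, month)].append(value)
    return {
        key: (statistics.fmean(vals),
              statistics.stdev(vals) if len(vals) > 1 else 0.0)
        for key, vals in grouped.items()
    }

hr = [(1, "2023-05", 62.0), (1, "2023-05", 68.0), (1, "2023-06", 70.0)]
print(monthly_features(hr))
```

The mixed model itself (random intercepts per patient, fixed effects for the monthly features) would then be fitted with a statistics package such as statsmodels' `mixedlm`, which is beyond this sketch.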
Background: Central venous catheterization (CVC) is a very common procedure performed across medical and surgical wards as well as intensive care units. It provides relatively extended vascular access for critically ill patients for the administration of intricate life-saving medications, blood products, and parenteral nutrition.
Major vascular catheterization carries a risk of catheter-related infections as well as venous thromboembolism. Therefore, it is crucial to follow standardized practices during insertion and management of CVC in order to minimize infection risks and procedural complications. The aim of central line insertion guidelines is to address the primary concerns related to predisposition to central line-associated bloodstream infections (CLABSI). These guidelines are evidence-based, drawing on existing data on CVC insertion.
The most commonly used sites for central venous catheterization are the internal jugular and subclavian veins, as compared with the femoral veins. Catheterization of these vessels enables healthcare professionals to monitor hemodynamic parameters while ensuring lower risks of CLABSI and thromboembolism. The femoral vein is less preferred because it offers no advantage for invasive hemodynamic monitoring and carries a higher risk of local infection and thromboembolic phenomena.
CVC can be inserted using landmark-guided or ultrasound-guided techniques. Following informed consent, the aseptic technique for CVC insertion includes performing appropriate hand hygiene and ensuring personal protective measures, establishing and maintaining a sterile field, preparing the site with chlorhexidine, and draping the patient in a sterile manner from head to toe. Additionally, the catheter is prepared by pre-flushing and clamping all unused lumens, and the patient is placed in the Trendelenburg position. Throughout the procedure, maintaining a firm grasp on the guide wire is essential; the wire is removed post-procedure. This is followed by flushing and aspirating blood from all lumens, applying sterile caps, and confirming venous placement. The procedure ends with cleaning the catheter site with chlorhexidine and applying a sterile dressing.
Hence, formal training in and knowledge of standardized practices for CVC insertion are essential for healthcare professionals in order to prevent CLABSI. Our audit assesses the current practices of doctors working at a tertiary care hospital to analyze their background knowledge of standard practices to prevent CLABSI during insertion of CVC. Objective: This study aimed to audit and re-audit residents’ practices of central venous line insertion in the medical and nephrology units of a tertiary care hospital in Rawalpindi, Pakistan, and to assess residents' adherence to the checklist and practice guidelines for CVC insertion implemented by Johns Hopkins Hospital and the American Society of Anesthesiologists. Methods: This audit was conducted as a cross-sectional direct observational study and two-phase quality improvement project in the medical and nephrology units of a tertiary care hospital in Rawalpindi from December 2023 to February 2024.
After taking informed consent from patients and residents, CVC insertion in 34 patients by 34 individual residents was observed. Observers were given a purpose-designed observational tool, based on the Johns Hopkins Medicine checklist and ASA practice guidelines for central line insertion, for assessment of residents’ practices.
The first part contained questions on the demographic details of residents, such as age, gender, year of postgraduate training, and parent department, and data related to the procedure, such as date and time of the procedure, whether the need for CVC was discussed during rounds, site of CVC insertion, catheter type, and type of procedure (landmark-guided or ultrasound-guided CVC insertion). The second part included a direct observational checklist, based on the checklist provided for prevention of intravascular catheter-associated bloodstream infections, to audit residents' practices during CVC insertion, which included: adequate hand hygiene before insertion, adherence to aseptic techniques, use of sterile personal protective equipment and a sterile full-body drape for the patient, and choosing the best insertion site to minimize infections based on patient characteristics.
Items observed to be performed completely were scored "1" and items not performed were scored "0". The cumulative percentage of performed practices according to the checklist was considered satisfactory if it was 80% or more and unsatisfactory if it was less than 80%.
After the initial audit, participants were given pamphlets with a checklist incorporating the Johns Hopkins Medicine checklist and ASA practice guidelines for CVC insertion. The re-audit was performed one month later with the same participants. The results of the audit and re-audit were analyzed using SPSS version 25. Mean ± SD was calculated for quantitative variables, and frequency (N) and percentage were calculated for qualitative variables. The Z-test was applied to proportions of parameters and test scores to calculate Z-scores and P values (<0.05 was considered significant). Results: Among the 34 participants, 44% belonged to the Nephrology Department and 56% to the Department of Internal Medicine.
32.3% of residents were in their first year of training, 14.7% in their second, 14.7% in their third, 17.6% in their fourth, and 17.6% in their fifth/final year.
47% of the participants were male and 53% were female. Participants were aged between 27 and 34 years; the median age at the time of the audit was 29 years.
Landmark-guided CVC insertion was performed in the subclavian vein (73.5%) and the internal jugular vein (26.5%).
Post-audit, satisfactory practices improved from 73.5% to 94%. Conclusions: Our audit found that many residents adopted inadequate practices because of a lack of proper training and institutional guidelines for CVC insertion. Our re-audit demonstrated an improvement in residents' practices following the intervention with educational material. Our study underscores the importance of structured quality improvement initiatives in enhancing clinical practices and patient outcomes.
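[Editor's note] The headline improvement (73.5% vs. 94% satisfactory practice, n=34 at each phase) can be checked with the pooled two-proportion z-test named in the Methods. The sketch below uses the normal approximation; the counts 25/34 and 32/34 are back-calculated from the reported percentages and are therefore an assumption:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-sample z-test for proportions.
    Returns (z, two_sided_p) under the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Audit (25/34 ≈ 73.5%) vs. re-audit (32/34 ≈ 94%).
z, p = two_proportion_ztest(25, 34, 32, 34)
print(round(z, 2), round(p, 3))  # p < 0.05 → significant improvement
```

With these counts the test gives z ≈ -2.31 and p ≈ 0.02, consistent with the abstract's conclusion of a significant post-intervention improvement.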
Background: Labor shortages in healthcare pose significant challenges to sustaining high-quality care for people with intellectual disabilities (PwID). Social robots show promise in supporting both PwID and their healthcare professionals, yet few are fully developed and embedded in productive care environments. Implementation of such technologies is inherently complex, requiring careful examination of facilitators and barriers influencing sustained use. Objective: This research aims to evaluate the value creation and implementation of the social robot Ivy for PwID and healthcare professionals, examining facilitators and barriers to sustained use across six care organizations. Methods: A qualitative field study was conducted involving 19 cases of robot implementation across six care organizations in the Netherlands; each case consisted of a PwID (client) and the involved healthcare professionals. The study examined actual robot deployment in daily care practice between April and October 2023. Semi-structured interviews were conducted with healthcare professionals after two months of implementation. Analysis followed a thematic approach guided by the NASSS framework and a model for tracing facilitators of and barriers to the adaptive implementation of the robot. Facilitators were classified as key drivers (complex), enablers (complicated), or minor benefits (simple), while barriers were categorized as deal-breakers (complex), obstacles (complicated), or minor hurdles (simple). The robot's sustained use (i.e., robot use continuance at two months post-implementation) served as a key indicator of success. Results: After two months, robot use was sustained in 12 of 19 cases (63%). For successful cases, key value emerged for both clients (enhanced daily structure, improved emotional well-being through non-judgmental interactions, increased independence) and healthcare professionals (reduced workload through automation, improved quality of client interactions, reduced emotional burden).
Sustained use was determined by client characteristics (cognitive capabilities, care predictability), healthcare professional factors (available time, digital competency), contextual conditions (timing, connectivity), and organizational support (training, resources). Main implementation barriers included complex/unpredictable care needs, insufficient programming time, and contextual factors influencing care environments. Conclusions: The findings inform long-term care organizations on the implementation and value of sustained use of social robot Ivy for both PwID and their caregivers. Social robot Ivy demonstrates potential for supporting care delivery to PwID when implemented under appropriate conditions. Success requires careful matching of robot capabilities with client needs, sufficient time and support for healthcare professionals, and stable care environments. Future research should examine longer-term sustainability and integrate direct client feedback.
Background: This study sought to validate the application of the Human Activity Profile (HAP) questionnaire via telephone call in patients with cardiovascular disease (CVD). Objective: The objective of this study is to investigate the validity of applying the HAP questionnaire via telephone call to patients with cardiovascular disease participating in a cardiovascular rehabilitation program. Methods: Two scores were calculated from the HAP: the maximum activity score (MAS), the item number of the most difficult task the respondent is “still doing”; and the adjusted activity score (AAS), the MAS minus the number of items that the individual has “stopped doing” prior to the last one that he or she “still does”. Patients with CVD answered the HAP questionnaire on two occasions in random order, face-to-face and by telephone call. Results: Fifty-six patients with CVD (64.30% men) with a mean age of 75.14±10.28 years participated in this study. The MAS was similar in both administration modes (face-to-face: 79.11±11.48; telephone call: 82.71±7.48; p=0.101). Similarly, the AAS did not differ between modes (face-to-face: 69.11±14.18; telephone call: 71.21±13.43; p=0.52). There was high agreement between the two modes of administration (ICC=0.999; 95%CI 0.879-0.948; p<0.05). The mean bias and 95% limits of agreement from the Bland-Altman plots for the MAS and AAS (face-to-face vs. telephone call) were, respectively, -4.0 (95%CI 12.1 to -19.3) and -2.1 (95%CI 13.4 to -17.6). Conclusions: The MAS and AAS of the HAP can be administered by telephone call in patients with CVD. Clinical Trial: CAAE 58283422.0.0000.5134 (number: 5.646.387)
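[Editor's note] The two HAP summary scores defined in the Methods can be made concrete in a few lines. The sketch assumes items ordered by metabolic demand, each answered "still doing", "stopped doing", or "never did"; the response strings and the toy 6-item profile are illustrative (the actual HAP has 94 items):

```python
def hap_scores(responses):
    """Compute HAP summary scores from an ordered list of item
    responses ("still", "stopped", or "never"), item 1 first.
    MAS = item number of the highest-demand activity still done.
    AAS = MAS minus the number of activities stopped below it.
    Returns (mas, aas); (0, 0) if no activity is still done."""
    mas = 0
    for i, r in enumerate(responses, start=1):
        if r == "still":
            mas = i
    if mas == 0:
        return 0, 0
    stopped_below = sum(1 for r in responses[: mas - 1] if r == "stopped")
    return mas, mas - stopped_below

# Toy 6-item profile: highest item still done is item 5,
# with two items stopped below it → MAS 5, AAS 3.
print(hap_scores(["still", "stopped", "still", "stopped", "still", "never"]))
# → (5, 3)
```

Items answered "never did" below the MAS do not penalize the AAS, which is why the two scores can diverge in the way the reported means (MAS ≈ 79 vs. AAS ≈ 69) suggest.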
Background: Growing evidence indicates an association between cognitive impairments and sleep disorders, two highly prevalent public health conditions. However, the causality and direction of this association are not clear. Objective: We conducted a two-sample Mendelian randomization (MR) analysis using publicly available genome-wide association study (GWAS) data to assess the potential causal association between sleep disorders and cognitive performance. Methods: GWAS summary data on cognitive performance in 260,354 participants from the Social Science Genetic Association Consortium were used to identify general cognitive performance. Data for 386,533 individuals from the Complex Trait Genetics Lab were used to identify insomnia disorder. Data for 345,552 individuals from the Complex Trait Genetics Lab were used to identify morningness. The weighted median, inverse-variance weighted, and MR-Egger methods were used in the Mendelian randomization analysis to estimate causal effects and detect directional pleiotropy. Results: GWAS summary data were obtained from three combined samples, containing 260,354, 386,533, and 345,552 individuals of European ancestry, respectively. Mendelian randomization evidence suggested that higher cognitive performance reduced the onset of insomnia disorder (P<0.001) and that cognitive performance could be decreased by morningness (P<0.05). In contrast, there were no reliable results describing an effect of insomnia disorder on cognitive performance or of cognitive performance on morningness (P>0.05). Conclusions: Using large-scale GWAS data, robust evidence supports causal effects of cognitive performance on insomnia disorder and of morningness on cognitive performance, but no effect of insomnia disorder on cognitive performance or of cognitive performance on morningness was observed.
This study indicates a potential marker for the early identification of sleep disorders, while also offering possible explanations for previously conflicting results.
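[Editor's note] The inverse-variance weighted (IVW) estimator named in the Methods combines per-variant Wald ratios, weighting each by the precision of its outcome association. A minimal fixed-effect sketch (the three variants' summary statistics below are hypothetical, not values from these GWAS):

```python
import math

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect IVW causal estimate from per-SNP summary stats:
    beta_hat = sum(bx*by/se^2) / sum(bx^2/se^2),
    with standard error 1 / sqrt(sum(bx^2/se^2))."""
    num = sum(bx * by / se ** 2
              for bx, by, se in zip(beta_exp, beta_out, se_out))
    den = sum(bx ** 2 / se ** 2
              for bx, se in zip(beta_exp, se_out))
    return num / den, 1 / math.sqrt(den)

# Hypothetical SNP-exposure and SNP-outcome associations.
bx = [0.10, 0.08, 0.12]        # SNP -> exposure (cognitive performance)
by = [-0.020, -0.018, -0.025]  # SNP -> outcome (insomnia disorder)
se = [0.005, 0.006, 0.004]     # SE of the outcome associations
beta, se_beta = ivw_estimate(bx, by, se)
print(round(beta, 3))
```

With one variant, the estimate reduces to the single Wald ratio by/bx; the weighted median and MR-Egger methods mentioned in the abstract are robustness checks built on the same per-variant ratios.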
Background: Biotinidase deficiency is a metabolic disorder that is included in newborn screening programmes. This condition prevents individuals from performing biotin metabolism properly and can lead to serious health problems when left untreated. It is important that health resources are understandable and accessible for families to be informed about the disease and to be directed to the right treatment. Objective: This study evaluates the readability and understandability of online resources on biotinidase deficiency, a metabolic disorder included in newborn screening programs. The aim is to determine whether these materials meet health literacy standards. Methods: Fifty online documents were initially identified via Google searches using “biotinidase deficiency.” After excluding academic articles, duplicates, and inaccessible resources, 21 documents were analyzed. They were categorized as nonprofit (13) or private (8) based on domain extensions. Readability was assessed using Readable.io, providing Flesch Reading Ease scores. The Patient Education Materials Assessment Tool (PEMAT) was used to evaluate understandability and actionability, with scores averaged by four reviewers. Statistical analyses compared group differences. Results: Private articles had significantly higher Flesch Reading Ease scores, indicating more difficult readability (mean ± SD: 13.9 ± 2.2 vs. 10.7 ± 2.0; p = 0.002). PEMAT understanding (U) scores showed no significant difference between private and nonprofit articles (mean ± SD: 52.0 ± 10.5 vs. 42.3 ± 11.4; p = 0.060). Similarly, actionability (A) scores were not significantly different (mean ± SD: 29.1 ± 20.0 vs. 13.4 ± 18.0; p = 0.063). Articles with lower readability (levels D and E) had significantly lower actionability scores compared to higher readability articles (levels A to C). Conclusions: Most online biotinidase deficiency resources fail to meet health literacy standards, particularly in readability and actionability. 
Improving clarity and usability is essential to better support families managing this condition. The study emphasizes patient-centered approaches for creating effective health education materials.
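[Editor's note] One detail worth flagging: on the standard Flesch Reading Ease scale (0-100), higher scores mean *easier* text, whereas the values reported here (10.7-13.9, with higher interpreted as harder) behave like Flesch-Kincaid grade levels. The grade-level formula, shown with a naive vowel-group syllable counter (an approximation; commercial tools such as Readable use more careful counting and may define scores differently):

```python
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (naive heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid Grade Level for non-empty English text:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

print(round(fk_grade("The cat sat on the mat."), 2))
# → -1.45 (short monosyllabic sentences score below grade 0)
```

A score near 13 corresponds roughly to college-entry reading level, which would be consistent with the abstract's conclusion that most of the analyzed materials exceed recommended health-literacy levels.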
Background: Patients with multimorbidity have complex healthcare needs and are at high risk for adverse health outcomes. Primary care teams need tools to effectively and proactively plan care for these patients. We developed VET-PATHS (VETeran PAnel management Tool for High-risk Subgroups), a novel care planning informatics tool for complex primary care patients. VET-PATHS a) groups patients by chronic condition profile via latent class analysis of electronic health record (EHR) data, then b) jumpstarts care planning by suggesting ‘care steps’ based on data-driven, high-priority care for the group that EHR data indicate has not been received. Objective: To iteratively adapt VET-PATHS with user input, then test the feasibility and acceptability of tool use by frontline primary care teams for their empaneled high-risk patients. Methods: Three rounds of user-centered design sessions with 17 primary care providers and registered nurses were held at 5 sites from 2019-2021, for feedback on VET-PATHS layout, content, and user interface. Feedback was summarized into 4 user experience domains (useful, desirable, credible, and usable), leading to progressively updated prototypes. After national tool release, we conducted a pilot intervention study in 2023-2024 with 6 primary care teams at 4 sites. Teams used VET-PATHS during asynchronous regular meetings. Tool use and resulting care plans were assessed by templated observation during meetings and post-pilot chart review. Individual qualitative interviews were analyzed by rapid template analysis for themes of feasibility, acceptability, and utility. Results: User-centered feedback led to updates in tool content, context (e.g., use in proactive panel management), targeting of users (e.g., focusing on primary care providers as the principal users), and layout of informational displays.
Pilot intervention teams used VET-PATHS over 4-8 weekly team meetings (mean length 24 min, range 16-49 min), in which they actively reviewed 80% (280/351) of empaneled high-risk patients visible in the tool. Tool use prompted teams to plan 127 new actions for 91 unique patients (33% of patients reviewed) and to document >1 new care plan for 19% of patients reviewed. Common actions included requests to return to clinic (27%), referrals (20%), and vaccinations (19%). Of the actions planned, 53 (42%) were received by patients. Four teams with general patient panels (n=11 interviews) described higher acceptability. Two ‘focused’ teams with smaller, more homogeneous patient panels (e.g., substance use disorder; n=3 interviews) found the care steps less useful. Teams described how VET-PATHS improved the efficiency of care planning through automated patient grouping and identification of care gaps, and increased multidisciplinary role involvement. Conclusions: User-centered improvements to VET-PATHS were designed to help clinicians process and use complex information about patient multimorbidity to efficiently create new care plans. In subsequent production use, VET-PATHS was acceptable and feasible for frontline primary care teams, particularly those with larger, more heterogeneous patient panels, and led to concrete changes in clinical care delivery. Clinical Trial: N/A
Background: Chronic pain management in older adults can be challenging for primary care clinicians because of comorbidities, side effects, and complicated guideline recommendations. Clinical decision support systems (CDSS) can enhance guideline adherence in chronic pain management by collecting and organizing patient information and aiding clinician decision-making. This study examined clinicians’ views on challenges in managing chronic pain and their opinions on a CDSS that gathered patient preferences and provided clinicians with decision support for chronic pain management. Objective: The objective of this study was to explore primary care clinicians’ perspectives on the challenges of managing chronic pain in older adults and evaluate their opinions on a clinical decision support system (CDSS) designed to gather patient preferences and facilitate guideline-based, multimodal pain management. Methods: We conducted semi-structured interviews with 18 clinicians from two University of Chicago Medicine primary care clinics piloting the CDSS. The interview guide was informed by the Consolidated Framework for Implementation Research. Results: Participants included 89% physicians and 11% advanced practice nurses. Participants stressed the importance of a comprehensive, patient-centered approach to chronic pain management and favored multimodal and non-pharmacological treatments. Challenges included complex medical histories, competing priorities, insurance limitations, and opioid misuse concerns. Clinicians found the CDSS beneficial for promoting multimodal care discussions and enhancing visit efficiency. However, there were concerns regarding its complexity, its workflow compatibility, and the difficulty some older patients have navigating technology. While tools such as the pre-visit questionnaire and conversation tool were valued, clinicians emphasized the need for adaptability and streamlined usability.
Conclusions: The primary care clinicians in this study were aligned with clinical practice guidelines in providing patient-centered pain management using multimodal treatments. However, they had several concerns about how complex chronic pain management can be for older adult patients. They expressed interest in using the CDSS but were concerned about its complexity. I-COPE offers a promising approach to support guideline-based chronic pain and opioid management in primary care. By addressing usability and workflow compatibility, CDSS tools like I-COPE can better equip clinicians to provide comprehensive, patient-centered care, ultimately enhancing treatment outcomes for older adults with chronic pain.
Background: In Bangladesh, as throughout the world, children's screen time has significantly increased. Children spend a great deal of time on the internet and digital screens for entertainment, education, and communication, which has increased their daily screen time. However, the potential detrimental impacts of excessive screen time on children's mental, physical, and social health have drawn attention. Objective: This study aimed to explore the effect of high screen exposure on the health and mental well-being of school-going children in Dhaka, Bangladesh. Methods: This cross-sectional descriptive study was carried out from July 2022 to June 2024. A total of 420 children between the ages of 6 and 14 were enrolled from three English-medium and three Bangla-medium schools in Dhaka city using a stratified random sampling technique. Anthropometric measurements, a semi-structured questionnaire, the Pittsburgh Sleep Quality Index (PSQI), the Development and Wellbeing Assessment (DAWBA), and the Strengths and Difficulties Questionnaire (SDQ), validated in Bangla, were used to gather data. Students exposed to screens for less than 2 hours were considered the low-exposure group and those exposed for more than 2 hours the high-exposure group. Results: We found that 83% of students were in the high-exposure group, with an average screen time of 4.6 ± 2.3 hours. Compared to the low-exposure group, the high-exposure group had a significantly higher rate of eye problems (96% vs 4%, P < 0.001). Headache was also common in the high-exposure group (83%). Moreover, students in the high-exposure group had significantly shorter sleep duration and poorer sleep quality. Furthermore, obesity was more predominant in the high-exposure group (p < 0.001). Using the DAWBA scale, our study revealed that 40% of children overall suffered from mental health problems, with a higher rate in the high-exposure group than in the low-exposure group.
Behavioral problems, such as conduct issues (28.3%) and peer difficulties (28.8%), were observed among the participants; however, no statistically significant difference was found between the two groups. Conclusions: A collaborative and coordinated multistage approach will be required to create effective and acceptable guidelines and policies for the optimal and positive use of digital screens by children in Bangladesh. Further prospective studies on larger scales can be conducted to determine the impacts on health aspects more meticulously.
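Group comparisons of the kind reported above (e.g., eye problems in high- vs low-exposure children) are commonly tested with a 2x2 chi-square. A minimal stdlib sketch follows; the counts are invented for illustration, not the study's data.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table:
                  outcome+  outcome-
    exposed          a         b
    unexposed        c         d
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 90/100 high-exposure vs 10/100 low-exposure with eye problems
stat = chi2_2x2(90, 10, 10, 90)
```

The statistic is compared against the chi-square distribution with 1 degree of freedom (critical value 3.84 at P = .05); values this large correspond to P < 0.001, as reported.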
Background: Generation Scotland (GS) is a genetic family health cohort study established in 2006 (N~24,000). A new wave of recruitment was initiated in 2022, aimed at adding a further 20,000 new participants to the cohort using online data collection and remote saliva sampling (for genotyping and DNA methylation profiling). Eligible individuals included anyone living in Scotland aged over 12 years. New participants give consent for linkage to their medical and administrative records and provide a saliva sample for DNA. The current study evaluates the different strategies employed to recruit new participants aged 16+ to the GS cohort (recruitment of ages 12-15 will be presented separately due to additional strategies). Objective: This study aimed to evaluate the strategies employed to recruit new participants to the GS cohort. Recruitment strategies were compared in terms of overall numbers as well as sociodemographic characteristics of the recruits, sample return rates, and cost-effectiveness. Methods: From May 2022 to the end of December 2023, recruitment was undertaken by the following methods: snowball recruitment (through friends and family of existing volunteers), invitations to those who participated in a previous survey during the pandemic (CovidLife: the GS COVID-19 impact survey), and Scotland-wide recruitment through social media (including sponsored Meta advertisements), news media, and TV advertising. The method of recruitment was self-reported by participants in the baseline questionnaire. Results: Over this period, 7,889 new participants were recruited to the cohort. By strategy, this comprised, in descending order: social media (N=2,436, 30.9%), CovidLife survey responder invitations (N=2,049, 26.0%), TV advertising (N=1,367, 17.3%), snowball (N=891, 11.3%), news media (N=747, 9.5%), and other methods/unknown (N=399, 5.0%). More females signed up than males (70.5% female participants).
To date, 83.5% of participants have returned their postal saliva sample. Sample return varied across demographic groups (>60 years, 90.5% vs 16-34 years, 71.1%). The average cost per participant across all recruitment strategies was £13.52. Past survey invitations (CovidLife) were the most cost-effective at £0.37 per recruit, social media cost £14.78 per recruit, whilst TV advertising was the most expensive at £33.67. Conclusions: We present the challenges and successes of recruiting new participants to a large ongoing cohort using remote assessment. Besides targeting existing survey responders, social media advertising has been the most cost-effective and easily sustained recruitment strategy. We note that different strategies resulted in successful recruitment over varying timescales (e.g., consistent sustained recruitment for social media, and large spikes for news media and TV advertising), which may be informative for future studies with different recruitment-period requirements. Limitations include self-reported methods of recruitment and difficulties in capturing multi-layered recruitment. Overall, these data demonstrate the potential cost requirements and effectiveness of different strategies that could be applied to future research studies. Future work will report the successes and challenges of recruitment activities aimed at younger individuals, under 16 years.
Background: Lassa fever, an acute viral hemorrhagic illness endemic in West Africa, remains a significant public health concern in Nigeria, particularly in Edo, Ondo, and Kwara States. Despite recurrent outbreaks, limited data exist on the knowledge, attitudes, and practices (KAP) of residents and healthcare personnel across these states, creating a critical research gap. Effective prevention and control require a thorough understanding of these factors to inform targeted interventions and policy decisions. Objective: This study aimed to assess the KAP of residents and primary healthcare (PHC) personnel regarding Lassa fever across Edo, Ondo, and Kwara States. Specifically, it examined awareness levels, preventive behaviors, misconceptions about transmission, and compliance with infection control measures, including the use of personal protective equipment (PPE). The findings provide insights for evidence-based interventions to reduce the burden of Lassa fever in these endemic regions. Methods: A cross-sectional survey was conducted among 3,582 residents and 540 PHC personnel across Edo, Ondo, and Kwara States. Data were collected through structured questionnaires assessing knowledge, attitudes, and practices related to Lassa fever. Statistical analyses, including cross-tabulations and the Relative Importance Index (RII), were employed to identify patterns and disparities across different residential and professional groups. Results: Among residents, 80.1% recognized Lassa fever as a severe illness, yet only 6.9% had participated in awareness campaigns. Preventive behaviors were inadequate, with only 12.1% storing food in rodent-proof containers and 25.4% engaging in frequent environmental sanitation. Knowledge gaps persisted, as only 3% were aware of the disease’s 1–21-day incubation period, and 0.3% acknowledged sexual transmission. 
Socioeconomic disparities significantly influenced compliance with sanitation measures (p < 0.001), with higher-income households demonstrating better adherence. Furthermore, preventive practices such as using traps (14.5%) and participating in sanitation campaigns (6.8%) varied significantly by residence type (p < 0.001). PHC personnel demonstrated strong theoretical knowledge, with an RII of 0.960 for key facts, including the classification of Lassa fever as a viral hemorrhagic illness and the identification of rats as primary reservoirs. However, only 84% recognized alternative reservoirs such as bats and mosquitoes. PPE adherence was poor, particularly for facemasks and eye protection (RII = 0.217), highlighting significant gaps in infection control practices. Conclusions: The study reveals critical gaps in awareness, preventive behaviors, and infection control measures across Edo, Ondo, and Kwara States. While healthcare workers displayed strong theoretical knowledge, practical compliance with PPE use was insufficient, posing a risk of disease transmission. Addressing these gaps is essential for effective Lassa fever control. Targeted health education campaigns should be implemented to enhance public awareness and dispel misconceptions about Lassa fever transmission. Strengthened training programs for PHC personnel, stricter PPE compliance policies, and improved access to sanitation resources should be prioritized. Additionally, community-based interventions, including regular environmental sanitation and rodent control, should be encouraged to reduce exposure risks. Bridging the knowledge and practice gaps in Lassa fever prevention is essential to mitigating outbreaks, reducing fatalities, and strengthening public health resilience in Edo, Ondo, Kwara States, and other endemic regions.
Background: Large Language Models (LLMs) continue to enjoy enterprise-wide adoption in healthcare while evolving in number, size, complexity, cost, and, more importantly, performance. Performance benchmarks play a critical role in their ranking across community leaderboards and subsequent adoption. Objective: Given the small operating margins of healthcare organizations and growing interest in LLMs and conversational AI, there is an urgent need for objective approaches that can assist in identifying viable LLMs without compromising their performance. The objective of the present study is to generate a taxonomy portrait of medical LLMs (N = 33) whose domain-specific and domain non-specific multivariate performance benchmarks were available from the Open-Medical LLM and Open LLM leaderboards on Hugging Face. Methods: Hierarchical clustering of multivariate performance benchmarks is used to generate taxonomy portraits revealing inherent partitioning of the medical LLMs across diverse tasks. While the domain-specific taxonomy is generated using nine performance benchmarks related to medicine from the Hugging Face Open-Medical LLM initiative, the domain non-specific taxonomy is presented in tandem to assess performance on a set of six benchmarks on generic tasks from the Hugging Face Open LLM initiative. Subsequently, the non-parametric Wilcoxon rank-sum test and linear correlation are used to assess differential changes in the performance benchmarks between two broad groups of LLMs and potential redundancies between the benchmarks. Results: Two broad families of LLMs with statistically significant differences (α = 0.05) in performance benchmarks are identified for each of the taxonomies. Consensus in their performance on the domain-specific and domain non-specific tasks revealed the inherent robustness of these LLMs across diverse tasks.
Subsequently, statistically significant correlations between performance benchmarks revealed inherent redundancies, indicating that a subset of these benchmarks may be sufficient for assessing the domain-specific performance of medical LLMs. Conclusions: Understanding the medical LLM taxonomies is an important step in identifying LLMs with similar performance while aligning with the needs, economics, and other demands of healthcare organizations. While the focus of the present study is on a subset of medical LLMs from Hugging Face, enhanced transparency of performance benchmarks and economics across a larger family of medical LLMs is needed to generate more comprehensive taxonomy portraits, accelerating their strategic and equitable adoption in healthcare. Clinical Trial: Not applicable
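The analysis pipeline described here (partitioning LLMs into two families, then applying the Wilcoxon rank-sum test and benchmark correlations) can be sketched as follows. The score matrix is synthetic, and plain 2-means stands in for the study's hierarchical clustering; all numbers are assumptions for illustration.

```python
import numpy as np

# Synthetic stand-in for the 33-LLM x 9-benchmark score matrix (assumption:
# real scores would come from the Open-Medical LLM leaderboard).
rng = np.random.default_rng(0)
strong = rng.normal(0.75, 0.05, (16, 9))   # hypothetical higher-performing family
weak = rng.normal(0.55, 0.05, (17, 9))     # hypothetical lower-performing family
scores = np.vstack([strong, weak])

# Partition into two families; 2-means here is a simple stand-in for
# the hierarchical clustering used in the study.
centers = scores[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((scores[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == j].mean(axis=0) for j in (0, 1)])

# Wilcoxon rank-sum statistic (normal approximation; no ties expected for
# continuous scores) comparing one benchmark across the two families.
def ranksum_z(a, b):
    pooled = np.concatenate([a, b])
    ranks = pooled.argsort().argsort() + 1.0   # ranks 1..n1+n2
    n1, n2 = len(a), len(b)
    w = ranks[:n1].sum()
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sigma

z = ranksum_z(scores[labels == 0, 0], scores[labels == 1, 0])

# Redundancy check: pairwise correlations across the nine benchmarks.
corr = np.corrcoef(scores.T)
```

Highly correlated benchmark pairs in `corr` are the redundancies the abstract refers to: a subset of weakly correlated benchmarks would carry most of the ranking information.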
Background: The utility of online engagement in enhancing quality of life and mitigating social isolation among older adults is well-documented. However, its effects on cognitive functions, mainly through online social engagement, require further exploration. Objective: This study investigates the potential of active online engagement via a Virtual Senior Center (VSC) to enhance subjective memory capability among older adults, thereby potentially improving their psychological well-being and reducing loneliness. Methods: We utilized a cohort of 53 homebound older adults participating in the VSC program, which offers diverse online classes to promote social interaction, and used path analysis to investigate the relationships between online engagement, subjective memory capability, the quality of social relationships, and overall well-being. Results: The findings reveal that increased participation in VSC activities is significantly associated with improved subjective memory capability. Conclusions: This enhanced self-assessment of memory capability is linked to a better quality of life and reduced loneliness. Although online engagement showed no direct association with these outcomes, the indirect effects suggest the critical role of positive subjective memory capability, fostered through online engagement, in enriching social interactions. This posits the potential of digital platforms to augment traditional methods of socialization, especially for those contending with physical or geographical barriers to interaction.
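The mediation structure described here (engagement improves subjective memory, which in turn relates to well-being, with no direct path) can be sketched as two stacked least-squares regressions. The data, effect sizes, and variable names below are invented to mirror the pattern of findings, not the study's estimates.

```python
import numpy as np

# Illustrative mediation sketch: engagement -> memory (a path) ->
# well-being (b path), plus a possible direct path that is truly zero here.
rng = np.random.default_rng(0)
n = 53                                         # cohort size from the abstract
engagement = rng.normal(size=n)
memory = 0.5 * engagement + rng.normal(scale=0.5, size=n)
wellbeing = 0.6 * memory + rng.normal(scale=0.5, size=n)

def ols_slopes(y, X):
    """Least-squares slopes (intercept fitted but dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(memory, engagement)[0]                        # engagement -> memory
b, direct = ols_slopes(wellbeing, np.column_stack([memory, engagement]))
indirect = a * b   # the mediated effect; 'direct' estimates the direct path
```

The estimated `direct` coefficient stays near zero while `indirect` is positive, which is the pattern the abstract reports: no direct association, but a meaningful indirect effect through subjective memory.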
Background: Harmful suicide content on the internet poses significant risks, as it can induce suicidal thoughts and behaviors, particularly among vulnerable populations. Despite global efforts, existing moderation approaches remain insufficient, especially in high-risk regions like South Korea, which has the highest suicide rate among OECD countries. Previous research has primarily focused on assessing the suicide risk of individuals rather than the harmfulness of content itself, highlighting a gap in automated detection systems for harmful suicide content. Objective: In this study, we aimed to develop an AI-driven system for classifying online suicide-related content into five levels: illegal, harmful, potentially harmful, harmless, and non-suicide-related. Additionally, we constructed a multimodal benchmark dataset with expert annotations to improve content moderation and assist AI models in detecting and regulating harmful content more effectively. Methods: We collected 43,244 user-generated posts from various online sources, including social media, Q&A platforms, and online communities. To reduce the workload on human annotators, GPT-4 was used for pre-annotation, filtering and categorizing content before manual review by medical professionals. A task description document ensured consistency in classification. Ultimately, a benchmark dataset of 452 manually labeled entries was developed, including both Korean and English versions, to support AI-based moderation. The study also evaluated zero-shot and few-shot learning to determine the best AI approach for detecting harmful content. Results: On the multimodal benchmark dataset, GPT-4 achieved the highest F1 scores (66.46 for illegal and 77.09 for harmful content detection). Image descriptions improved classification accuracy, while directly using raw images slightly decreased performance.
Few-shot learning significantly enhanced detection, demonstrating that small but high-quality datasets can improve AI-driven moderation. However, translation challenges were observed, particularly with suicide-related slang and abbreviations, which were sometimes inaccurately conveyed in the English benchmark. Conclusions: This study provides a high-quality benchmark for AI-based suicide content detection, showing that LLMs can effectively assist in content moderation while reducing the burden on human moderators. Future work will focus on enhancing real-time detection and improving the handling of subtle or disguised harmful content.
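The per-class F1 scores used to evaluate GPT-4 here reduce to one-vs-rest precision and recall over the five levels. A generic sketch follows; the level names come from the abstract, but the example labels are invented.

```python
def f1_score(y_true, y_pred, cls):
    """One-vs-rest F1 for a single class label."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

levels = ["illegal", "harmful", "potentially harmful", "harmless", "non-suicide-related"]
# Invented example annotations for illustration
y_true = ["illegal", "harmful", "illegal", "harmless"]
y_pred = ["illegal", "illegal", "illegal", "harmless"]
score = f1_score(y_true, y_pred, "illegal")   # precision 2/3, recall 1
```

Scoring each level separately, as above, is what lets the abstract report distinct F1 values for the illegal and harmful classes.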
Background: Low birth weight (LBW) is linked to higher risks of neonatal morbidity, developmental delays, and long-term health issues. Although baby massage has been shown to enhance growth and neurodevelopmental outcomes in LBW infants, access to suitable training and compliance with massage regimes continue to be challenges, especially in low-resource settings. Mobile health (mHealth) interventions represent a novel solution to these challenges through the provision of scalable and standardized education in baby massage. Objective: This study aimed to design and test a human-centered baby massage mobile application (app) to promote the growth and development of LBW infants in Indonesia. Methods: We used a human-centered iterative design framework to create the mobile application. The System Usability Scale (SUS) was used to assess the usability of the application among 42 caregivers of LBW infants. Feedback was collected qualitatively through semi-structured interviews, and thematic analysis was applied to understand user experience. A pilot study assessed the impact of the application on caregiver knowledge, confidence, adherence to baby massage practices, and infant growth outcomes. Pre- and post-intervention weight assessments were compared using paired t-tests. Results: The mobile application was found to have high usability, evidenced by an average SUS score of 78.6 (SD = 8.2). The majority of participants (85%) rated the app as "excellent" or "good" with respect to ease of use and navigation. Qualitative feedback emphasized its effectiveness at increasing caregiver confidence and its cultural relevance. Statistical analyses from the pilot study showed significant gains in caregiver knowledge (+20.5 points, p < 0.01), confidence (+21.6 points, p < 0.01), and adherence to baby massage practices (+20.3 points, p < 0.05).
Statistically significant improvements in weight (+330 grams, p < 0.01) and head circumference (+1.3 cm, p < 0.01) were observed in infants in the intervention group. Conclusions: The human-centered baby massage mobile application showed promising evidence of feasibility and effectiveness in increasing caregivers' knowledge, confidence, and adherence to baby massage practices. The application features a culturally tailored and user-friendly design, which makes it accessible, especially in low-resource settings. This was a small-scale study, and future work should involve scaling the intervention as well as evaluating the long-term effects on infant health outcomes.
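The pre/post comparisons reported above follow the standard paired t statistic: the mean of the per-infant differences divided by its standard error. A stdlib sketch with hypothetical per-infant weight gains, not the study's data:

```python
import math
import statistics

def paired_t(diffs):
    """Paired t statistic: mean difference over its standard error."""
    n = len(diffs)
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)   # sample (n-1) standard deviation
    return mean / se

# Hypothetical post-minus-pre weights (grams) for five infants
weight_gain = [300, 350, 310, 360, 330]
t_stat = paired_t(weight_gain)
```

The resulting t value is compared against the t distribution with n-1 degrees of freedom; consistent positive differences across infants produce the large t values behind results like p < 0.01.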
Background: Hypertension is a prevalent concern among older adults, and when left uncontrolled, it can lead to complex cardiovascular complications. Tele-nursing technology facilitates self-management, empowering older adults with uncontrolled hypertension to regulate their behaviors and achieve sustainable disease control. Objective: This study aimed to examine the effects of self-health monitoring using smart devices and ontology technology on disease-controlling behavior and mean arterial pressure in older adults with uncontrolled hypertension. Methods: This was a quasi-experimental study. The sample comprised older adults with uncontrolled hypertension living in Bangkok, Thailand, divided into an experimental group (n=46) and a control group (n=45). The implementation tool was a program of self-health monitoring using smart devices and ontology technology. It featured the "HT GeriCare@STOU" smartphone application, which was linked to blood pressure readings from smartwatches; telenursing could be provided through the application and video calls. The data-collecting questionnaires had a Cronbach's alpha coefficient of 0.83 and a content validity index of 0.98 for disease-controlling behavior. Data were analyzed using descriptive statistics and t-tests. Results: After joining the program, the disease-controlling behavior of older adults with uncontrolled hypertension was better than before joining the program, but not significantly better than that of the comparison group (P<.05). However, their mean arterial pressure was lower than before joining the program and lower than that of the comparison group (P<.05). Conclusions: A program of self-health monitoring using smart devices and ontology technology was effective for older adults with uncontrolled hypertension. Technological and cost problems are potential obstacles to eHealth programs.
More experimental and longitudinal studies with larger sample sizes are needed to properly evaluate this program. Clinical Trial: Trial Registry Number: Thai Clinical Trials Registry (TCTR20250110003)
Background: Approximately one third of epilepsy patients are resistant to anti-seizure medication (ASM). There are currently no mobile devices that allow early detection of seizures. We hypothesized that an intra-aural EEG device (mjn-SERAS) will allow brain activity to be recorded and subsequently processed by MJN's AI algorithm to anticipate epileptic seizures in previously diagnosed patients, generating an alert to prevent accidents. Objective: To assess epilepsy-related quality of life in patients with drug-resistant epilepsy using the mjn-SERAS solution compared to the control group.
To assess seizure-related safety in patients with drug-resistant epilepsy using the mjn-SERAS solution compared to the control group, in terms of the number of accidents caused by seizure episodes. Methods: A prospective, multicentre, controlled, randomized pilot clinical trial is proposed to validate a CE-certified medical device (mjn-SERAS). This validation will take place in the participants' normal environment, in individuals over 2 years of age with a diagnosis of refractory epilepsy, making it possible to determine the impact of the mjn-SERAS device on the early detection of epileptic seizures and the generation of a pre-seizure alert with a time window of at least 1 minute. The determined sample size is n=150 exposed individuals who meet the inclusion criteria. The sensitivity, specificity, positive predictive value (PPV), and F-score of the device will be analysed, along with the degree of satisfaction of patients and their caregivers, including the impact on quality of life and the degree of health perceived by the caregiver when alarms are generated to warn of a possible new epileptic seizure. Finally, possible improvements in indicators of social relationships across different areas of personal development will be described. Results: This study was funded in 2022 by EIT Health and the European Union under the EIT Health Amplifier programme (n. 220445-230126).
As of February 2025, we had enrolled 76 patients at 6 clinical sites in Spain, the UK, and Germany. Data analysis is currently underway, and the first results are expected in June 2025. Conclusions: The mjn-SERAS device, an intra-aural EEG, aims to record brain activity and use artificial intelligence (AI) algorithms to anticipate seizures in previously diagnosed patients. By generating early alerts, it allows individuals to take preventive measures and enhance safety. Although participants may not experience direct benefits, validating or improving this technology could enhance future epilepsy management and treatment.
Unlike previous research efforts, mjn-SERAS is the first device to systematically provide seizure alerts using an AI-based algorithm to detect early warning signs. Its real-world application could significantly improve the quality of life for epilepsy patients and advance medical understanding of seizure prediction.
This study evaluates the device’s accuracy in predicting seizures in everyday settings and assesses its psychological, mental, and social impact on people with refractory epilepsy. Clinical Trial: ClinicalTrials.gov NCT05845255; https://clinicaltrials.gov/study/NCT05845255
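The planned performance metrics for the device (sensitivity, specificity, PPV, F-score) are simple ratios over alert outcomes. A sketch with hypothetical confusion counts, not trial data:

```python
def alert_metrics(tp, fp, fn, tn):
    """Seizure-alert performance from confusion counts:
    tp = correctly alerted seizures, fp = false alarms,
    fn = missed seizures, tn = correctly quiet periods."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    f_score = 2 * ppv * sensitivity / (ppv + sensitivity)
    return sensitivity, specificity, ppv, f_score

# Hypothetical counts from a monitoring session
sens, spec, ppv, f1 = alert_metrics(tp=8, fp=2, fn=2, tn=88)
```

For a pre-seizure alert system, sensitivity (missed seizures) and PPV (false alarms that erode trust) pull in opposite directions, which is why the F-score, their harmonic mean, is included among the planned endpoints.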
Background: Dermatology, as a frontier medical discipline integrating clinical practice and cross-disciplinary innovation, has undergone dynamic evolution driven by technological advancements in recent years. However, persistent challenges in workforce development and educational frameworks necessitate systematic evaluation and analysis of current research frontiers and emerging trends to inform strategic reforms in dermatology education. Objective: This study evaluates global trends in dermatology medical education over the past decade, offering insights to enhance education and professional development in the field. Methods: A systematic review of literature published between 2014 and 2023 was conducted using the Web of Science (WOS) Core Collection database. CiteSpace 6.1.R6 software was used for bibliometric analysis, including author contributions, institutional output, keyword clustering, and burst detection. Results: The United States and the United Kingdom dominate dermatology education research. Key research focuses include disease prevention, comprehensive management, and postgraduate education. Emerging topics involve subspecialty preferences and cosmetic dermatological surgery. Conclusions: Dermatology education is shifting towards integrating multidisciplinary knowledge, addressing comorbidities, and emphasizing subspecialty training to meet the increasing complexity of clinical practice.
Background: Adolescents living with HIV (ALHIV) often experience poor antiretroviral therapy (ART) outcomes due to multiple barriers affecting medication adherence. Effective self-care interventions are needed to address these challenges. Mobile phones are widely used by the adolescent population and thus present an opportunity to serve as a tool for targeted interventions to enhance ART adherence. However, research on ALHIV's mobile phone access, usage patterns, and perceptions of mobile phone-based interventions is limited in Eswatini. Objective: This study explored mobile phone access, usage patterns, and perceptions of mobile phone-based interventions among ALHIV in Eswatini to inform effective mobile health strategies for enhancing ART adherence. Methods: We conducted a qualitative study using in-depth interviews in December 2023. A total of 29 ALHIV aged 10 to 19 years and enrolled on ART were purposively sampled and interviewed from five Teen Clubs in the Hhohho region of Eswatini. Topic areas covered were “mobile phone accessibility, usage patterns, and perceptions on the use of mobile phones to facilitate ART adherence.” Results: The study findings indicated high mobile phone access among participants, with primary usage focused on making and receiving calls, as well as engaging with social media. Three themes emerged regarding the use of gamified interventions to support ART adherence. First, gamified interventions aimed at ART adherence among ALHIV were deemed feasible, based on mobile phone access and past experiences with mobile games. Second, three main qualities of successful gamified interventions were identified: being supportive, being educational, and ensuring privacy in the design of the game. Lastly, confidentiality and mobile phone access factors were highlighted as potential concerns when designing gamified ART adherence interventions.
Conclusions: The findings suggest potentially high access and usage of mobile phones among ALHIV on ART in Eswatini. This provides an opportunity to leverage mobile technology to enhance ART adherence through gamified interventions. However, it is essential to carefully consider ALHIV-specific needs and concerns in the design of these interventions to ensure their successful uptake and sustainability.
Background: Aggression and violence are prevalent in forensic psychiatric inpatient care. These behaviours significantly impact treatment outcomes and staff working environments, and strain relationships between patients and caregivers. Managing such behaviours poses a formidable challenge that necessitates innovative approaches and evidence-based interventions. Objective: The aim of this project is to evaluate the violence prevention method Therapeutic Meeting with Aggression (TERMA) with respect to perceived safety among patients and staff and adverse events within forensic psychiatric inpatient care. Additionally, the project will investigate whether organisational culture affects the implementation of the TERMA method. Methods: The project includes an observational study with a before-and-after design, involving patients and staff at a forensic psychiatry inpatient facility in Western Sweden. Implementation of TERMA consists of an eight-seminar staff training program. Data sources include questionnaires, medical records, and registries. Quantitative data will be analysed with descriptive and comparative statistics. The project will also include qualitative interview studies. Results: Participant enrollment began in February 2024 and will continue through 2025. Data collection and analysis are expected to be completed by early 2026, after which the study findings will be submitted for publication in peer-reviewed scientific journals. Clinical Trial: NCT05932108
Background: The growing population of cancer survivors faces persistent physical and emotional challenges that significantly impact health-related quality of life (HRQL). To address these multifaceted needs, robust and culturally adapted patient-reported outcome measures, such as the Measure Yourself Concerns and Wellbeing (MYCaW®) questionnaire, are essential for understanding and improving survivors’ subjective experiences. Objective: This protocol outlines the systematic translation and cultural adaptation of the MYCaW® questionnaire into German. The MYCaW® questionnaire, a patient-reported outcome measure, is designed to capture individualized concerns and assess overall well-being, particularly in cancer care settings. By adhering to common guidelines, this research will provide a tool for assessing individualized concerns and patient needs among German-speaking cancer patients. Methods: The present study is approved by the ethics committee of the Medical Association Berlin (reference number Eth-27/10). Following International Society for Pharmacoeconomics and Outcomes Research (ISPOR) guidelines, this study will employ a structured methodology involving forward and backward translation, expert review, a patient review process, and preliminary validation to ensure linguistic and cultural equivalence. A standardized coding framework will be developed for analyzing patient concerns, with inter-rater reliability assessed to ensure consistency. Results: The trial is ongoing, with recruitment expected to conclude in the second quarter of 2025 and follow-up by year-end. Data analysis is planned for the third quarter of 2025, with findings to be published in the fourth quarter of 2025. Results will be presented at conferences and submitted to journals, and the study will conclude in December 2025.
The final German MYCaW® version is expected to maintain the conceptual integrity of the original while being accessible and meaningful for German-speaking oncology patients. Conclusions: The translation and adaptation of MYCaW® into German will contribute to expanding the availability of validated patient-reported outcome measures for German-speaking populations. By following rigorous international guidelines, this study aims to produce a reliable and culturally appropriate tool for assessing patient concerns and well-being in oncology and supportive care settings. Future validation studies will be necessary to assess the psychometric properties of the adapted questionnaire and its applicability in clinical and research contexts. Potential challenges, such as maintaining conceptual equivalence in translation and ensuring broad representativeness in the validation process, will be addressed through iterative refinement. Once validated, the German MYCaW® will provide a valuable resource for patient-centered research and care, helping to capture individualized concerns that might be overlooked by standardized instruments. Clinical Trial: The study is registered at the German Register for Clinical Trials under DRKS00013335 on 27/11/2017.
Background: The study aimed to adapt a stress and well-being intervention delivered via a mobile health (mHealth) app for Latinx Millennial caregivers. This demographic, born between 1981 and 1996, represents a significant portion of caregivers in the United States and faces unique challenges due to higher mental distress and poorer physical health compared to non-caregivers. Latinx Millennial caregivers face additional barriers, including higher uninsured rates and increased caregiving burdens. Objective: We used a community-informed and user-centered design approach to tailor an existing mHealth app to better meet the stress and well-being needs of Latinx Millennial caregivers. Methods: We employed a two-step, multi-feedback approach. In step one, Latinx Millennial caregivers participated in focus groups to evaluate wireframes for the proposed mHealth app. In step two, participants engaged in usability testing for one week, concluding with short interviews for feedback. Participants were recruited through various channels, including social media and community clinics. Data were analyzed inductively using a rapid qualitative content analysis approach. Results: A total of 29 caregivers (69% women; mean age 31, SD=4.10) participated in the study, with most (n=28, 96%) caring for an adult and one (4%) caring for children with chronic conditions. All participants completed the step one focus groups, and a subset of 3 caregivers completed usability testing in step two. The most liked features were: 1) the stress rating scale, because it helped participants understand stress and mental health; 2) the mindfulness options, because they allowed flexible timing of activities; 3) the journaling prompts, because they offered a way to address daily challenges and contemplate positives; and 4) the resource list, for its employment and financial content. One concern was that the journaling prompts may take too much time or effort to complete after a long and hard day.
Some suggestions for improvement included a better tracking system, gamification, caregiving education, a checklist of emotions to use in the journal, tailored resources, and ways to connect with a community of similar caregivers. During step two, participants noted the app was user-friendly but had some glitches and unclear privacy policies. Participants liked the meditation options, resource variety, and daily stress log but wanted more journaling space, longer meditations, and additional relaxation activities. Conclusions: Caregivers highlighted the need for tailored resources and additional stress-relief activities. Future iterations should consider integrating more personalized and community-specific resources, leveraging platforms like podcasts for broader engagement, and using information-based videos to support caregiver skill acquisition. Caregivers expressed needs beyond the scope of the app, such as resource access, demonstrating the need for both upstream and downstream interventions. The study underscores the importance of ongoing user feedback in developing effective mHealth interventions for diverse caregiver populations. Clinical Trial: N/A
Background: eHealth can help healthcare service users take a more active role in decision-making and help healthcare professionals guide patients in this process. Even though usability and health literacy strategies should guide the development of mHealth apps, the number of digital health apps publishing their usability evaluation results is still small. Objective: The aim of this study was to explore users' perceptions of the EMAeHealth digital app regarding its acceptance, usability, and strengths and weaknesses for implementation. Methods: This is an exploratory sequential mixed-methods study, in which a qualitative study was followed by a quantitative study. Qualitative data were collected through individual semi-structured interviews between January and March 2024. Participants were identified through purposive sampling, and saturation was reached after 10 interviews. In the quantitative part, 106 out of 400 responded to an anonymous online survey created ad hoc in December 2024 based on the results of the qualitative study. Results: Two categories were drawn up during the analysis. (1) “The best thing about this app”: accessibility, quantity, quality, and good organization of the information, as well as the credibility of the source, were among the reasons for this positive evaluation. (2) “What could be improved”: participants considered that the app had the potential to become essential if new functions related to healthcare provision were incorporated, such as linking it to individual health records, and if the app were made more individualized in aspects such as notifications. Women commented on the lack of opportunities to share experiences with other women in the same situation and, consequently, to develop networks.
The survey gave similar results, with both positive assessments and areas for improvement. Conclusions: Although women value the accessibility and reliability of an app designed by the public healthcare service, areas for improvement were identified, such as combining the digital intervention with face-to-face care and, above all, individualizing and adapting information, notifications, and recommendations to the culture, health situation, and stage of each woman.
Background: Post-market surveillance (PMS) is essential for medical device safety, requiring systematic mapping of adverse events from the scientific literature to standardized terminologies such as the International Medical Device Regulators Forum (IMDRF) Adverse Event (AE) Terminology. This process faces challenges in maintaining semantic interoperability across data sources. Objective: This study evaluates whether large language models (LLMs) can effectively automate the mapping of adverse events from orthopedic literature to IMDRF terminology. Methods: A validation approach assessed LLM performance using 309 randomly selected adverse events (23.6% of 1,251 unique events) from orthopedic literature published between 2010 and 2023. The events had previously been mapped by the Harms Mapping Working Group (HMWG), consisting of six Safety Clinicians and seven Safety Coders with extensive clinical and industry experience. Structured prompts were developed following established prompt engineering principles. Accuracy was conservatively measured as correct identification of both the appropriate IMDRF terms and codes. Results: LLMs achieved an accuracy rate of 82.52% (255/309 events correctly mapped). Error analysis revealed challenges with AEs lacking sufficient context, gaps in specialized clinical knowledge, and occasional inferential overreach. Concordance between independent Safety Clinician evaluators was complete. Conclusions: While LLMs show promise as assistive tools for AE mapping, they require expert oversight. The findings support a two-stage workflow in which LLMs provide initial mapping followed by clinician verification, potentially improving efficiency without compromising quality. Future research should explore enhanced prompt engineering, expanded dictionary integration, and more sophisticated models to address identified limitations.
Background: Groundwater contamination poses a significant public health risk, particularly in urban areas with inadequate waste management. Dumpsites serve as major sources of pollutants, including heavy metals, which infiltrate aquifers through leachate migration. Port Harcourt, Nigeria, faces increasing groundwater quality concerns due to the proliferation of uncontrolled waste disposal sites. Objective: This study aims to evaluate the spatial and seasonal variations in groundwater quality around dumpsites in Port Harcourt and to determine the suitability of groundwater for drinking based on water quality index (WQI) values. It also seeks to identify contamination patterns and assess the influence of rainfall on pollutant dispersion. Furthermore, the study compares findings with global research to establish broader implications for waste management and public health. By doing so, it provides a scientific basis for policy recommendations aimed at mitigating groundwater pollution. Methods: Groundwater samples were collected from various locations around major dumpsites in Port Harcourt during the dry and rainy seasons. Physicochemical parameters, including heavy metal concentrations, were analyzed to compute WQI values. Comparative analysis with previous studies was conducted to validate observed contamination trends. The impact of leachate migration on water quality was assessed using seasonal variations in WQI values. Results: Findings reveal significant spatial and seasonal fluctuations in groundwater quality. While Choba exhibited excellent water quality, Sasun, Olumeni, and Epirikom recorded dangerously high WQI values, indicating unsuitability for drinking. Seasonal variations showed that rainfall exacerbated contamination levels, as seen in Eleme, where the WQI increased from 56.362 in the dry season to 140.928 in the rainy season.
The study aligns with previous research from India, China, and Ghana, demonstrating that landfill leachates and surface runoff are key contributors to groundwater degradation. Conclusions: The study confirms that dumpsite leachates significantly impact groundwater quality, posing a major risk to public health. The high WQI values in several locations highlight the need for urgent interventions. Findings align with global research on groundwater contamination, emphasizing the critical role of effective waste management in reducing environmental pollution. To mitigate groundwater pollution from dumpsite leachates, it is essential to implement stringent waste management policies that regulate landfill operations and prevent leachate infiltration into aquifers. Establishing continuous groundwater monitoring programs can help detect contamination trends early and guide timely intervention measures. Additionally, promoting alternative potable water sources in highly contaminated areas is crucial to reducing health risks for affected communities. The adoption of modern landfill technologies, such as leachate treatment and containment systems, should be prioritized to minimize pollution and safeguard water resources for future generations. This study contributes to the growing body of research on groundwater contamination by providing empirical evidence of the impact of dumpsites in an urban African setting. The findings underscore the urgent need for improved waste management policies and public health interventions. By aligning with global research, this study reinforces the importance of sustainable environmental practices to safeguard water resources and protect communities from the adverse effects of pollution.
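The abstract does not state which WQI formulation was used; as an illustrative sketch only, the weighted arithmetic index commonly applied in dumpsite leachate studies combines each parameter's quality rating with a unit weight inversely proportional to its permissible standard. The parameter values, standards, and ideal values below are hypothetical, not the study's data:

```python
def weighted_arithmetic_wqi(samples):
    """Weighted arithmetic water quality index.

    samples: list of (measured, standard, ideal) triples, one per parameter.
    Each parameter's unit weight w_i is inversely proportional to its
    permissible standard, and its quality rating q_i scales the deviation
    of the measured value from the ideal value. WQI > 100 is conventionally
    read as unsuitable for drinking.
    """
    # Proportionality constant k normalizes the unit weights to sum to 1
    k = 1.0 / sum(1.0 / standard for _, standard, _ in samples)
    total_w = 0.0
    total_qw = 0.0
    for measured, standard, ideal in samples:
        w = k / standard                                   # unit weight
        q = 100.0 * (measured - ideal) / (standard - ideal)  # quality rating
        total_w += w
        total_qw += q * w
    return total_qw / total_w

# Hypothetical readings: (measured, permissible standard, ideal value)
# e.g. pH, total dissolved solids (mg/L), lead (mg/L)
samples = [(7.8, 8.5, 7.0), (620.0, 500.0, 0.0), (0.02, 0.01, 0.0)]
print(weighted_arithmetic_wqi(samples))  # well above 100: unsuitable
```

Because the heaviest weight falls on the parameter with the strictest standard (here the heavy metal), even a small exceedance can push the index into the unsuitable range, which is consistent with the role the study attributes to heavy metals in leachate.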
Background: Social media has profoundly transformed consumer behavior and marketing practices within the hospitality industry. Understanding how these changes influence hotel selection and booking decisions, the effectiveness of social media strategies, and shifts in reputation management practices is crucial for hotels aiming to enhance their digital presence and customer engagement. Objective: The study aims to analyze the influence of social media on consumer behavior, audience engagement, and reputation management in hotel selection and booking decisions, as well as to compare pre- and post-social media reputation management practices. Methods: Data were collected through surveys and interviews with hotel guests and marketing professionals. The analysis included descriptive statistics and comparative assessments of pre- and post-social media reputation management practices. The effectiveness of various social media strategies was evaluated based on respondent feedback. Results: The findings indicate that promotional offers, user reviews, and visual content significantly influence consumer behavior in hotel selection and booking decisions. Collaboration with influencers, user-generated content, live video content, and social media advertising were the most effective strategies for audience engagement and brand building, each with a 100% effectiveness rate among respondents. There was a notable shift in reputation management practices, with a decrease in promptly addressing issues and providing compensation, and an increase in seeking private resolutions through direct messages after the adoption of social media. Conclusions: Social media plays a critical role in shaping consumer behavior and brand perception in the hotel industry. Effective social media strategies, particularly those involving influencers and user-generated content, are essential for engaging audiences and building brand identity.
The transition to social media has also led to changes in reputation management, emphasizing the importance of balancing transparency with discreet conflict resolution. Hotels should prioritize comprehensive social media strategies that include collaboration with influencers, regular updates, and engaging content. Encouraging positive user-generated content and implementing robust monitoring and response systems are essential. Training staff on social media engagement and conflict resolution can further improve reputation management. Ongoing adaptation to emerging social media trends is crucial for maintaining effectiveness. This study provides valuable insights into the impact of social media on consumer behavior and marketing in the hospitality industry. By identifying effective social media strategies and examining changes in reputation management, it offers practical guidance for hotels seeking to enhance their digital presence and customer engagement. The findings underscore the importance of leveraging social media to achieve greater business success and maintain a positive brand reputation.
Background: Noncommunicable diseases (NCDs) pose a significant burden in the Philippines, with cardiovascular and cerebrovascular diseases among the leading causes of mortality. The Department of Health implemented the Philippine Package of Essential Non-Communicable Disease Interventions (PhilPEN) to address this issue. However, healthcare professionals faced challenges in implementing the program due to the cumbersome nature of the multiple forms required for patient risk assessment. To address this, a mobile medical app, the PhilPEN Risk Stratification app, was developed for community health workers (CHWs) using the extreme prototyping framework. Objective: This study aimed to assess the usability of the PhilPEN Risk Stratification app using the user version of the Mobile App Rating Scale (uMARS) and to determine the utility of uMARS in app development. The secondary objective was to achieve an acceptable uMARS score (>3) for the app, highlighting the significance of quality monitoring through validated metrics in improving the adoption and continuous iterative development of medical mobile apps. Methods: The study employed a qualitative research methodology, including key informant interviews, linguistic validation, and cognitive debriefing. The extreme prototyping framework was used for app development, involving iterative refinement through progressively functional prototypes. CHWs from a designated health center participated in the app development and evaluation process, providing feedback, using the app to collect data from patients, and rating it through uMARS. Results: The uMARS scores for the PhilPEN Risk Stratification app were above average, with an Objective Quality rating of 4.05 and a Personal Opinion/Subjective Quality rating of 3.25. The mobile app also garnered a 3.88-star rating.
Under Objective Quality, the app scored well in Functionality (4.19), Aesthetics (4.08), and Information (4.41), indicating its accuracy, ease of use, and provision of high-quality information. The Engagement score (3.53) was lower due to the app's primary focus on healthcare rather than entertainment. Conclusions: The study demonstrated the effectiveness of the extreme prototyping framework in developing a medical mobile app and the utility of uMARS not only as a metric, but also as a guide for authoring high-quality mobile health apps. The uMARS metrics were beneficial in setting developer expectations, identifying strengths and weaknesses, and guiding the iterative improvement of the app. Further assessment with more CHWs and patients is recommended. Clinical Trial: N/A
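The reported figures are internally consistent if one assumes the conventional uMARS scoring step, in which the Objective Quality rating is the mean of the four objective subscale means (the abstract does not state this step, so it is an assumption here):

```python
# uMARS objective subscale means as reported in the abstract
subscales = {
    "Engagement": 3.53,
    "Functionality": 4.19,
    "Aesthetics": 4.08,
    "Information": 4.41,
}

# Assumed scoring step: Objective Quality = mean of the subscale means
objective_quality = sum(subscales.values()) / len(subscales)
print(round(objective_quality, 2))  # 4.05, matching the reported rating
```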
Among the countless decisions healthcare providers make daily, many clinical scenarios lack clear guidelines, despite the recent shift towards evidence-based medicine. Even where guidelines do exist, they do not universally recommend one treatment option over others. The limitations of existing guidelines therefore presumably create inherent variability in provider decision-making, producing a distribution of provider behaviors for each clinical scenario, and this variability differs across scenarios. We define this variability as a marker of provider uncertainty: scenarios with a wide distribution of provider behaviors have more uncertainty than scenarios with a narrower distribution. We propose four exploratory analyses of provider uncertainty: (1) field-wide overview; (2) subgroup analysis; (3) provider guideline adherence; and (4) pre-/post-intervention evaluation. We also propose that uncertainty analysis can be used to guide interventions toward the clinical decisions with the highest provider uncertainty and therefore the greatest opportunity to improve care.
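The abstract does not commit to a specific metric for the width of a provider behavior distribution; one illustrative way to operationalize it is the normalized Shannon entropy of observed treatment choices per scenario (the scenarios and treatment names below are hypothetical):

```python
from collections import Counter
from math import log

def choice_entropy(choices):
    """Normalized Shannon entropy of a list of provider treatment choices.

    Returns 0.0 when every provider chose the same option (no uncertainty)
    and 1.0 when choices are spread evenly across the observed options.
    """
    counts = Counter(choices)
    n = sum(counts.values())
    probs = [c / n for c in counts.values()]
    h = -sum(p * log(p) for p in probs)
    k = len(counts)
    return h / log(k) if k > 1 else 0.0

# Hypothetical scenarios: one near-consensus, one with a wide spread
consensus = ["drug_a"] * 18 + ["drug_b"] * 2
spread = ["drug_a"] * 7 + ["drug_b"] * 7 + ["watchful_waiting"] * 6

print(choice_entropy(consensus))  # low: narrow behavior distribution
print(choice_entropy(spread))     # high: wide behavior distribution
```

Under this sketch, a field-wide overview would rank scenarios by this score, and a pre-/post-intervention evaluation would compare the score for one scenario before and after the intervention.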
This study investigates the behavioral dynamics of sociopaths, focusing on their reliance on glibness (superficial charm) as a primary manipulation tactic and aggressiveness as a secondary strategy when charm fails. Sociopathy, characterized by manipulative tendencies and a lack of empathy, often manifests in adaptive yet harmful behaviors aimed at maintaining control and dominance.
Using the Deenz Antisocial Personality Scale (DAPS-24) to collect data from 34 participants, this study examines the prevalence and interplay of these dual strategies. Findings reveal that sociopaths employ glibness to disarm and manipulate, transitioning to aggressiveness in response to resistance. The implications for understanding sociopathic manipulation are discussed, emphasizing the importance of early detection and intervention in both clinical and social contexts.
Background: The American Civil War has been commemorated with a great variety of monuments, memorials, and markers. These monuments were erected for a variety of reasons, beginning with memorialization of the fallen and later honoring aging veterans, commemorating significant anniversaries associated with the conflict, memorializing sites of conflict, and celebrating the actions of military leaders. Sources reveal that during both the Jim Crow and Civil Rights eras, many were erected as part of an organized propaganda campaign to terrorize African American communities and distort the past by promoting a ‘Lost Cause’ narrative. Through subsequent decades, to this day, complex and emotional narratives have surrounded interpretive legacies of the Civil War. Instruments of commemoration, through both physical and digital intervention approaches, can be provocative and instructive, as the country deals with a slavery legacy and the commemorated objects and spaces surrounding Confederate inheritances.
Today, all of these potential factors and outcomes, with international relevance, are surrounded by swirls of social and political contention and controversy, including the remembering/forgetting dichotomies of cultural heritage. The modern dilemma turns on the question: In today’s new era of social justice, are these monuments primarily symbols of oppression, or can we see them, in select cases, alternatively as sites of conscience and reflection encompassing more inclusive conversations about commemoration? What we save or destroy and assign as the ultimate public value of these monuments rests with how we answer this question. Objective: I describe monuments as symbols in the “Lost Cause” narrative and their place in enduring Confederate legacies. I make the case, and offer documented examples, that remnants of the monuments, such as the “decorated” pedestals, if not the original towering statues themselves, should be left in place as sites of reflection that can be socially useful in public interpretation as disruptions of space, creating disturbances of vision that can be provocative and didactic. I argue that we should see at least some of them as sculptural works of art that invite interpretations of aesthetic and artistic value. I point out how, today, these internationally relevant factors and outcomes of retention vs. removal are engulfed in swirls of social and political contention and controversy within processes of remembering and forgetting and changing public dialogues. Methods: This article addresses several elements within the purview of the Journal: questions of contemporary society, diversity of opinion, recognition of complexity, subject matter of interest to non-specialists, international relevancy, and history.
Drawing from the testimony of scholars and artists, I address the contemporary conceptual landscape of approaches to the presentation and evolving participatory narratives of Confederate monuments that range from absolute expungement and removal to more restrained responses such as in situ re-contextualization, removal to museums, and preservation-in-place. In a new era of social justice surrounding the aftermath of dramatic events such as the 2015 Charleston shooting, the 2017 Charlottesville riot, and the murder of George Floyd, should we see them as symbols of oppression, inviting expungement, or selectively as sites of conscience and reflection, inviting various forms of re-interpretation of tangible and intangible relationships?
Results: I argue that we should see at least some of these monuments as sculptural works of art that invite interpretations of aesthetic and artistic value, and I point out how, today, the internationally relevant factors and outcomes of retention vs. removal are engulfed in swirls of social and political contention and controversy within processes of remembering and forgetting and changing public dialogues. Conclusions: Today, all of these potential factors and outcomes, with international relevance, are surrounded by swirls of social and political contention and controversy, including the remembering/forgetting dichotomies of cultural heritage. The modern dilemma turns on the question: In today’s new era of social justice, are these monuments primarily symbols of oppression, or can we see them, in select cases, alternatively as sites of conscience and reflection encompassing more inclusive conversations about commemoration? What we save or destroy and assign as the ultimate public value of these monuments rests with how we answer this question.