JMIR Preprints
A preprint server for pre-publication/pre-peer-review preprints intended for community review as well as ahead-of-print (accepted) manuscripts
Journal Description
JMIR Preprints contains pre-publication/pre-peer-review preprints intended for community review (FAQ: What are Preprints?). For a list of all preprints under public review click here. The NIH and other organizations and societies encourage investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work. JMIR Publications facilitates this by allowing its authors to expose submitted manuscripts on its preprint server with a simple checkbox when submitting an article, and the preprint server is also open for non-JMIR authors.
With the exception of selected submissions to the JMIR family of journals (where the submitting author opted in to open peer-review, and which are also displayed here for open peer-review), there is no editor assigning peer-reviewers.
Submissions are open for anybody to peer-review. Once two peer-review reports of reasonable quality have been received, we will send these peer-review reports to the author, and may offer transfer to a partner journal, which has its own editor or editorial board.
The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
If authors want to have the paper only considered/forwarded to specific journals after peer-review (e.g., JMIR, PLOS, PEERJ, BMJ Open, Nature Communications), please specify this in the cover letter. Simply rank the journals and we will offer the peer-reviewed manuscript to these editors in the order of your ranking.
If authors do NOT wish to have the preprint considered in a partner journal (or a specific journal), this should be noted in the cover letter.
JMIR Preprints accepts manuscripts at no cost and without any formatting requirements (but if you intend the submission to be published eventually by a specific journal, it is advantageous to follow its instructions for authors). Authors may even take a WebCite snapshot of a blog post or "grey" online report. However, if the manuscript has already been peer-reviewed and formally published elsewhere, please do NOT submit it here (this is a preprint server, not a postprint server!).
JMIR Preprints is a preprint server and "manuscript marketplace" with manuscripts that are intended for community review. Great manuscripts may be snatched up by participating journals, which will make offers for publication. There are two pathways for manuscripts to appear here: 1) a submission to a JMIR or partner journal, where the author has checked the "open peer-review" checkbox; 2) direct submissions to the preprint server.
For the latter, there is no editor assigning peer-reviewers, so authors are encouraged to nominate as many reviewers as possible and to set the setting to "open peer-review". Nominated peer-reviewers should be at arm's length. It also helps to tweet about your submission or to post it on your homepage.
For pathway 2, once a sufficient number of reviews has been received (and they are reasonably positive), the manuscript and peer-review reports may be transferred to a partner journal (e.g. JMIR, i-JMR, JMIR Res Protoc, or other journals from participating publishers), whose editor may offer formal publication if the peer-review reports are addressed. The submission fee for that partner journal (if any) will be waived, and transfer of the peer-review reports may mean that the paper does not have to be re-reviewed. Authors will receive a notification when the manuscript has enough reviewers, and at that time can decide if they want to pursue publication in a partner journal.
For pathway 2, if authors do not wish to have the preprint considered in a partner journal (or a specific journal), this should be noted in the cover letter. Likewise, if you want to have the paper only considered/forwarded to specific journals (e.g., JMIR, PLOS, PEERJ, BMJ Open, Nature Communications), please specify this in the cover letter.
Manuscripts can be in any format. However, an abstract is required in all cases. We highly recommend formatting the references in JMIR style (including a PMID), as our system will then automatically assign reviewers based on the references.
Background: Tourette’s Syndrome (TS) is an inheritable neurological disorder characterized by repetitive, involuntary movements called “tics”. Predominantly affecting males, TS typically emerges in adolescence, with a lack of definitive diagnostic tests often causing delays in diagnosis. This study utilizes magnetic resonance (MR) imaging to explore brain patterns linked to TS, focusing on the cortico-striatal-thalamic-cortical (CSTC) circuit. This neural pathway, involved in motor control, emotion regulation, and cognition, is believed to play a key role in TS symptoms. Computational methods, including machine learning (ML), are used to analyze these images and deepen our understanding of TS. Objective: We aim to assess whether CSTC-based segmentation improves TS classification over whole-brain analysis. The study employs Freesurfer and Slant segmentation methods, training VGG16, VGG19, and ResNet50 models. Methods: The study follows four steps: (1) Dataset organization: 68 T1-weighted MR volumes; (2) Preprocessing and segmentation: data enhancement and CSTC-related brain region segmentation using Freesurfer and Slant; (3) Data augmentation: increasing dataset size with rigid transforms (6 degrees of freedom); (4) Classification: comparison of whole-brain and CSTC-based CNN classification. Results: Results show that CSTC-based segmentation outperforms whole-brain methods (pcorr < 0.001). Using Freesurfer with VGG16 achieves 82% accuracy, while Slant with VGG16 achieves 80.3% accuracy. Thus, CSTC-based segmentation shows promise for advancing TS diagnosis. Conclusions: VGG16’s superior performance suggests its balance of depth and parameter count was well suited to our dataset without overfitting, while rigid data augmentation was crucial for increasing sample variability and improving generalization. Clinical Trial: This study was not registered as a clinical trial.
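Step 3 of the Methods describes rigid data augmentation with 6 degrees of freedom. Below is a minimal sketch of what such a transform could look like in Python with scipy (three small rotations plus three small translations); the angle and shift ranges, interpolation order, and padding mode are illustrative assumptions rather than the authors' exact settings.

```python
# Hedged sketch: rigid 3D augmentation (3 rotations + 3 translations = 6 DOF).
# Parameter ranges are illustrative assumptions, not the study's configuration.
import numpy as np
from scipy.ndimage import rotate, shift

def augment_rigid(volume, rng, max_deg=5.0, max_vox=3.0):
    """Apply a random small rigid transform to a 3D MR volume."""
    out = volume
    # One rotation in each axis plane (x-y, x-z, y-z).
    for axes in [(0, 1), (0, 2), (1, 2)]:
        angle = rng.uniform(-max_deg, max_deg)
        out = rotate(out, angle, axes=axes, reshape=False, order=1, mode="nearest")
    # One translation along each axis.
    offsets = rng.uniform(-max_vox, max_vox, size=3)
    return shift(out, offsets, order=1, mode="nearest")

rng = np.random.default_rng(0)
volume = np.zeros((64, 64, 64), dtype=np.float32)  # placeholder T1-weighted volume
augmented = augment_rigid(volume, rng)
```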
Background: Uptake of the COVID-19 vaccine has been low in the US (about 22% among adults in 2023-24) despite ongoing public health recommendations. This has been linked to many factors including pandemic fatigue, reduced risk perception, dis/misinformation, and, more recently, to symptoms of depression and anxiety. Novel communication and messaging strategies are one potential approach to promote vaccine uptake. Objective: To test, in a randomized controlled trial, two communication-based approaches compared with standard public health messaging on vaccine uptake in a cohort of adult US residents. Methods: We completed a 3-arm, parallel-group, assessor-blinded, stratified randomized trial between April 15, 2024 and May 2, 2024. Eligible individuals were ≥18 years old who: 1) had received at least one dose of the COVID-19 vaccine, but 2) had not received COVID-19 vaccine doses since September 11, 2023, and 3) had not been infected with SARS-CoV-2 in the past three months. We purposively sampled eligible individuals with and without symptoms of anxiety and depression. Participants were randomly allocated to: 1) an attitudinal inoculation intervention; 2) a CBT-kernels intervention; or 3) a standard public health messaging intervention. Results: At four-week follow-up, the groups showed no meaningful differences in uptake (CBT-kernels: 1.6% [95% CI: 0.4-2.8]; Inoculation: 0.9% [95% CI: 0.0-1.8]; Standard: 1.3% [95% CI: 0.3-2.4]) or level of vaccine willingness. Conclusions: Successful efforts to increase uptake of the COVID-19 vaccine via theory-enhanced messaging remain elusive. Clinical Trial: Protocol NCT06119854
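The per-arm uptake figures above are percentages with 95% confidence intervals. A minimal sketch of how such intervals can be computed from counts and denominators, using statsmodels; the counts below are made-up placeholders, not the trial's data.

```python
# Hedged sketch: proportion and 95% CI per arm from hypothetical counts.
from statsmodels.stats.proportion import proportion_confint

arms = {"CBT-kernels": (8, 500), "Inoculation": (5, 550), "Standard": (7, 530)}  # (uptake, n), placeholders
for name, (k, n) in arms.items():
    low, high = proportion_confint(k, n, alpha=0.05, method="normal")
    print(f"{name}: {100 * k / n:.1f}% [95% CI {100 * low:.1f}-{100 * high:.1f}]")
```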
Background: Knee osteoarthritis (KOA) is increasingly prevalent in China due to rapid population aging, rising obesity rates, and a large population base, contributing to a growing burden on individuals and society. Despite strong evidence supporting exercise and self-management as first-line KOA treatments, these approaches are poorly implemented in Chinese clinical practice. This implementation gap highlights the need for a feasible, culturally appropriate, and scalable model of care (MoC) for KOA that aligns with China's healthcare system. Objective: This study aimed to evaluate the feasibility of implementing the PEAK-CHN (Physical and Exercise Activity for Knee Osteoarthritis in China) model—a multidisciplinary, telehealth-enabled MoC adapted for Chinese clinical and community health settings. Methods: A parallel, two-arm, randomized controlled pilot trial was conducted using a mixed-methods approach at a community health center and a tertiary hospital in Xiamen, China. A total of 73 adults (mean age 66.4 years; 65.8% female) with symptomatic KOA were recruited from the community and randomized 1:1 to either the intervention (PEAK-CHN) or usual care. The intervention included a 3-month program consisting of five telehealth consultations delivered via the WeChat platform, personalized home-based exercise plan, continuous daily behavior change support, and community-based care for comorbidities. The control group received usual KOA care, with services recorded by the local health insurance management system. Primary outcomes were feasibility variables including recruitment, adherence, perceived benefit, engagement, cost, and practitioners’ working time. Secondary outcomes assessed health status, psychological determinants, and health behaviors at baseline, 3 months, and 6 months. Qualitative semi-structured interviews (n=11) explored participants’ experiences after the intervention. The implementation process of the MoC was evaluated using the Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) framework. Results: The study achieved rapid recruitment (18 participants per week) and a 100% adherence rate for the intervention. Participants reported high satisfaction (mean score 8.8/10), with 97% perceiving the online exercise consultations, behavioural support, and provided resources as beneficial. Significant between-group improvements were observed in most secondary health outcomes, including pain reduction (mean difference −3.48, 95% CI: −4.25, −2.71). Implementation data indicated high fidelity, strong engagement, and natural integration into routine clinical workflows. Qualitative findings highlighted greater confidence in self-management, improved physical and psychological well-being, and suggestions for more personalized, longer-term support. Conclusions: The PEAK-CHN model is feasible, acceptable, and well-integrated within China’s public healthcare system. It provides evidence-based, best-practice care for KOA patients and demonstrates potential as a scalable, policy-aligned solution. These findings support the need for larger trials to evaluate its clinical effectiveness, cost-effectiveness, and broader implementation in China. Clinical Trial: Chinese Clinical Trial Registry, ChiCTR2400091007
Background: Large language model (LLM)-based AI coaches show promise for personalized exercise and health interventions. Their complex capabilities (communication, planning, movement analysis, monitoring) necessitate rigorous, multidimensional evaluation, but standardized frameworks are lacking. Objective: This scoping review systematically maps current evaluation strategies for LLM-based AI coaches in exercise and health, identifies strengths and limitations, and proposes future directions for robust, standardized validation. Methods: Following PRISMA-ScR guidelines, we systematically searched six databases using keywords for LLMs, exercise/health coaching, and evaluation. Studies describing LLM-based coaching systems with reported performance evaluation methods were included. Data on models, applications, evaluation strategies, and outcomes were charted. Results: Seventeen studies published between March 2023 and March 2025 met the inclusion criteria. Most utilized proprietary models (e.g., GPT-4), while some used open-source or custom models. Six studies incorporated multimodal inputs (video, sensor data). Evaluation strategies were highly heterogeneous, including quantitative metrics (Accuracy, F1, MAE), empirical methods (user studies, expert comparisons), and expert/user-centered feedback (expert scores [Kappa ≈ 0.79–0.82], user surveys [MITI, SASSI]). However, evaluations often lacked real-world testing, longitudinal assessment, and standardized benchmarks. Conclusions: Evaluating LLM-based exercise and health coaches requires multifaceted strategies: quantitative metrics for objective tasks, empirical validation for user interaction, and expert assessment for personalization and safety. Current evaluations are fragmented, lacking standardization, ecological validity, and longitudinal assessment. Future progress demands robust, multidimensional frameworks emphasizing real-world validation, integrating RAG for accuracy, and developing specialized, efficient multimodal models or agents for reliable and scalable AI coaching.
Background: College students are at heightened risk for mental health problems but often demonstrate low rates of seeking professional help. Although digital mental health tools can improve accessibility and reduce stigma, most are narrowly focused and lack integration with campus-based services. Multi-domain platforms that integrate diverse support features offer personalized, scalable solutions; however, their usability and effectiveness remain largely underexplored. Objective: This study evaluated “Fruto,” a multi-domain digital platform designed to support help-seeking behaviors among university students. We investigated students’ interaction with its integrated features, tracked changes in their attitudes and beliefs over time, and identified design elements that influenced these outcomes. Methods: We conducted a two-phase, mixed-methods study. Phase 1 involved vignette-based semi-structured interviews (n = 16) to explore user experiences with a prototype version of Fruto, with thematic analysis guiding platform refinement. In Phase 2, a single-group pre-post study design was used, involving 70 students who used the app over eight weeks. Surveys assessed help-seeking attitudes, beliefs about counseling, and perceived app quality. Paired t-tests examined pre-post changes, and stepwise regression identified predictors of outcomes. Results: Significant improvements were observed in students’ positive attitudes toward help-seeking (t = -2.89, p = .005) and counseling expectations (t = -2.91, p = .005). However, no significant changes were observed in negative attitudes or socially supportive beliefs. Regression analyses indicated that subjective satisfaction with the app significantly predicted positive help-seeking attitudes (β = 0.227, p < .05), while perceived information credibility predicted positive counseling expectations (β = 0.237, p < .05). Qualitative findings emphasized the importance of trusted content providers, seamless feature integration, and relatable self-discovery content in reducing psychological barriers and enhancing user engagement. Conclusions: Fruto shows potential as a campus-integrated, multi-domain platform that supports student mental health through a user-centered, integrated design. Such platforms may be better equipped to address the evolving and personalized needs of students. Future research should incorporate control groups, long-term follow-up, and objective usage data to confirm efficacy and inform broader implementation. Clinical Trial: Clinical Research Information Service (CRIS) KCT0010622; https://cris.nih.go.kr/cris/search/detailSearch.do?seq=30274&status=5&seq_group=30274&search_page=M
Caffeine consumption is a common strategy to enhance alertness, particularly among medical students managing intense academic demands. This study examines caffeine intake across different stages of medical training (first-year [M1], second-year [M2], and third-year [M3] medical students) to determine whether intake increases as students progress. M1–M3 students at a California medical school completed an anonymous survey (8/14/25–8/28/25) on weekly caffeine intake. Likert-scale questions assessed consumption and impact. SPSS 28 was used; nonparametric tests and Spearman’s correlation identified significant differences (adjusted p ≤ 0.05). Caffeine totals were calculated per item. Among 122 respondents, M3s consumed more caffeine from coffee than M1s (p = .028) and M2s (p = .010), and more from OTC drugs than M1s (p = .010) and M2s (p = .006). Higher modified CAGE scores (1–3) were linked to greater caffeine intake than a score of 0 (p < .001–.040). Caffeine use increased with training level and was highest in M3s, likely due to rising demands. Tea remained stable; soft drink use declined. M3s consumed more energy drinks and chocolate. Findings align with stress-related stimulant use. Limitations include the single-site design, self-report data, and lack of longitudinal follow-up or control for confounding variables.
Background: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity. Objective: To develop a secure, AI-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements. Methods: GrantCheck was built on a private AWS Virtual Private Cloud, integrating a rule-based natural language processing engine with large language models (LLMs) accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale. Results: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.000, recall of 0.73, and an F1 score of 0.84—outperforming general-purpose models including GPT-4o (F1 = 0.43), Deepseek R1 (F1 = 0.40), Llama 3.1 (F1 = 0.27), Gemini 2.5 Flash (F1 = 0.58), and even Gemini 2.5 Pro (F1 = 0.72). Usability testing among 16 faculty and staff participants yielded a mean System Usability Scale (SUS) score of 82.2, indicating positive user satisfaction with the tool’s interface, functionality, and workflow integration. Conclusions: GrantCheck demonstrates the feasibility of deploying institutionally hosted, AI-driven systems to support compliant and researcher-friendly grant writing. Its hybrid architecture ensures high performance and privacy while reducing administrative burden in navigating shifting language policies.
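The reported precision, recall, and F1 come from comparing detected flagged terms against annotated ground truth. A minimal, hypothetical sketch of a rule-based detector and the metric calculation; the term list and sample text are placeholders, not GrantCheck's actual rules or data.

```python
# Hedged sketch: rule-based flagged-term detection plus precision/recall/F1.
# FLAGGED_TERMS and the example text are placeholders, not the tool's rule set.
import re

FLAGGED_TERMS = {"example-term-a", "example-term-b"}  # hypothetical restricted phrases

def detect_flagged(text):
    tokens = re.findall(r"[a-z\-]+", text.lower())
    return {t for t in tokens if t in FLAGGED_TERMS}

def precision_recall_f1(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

gold = {"example-term-a"}                              # annotated ground truth (placeholder)
pred = detect_flagged("This proposal mentions example-term-a in its aims.")
print(precision_recall_f1(pred, gold))
```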
Background: Right ventricular dysfunction (RVD) is an important predictor of outcomes in patients with heart disease and has significant prognostic implications for a range of diseases, including ischaemic and non-ischaemic cardiomyopathy, valvular heart disease, congenital heart disease, and pulmonary hypertension. The routine diagnosis of right heart structural abnormalities at an earlier stage, ideally when only structural abnormalities are present but patients are not yet symptomatic, represents an elusive but critical goal in the field of cardiology. Objective: The objective of this study is to apply deep learning (DL) analysis to chest X-rays (CXRs) in order to accurately detect specific structural abnormalities, thus facilitating the early identification of patients exhibiting RVD and improving outcomes. Methods: Approximately 10,000 adult CXRs with corresponding echocardiographic labels of right ventricular function, including right ventricular fractional area change (RVFAC), tricuspid annular plane systolic excursion (TAPSE) and right ventricular myocardial performance index (RVMPI), are currently being collected within a 12-month period. The study will employ DenseNet-121 for the interpretation of CXRs with the objective of identifying the presence of RVD. The DL model will be evaluated against independently collected and labelled sets of CXRs. Subsequently, three main assessments of the model will be performed: validation of the trained model on a separate dataset of patients who have been treated in the emergency department, validation on another independent test dataset obtained from healthy volunteers, and comparison of the model's performance against that of five radiologists on a sample of CXRs. The primary objective of the study protocol is the creation of a DL model that is capable of accurately identifying RVD from inexpensive and prevalent examinations of CXRs. Results: The collected data will be synthesized to evaluate the program's acceptability and the feasibility of study procedures. Ethical approval was obtained in November 2024, followed by the initiation of participant recruitment. Data collection is scheduled to continue through December 2025. Conclusions: This feasibility trial may confirm that deep learning can identify right ventricular dysfunction from chest X-rays. Clinical Trial: Trial registration number: Chinese Clinical Trial Registry, ChiCTR2500095838.
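The protocol names DenseNet-121 for CXR interpretation. A minimal sketch of how such a binary RVD classifier could be set up with torchvision, assuming a single-logit head and binary cross-entropy loss; the input size, weight initialization, and optimizer are illustrative assumptions, not the study's training configuration.

```python
# Hedged sketch: DenseNet-121 adapted for binary RVD classification from CXRs.
# Hyperparameters and preprocessing are assumptions, not the protocol's settings.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)                        # torchvision >= 0.13 API; pretrained weights optional
model.classifier = nn.Linear(model.classifier.in_features, 1)   # single logit: RVD vs. no RVD

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)                                 # placeholder CXR batch
y = torch.tensor([[0.], [1.], [0.], [1.]])                      # placeholder echo-derived labels
loss = criterion(model(x), y)                                   # one training step
loss.backward()
optimizer.step()
```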
Background: Research has shown that mutations in the KRAS, NRAS, and BRAF genes are linked to resistance to anti-EGFR therapies in colorectal cancer (CRC) patients. HER2-targeted therapies are increasingly being recommended for individuals with HER2 overexpression. Objective: The evaluation of KRAS, NRAS, BRAF, and HER2 statuses has become an important part of precise diagnosis for CRC. However, conventional molecular or protein testing can be time-consuming and expensive. This study aims to predict the status of KRAS, NRAS, BRAF, and HER2 through the analysis of whole-slide pathology features from CRC samples stained with Hematoxylin-Eosin (H&E) for KRAS, NRAS, and BRAF, and by utilizing Immunohistochemistry (IHC) for HER2. Methods: In this study, 435 CRC patients were enrolled from Jiangsu Province Hospital of Chinese Medicine. Using the clustering-constrained attention-based multiple-instance learning (CLAM) model, we constructed four models for predicting the statuses of KRAS, NRAS, BRAF, and HER2 based on whole-slide images (WSIs). Results: Our proposed four CLAM models demonstrated encouraging predictive performance, with all AUC values exceeding 0.88. Our model-generated heatmaps showing KRAS, NRAS, BRAF mutation patterns and HER2 expression levels generally matched the regions identified by the pathologists. Conclusions: Our method provides new insights to predict gene mutations and protein expression using deep learning. These predictions can act as a prescreening tool, improving cost efficiency before the use of next-generation sequencing (NGS), amplification refractory mutation system-polymerase chain reaction (ARMS-PCR) and Immunohistochemistry (IHC). This approach ultimately enhances the effectiveness of precision medicine and improves the consistency of quality in physicians’ slide evaluations.
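CLAM aggregates patch-level features from a whole-slide image with attention-based multiple-instance pooling, which is also what yields the heatmaps mentioned in the Results. A simplified, hypothetical sketch of gated-attention pooling in PyTorch; the feature dimensions and two-branch attention form are assumptions, not the authors' exact configuration.

```python
# Hedged sketch: gated attention-based multiple-instance pooling in the spirit of CLAM.
# Dimensions are illustrative; patch features would come from a pretrained encoder.
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256, n_classes=2):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hid_dim, 1)
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, patch_feats):                                # (n_patches, in_dim)
        a = self.attn_w(self.attn_v(patch_feats) * self.attn_u(patch_feats))  # (n_patches, 1)
        a = torch.softmax(a, dim=0)                                # attention weights -> heatmap
        slide_feat = (a * patch_feats).sum(dim=0)                  # attention-weighted slide embedding
        return self.classifier(slide_feat), a

model = GatedAttentionMIL()
logits, attention = model(torch.randn(500, 1024))                  # 500 placeholder patch embeddings
```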
Background: Over 1.9 billion people are classified as obese or overweight. Digital technologies have had varying effects on weight loss among participants. Precision health exploits the benefits of digital health and data by providing highly tailored, or personalised, pathways to service users. Obesity is a multi-morbid condition caused in part by lifestyle risk factors including dietary choices, sleep hygiene, exercise and mental health. Objective: The objective was to assess the effect of participants choosing their health “focus” on joining the NHS-certified Gro Health app to support holistic remote weight management. Methods: Participants were invited to engage with a precision behavioural change tool that addresses the four pillars of health (mental health/wellbeing, nutrition, sleep and exercise) via the Gro Health app. Participants were referred by primary care teams or local authorities and invited to use the tool’s education programmes, access multidisciplinary team (MDT) health coaching and track their health. Gro Health onboards users to self-select a “Focus” on either Sleep, Exercise, Nutrition, or Wellbeing. Outcome variables (weight in kg, HbA1c in %, PHQ-8 depression score, Karolinska Sleepiness Scale [KSS] and Patient Activation Measure [PAM]) were compared across “focus” groups. Results: A total of 438 participants (mean age: 41.9 ± 13 years; mean starting weight: 95.59 ± 5.6 kg) downloaded the app. 72% selected nutrition as their focus. The greatest average weight loss, 7.0 kg, was observed in this group. Improvements in sleepiness and depression were highest among those selecting sleep as their focus. Conclusions: User-driven focus selection influences health outcomes. Precision behavioural change tools like Gro Health can support sustainable, tailored health improvements across multiple domains and represent a scalable method for delivering personalised weight management.
Background: Cancer survivors are likely to face physical, mental, financial, social, and emotional difficulties, regardless of whether or when they receive treatment. Many cancer survivors report an inability to understand the explanations of health care professionals as well as other poor communication. However, empirical evidence for such “poor communication” remains scarce. Objective: The purpose of this study was to clarify the information sources that are trusted by cancer survivors according to patient attributes. Specifically, we classified patients according to sex, treatment status, and cancer type to determine the best approach for disseminating appropriate information according to patient trends. Methods: We administered a cross-sectional survey to 350 cancer survivors aged 20–80 years according to the Checklist for Reporting Results of Internet E-Surveys. Items in the preliminary survey included sociodemographic information, “cancer stage,” “current treatment status,” “date of cancer diagnosis,” and “date of termination of cancer treatment,” and those in the main survey included “what you have researched about cancer,” “what are your cancer information sources,” “what social media sites or applications do you use to collect cancer information,” “information seeking difficulties,” “reliable information sources,” “intention to use hospital-recommended counseling support and information gathering applications and services,” “advantages of using hospital-recommended counseling support and information gathering applications and services,” “communication with surroundings,” and the Japanese version of the 10-Item Personality Inventory (which measures the Big Five characteristics of extraversion, conscientiousness, agreeableness, openness, and neuroticism). Data were analyzed using latent class analysis (LCA), and Kruskal-Wallis and Dunn-Bonferroni tests were used to compare the latent classes. Results: The LCA identified three classes: a group of women under follow-up, a group of men under follow-up, and a group under treatment. There were significantly more people who reported that they “could not ask the doctor questions” in the group under treatment than in the group of men under follow-up (P = .01), the latter of whom also had a higher tendency for neuroticism (P = .02). The male group undergoing follow-up care had significantly higher responses for “my doctor was easy to consult” (P < .001) and “I felt my doctor was knowledgeable and experienced” (P = .01) than the other groups, which confirmed their tendency to value smooth communication with their doctors. Conclusions: We revealed differences in trust tendencies and psychological characteristics of information sources among sex and treatment stage groups. These findings indicate that cancer survivors seek different types of support in regard to information gathering depending on their treatment status and sex.
Background: Cancer is one of the leading causes of death worldwide. Cancer mortality can be reduced by early detection via screening, diagnosis, and effective management. Risk assessment is a vital part of the cancer screening process, especially for breast, cervical, and esophageal cancers, where early detection improves outcomes. Identifying high-risk individuals based on family history, genetics, lifestyle, and environment makes targeted and personalized screening possible, enhancing accuracy and resource efficiency. The inherent complexity of oncology data, which includes a wide array of clinical observations, laboratory results, radiology images, treatment regimens, and genetic information, poses significant challenges to data interoperability and exchange. Objective: We propose a Fast Healthcare Interoperability Resource (FHIR) standard-based Oncology Data Model (ODM) that enables the capturing, sharing, and processing of oncology data at various phases in cancer care across the health systems. We particularly focus on screening for five types of cancers, i.e., Breast, Cervical, Esophageal, Lung, and Oral, for risk assessment using the FHIR Questionnaire Resource for use in the Meghalaya FIRST Cancer Care (FCC) pilot project in India. Methods: ODM was developed based on the data collected during the cancer patient journey across five key phases: encounter, risk assessment, clinical investigation, treatment, and outcome. Essential oncology data elements were identified and modeled using HL7 FHIR R4 standards. Custom FHIR profiles were created for cancer-specific use cases, along with terminology mapping to standard coding systems such as SNOMED CT, LOINC, and ICD-10. The implementation guide was generated using FHIR Shorthand (FSH), SUSHI, and the HL7 IG Publisher. A demonstration application was also developed to support stakeholder training and facilitate adoption. Results: The data model was developed using HL7 FHIR to enhance interoperability across the cancer care continuum, from screening to treatment. The implementation resulted in the creation of a FHIR Implementation Guide featuring 25 oncology-specific resources and 50 standardized terminology value sets to support consistent and semantically accurate data exchange.
Central to the model were the FHIR Questionnaire and QuestionnaireResponse resources, which were customized to enable interoperable, structured data collection in both clinical and community-based digital health settings. These profiles were designed to support critical cancer screening and assessment workflows. The demonstration tool enabled hands-on exploration of the FHIR profiles and supported engagement with stakeholders. This comprehensive approach supports more integrated, data-driven oncology care within digital health systems. Conclusions: The development of standardized profiles for cancer screening and assessment is a transformative approach to achieving syntactic and semantic interoperability from screening and diagnosis through treatment, and to improving overall cancer care and health service delivery. This work explores the implementation of the Questionnaire and QuestionnaireResponse resources using digital health standards. When integrated across all stages of the cancer patient journey, the approach can accelerate cancer care and support a more responsive and effective healthcare system.
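For readers unfamiliar with these resources, here is a minimal, illustrative FHIR R4 Questionnaire and QuestionnaireResponse pair expressed as Python dicts ready for JSON serialization. The linkIds, question text, and canonical reference are placeholders for illustration only, not artifacts from the published FCC implementation guide.

```python
# Hedged sketch: a minimal FHIR R4 Questionnaire/QuestionnaireResponse pair.
# All identifiers and question text are hypothetical placeholders.
import json

questionnaire = {
    "resourceType": "Questionnaire",
    "status": "draft",
    "title": "Breast cancer risk assessment (illustrative)",
    "item": [
        {"linkId": "family-history", "text": "First-degree relative with breast cancer?", "type": "boolean"},
        {"linkId": "age", "text": "Age in years", "type": "integer"},
    ],
}

response = {
    "resourceType": "QuestionnaireResponse",
    "questionnaire": "http://example.org/fhir/Questionnaire/breast-risk-example",  # hypothetical canonical
    "status": "completed",
    "item": [
        {"linkId": "family-history", "answer": [{"valueBoolean": True}]},
        {"linkId": "age", "answer": [{"valueInteger": 52}]},
    ],
}

print(json.dumps(questionnaire, indent=2))  # ready to POST to a FHIR server's Questionnaire endpoint
```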
Background: Hypertension is associated with a high rate of disability and mortality and leads to a substantial socioeconomic burden. Moxibustion is an external treatment in traditional Chinese medicine that has been used to treat mild to moderate hypertension in individuals with a phlegm-dampness constitution and is thought to show acupoint specificity. Objective: A standard large-scale randomized clinical trial to verify its effectiveness is still needed. This study proposes to examine the clinical effectiveness and potential cardio-protective benefits of moxibustion at home as a treatment for individuals with phlegm-dampness hypertension. Methods: This study is a multi-center, randomized, controlled trial. A total of 120 patients with mild to moderate hypertension and phlegm-dampness constitution will be recruited and randomly assigned in a 1:1 ratio to the treatment group (acupoint: Zusanli, ST36) or the control group (acupoint: Xuanzhong, GB39). All patients will receive 12 weeks of treatment and a 12-week follow-up period. Results: The primary outcome measure is the change in morning systolic blood pressure from baseline to week 12. The secondary outcome measures include blood pressure-related indicators (morning diastolic blood pressure, average systolic blood pressure, average diastolic blood pressure, nighttime systolic blood pressure, nighttime diastolic blood pressure, blood pressure circadian rhythm) and the short-term blood pressure variability coefficient, all of which will be measured by 24-hour ambulatory blood pressure monitoring. Additionally, cardiac-related indicators measured by 24-hour Holter monitoring, metabolic disorder-related indicators, liver and kidney function indicators, transformed scores of the TCM phlegm-dampness constitution scale, and the Montreal Cognitive Assessment (MoCA) will also be evaluated. Conclusions: This multi-center, randomized, controlled clinical trial will provide evidence on the clinical treatment effectiveness and potential cardio-protective benefits of moxibustion at home as a treatment for individuals with phlegm-dampness type hypertension. Clinical Trial: This study was registered in the Chinese Clinical Trial Registry; registry name: Clinical efficacy of moxibustion at Zusanli (ST36) in protecting against cardiovascular and cerebrovascular diseases in phlegm-dampness type hypertension; trial registration number: ChiCTR2400086582; registration date: July 5, 2024; https://www.chictr.org.cn/showpro.ChiCTR2400086582
Background: Indigenous men in Australia face the highest rates of morbidity and mortality, coupled with the lowest use of healthcare services. Despite this, their attendance at Emergency Departments (EDs) is double that of any other demographic group in the country. Additionally, Indigenous men are often discharged from EDs more rapidly than other groups, a pattern linked to adverse health outcomes and possibly reflecting dissatisfaction with the services on offer. Objective: Grounded in the Health Citizenship Framework and the Social and Emotional Wellbeing (SEWB) model, this study aims to explore the experiences of Indigenous men in EDs and identify culturally appropriate and responsive pathways to improve care and engagement. Braun and Clarke's (2006) six stages of reflexive thematic analysis will guide the data analysis. The Health Citizenship Framework will facilitate the exploration of autonomy, participation, and respect in healthcare interactions. The SEWB Framework will ensure the research reflects a culturally grounded, holistic understanding of wellbeing, incorporating connections to family, culture, spirit, and community. Methods: Semi-structured interviews will be conducted with 10 to 15 Indigenous men aged 18 years and older who have been referred to Alcohol and Other Drugs (AOD) services after presenting to the ED. Results: Preliminary analysis revealed five key themes influencing Indigenous men’s experiences in EDs: cultural safety and stigma, disempowerment due to a lack of communication, limited access to Aboriginal liaison services, feelings of shame associated with alcohol use, and systemic barriers to follow-up care. Participants emphasised the importance of trust, culturally competent staff, and stronger referral pathways to Alcohol and Other Drugs (AOD) services and community support. Conclusions: This study highlights the urgent need to embed cultural safety principles and Indigenous-led care models within ED settings. Addressing communication gaps, providing consistent access to Aboriginal support services, and strengthening continuity of care are essential for improving health outcomes for Indigenous men. The findings offer practical guidance for developing more inclusive, respectful, and effective service delivery pathways across ED and AOD care settings.
Background: The enhancement of primary care and the prevalence of chronic diseases are key issues worldwide, especially in Canada. As the incidence of chronic illnesses rises, they have emerged as the foremost cause of mortality worldwide. This trend has led to a surge in demand for healthcare services, placing significant pressure on primary care systems. The evolving and multidimensional nature of the chronic disease situation creates challenges that can affect the quality of care offered to patients. A lack of communication directly affects relational continuity, i.e., the sharing of information from previous events and circumstances, to ensure that care is appropriate to the individual and his or her problem. Patients living with chronic disease may also perceive contradictory recommendations from different professionals, which undermines their potential for self-management. These challenges highlight the importance of establishing clear patient pathways within interprofessional teams, ensuring that information is shared efficiently, and that the continuity of care is coordinated effectively, especially in a telehealth context. With the arrival of the pandemic in 2019, demand for telehealth surged and it emerged as a crucial resource for patients with chronic illnesses. This resource was implemented with no specific infrastructure, often without patient support, and left to the discretion of individual professionals. Interprofessional collaboration plays a critical role in the use of telehealth in managing chronic diseases. Despite its advantages, telehealth can have negative effects on interprofessional teamwork if used sub-optimally. Objective: This study aims to understand the interprofessional collaboration (IPC) process as experienced by patients in a telehealth context within primary care, with a focus on patient engagement. More specifically, the study's objectives are: 1) to describe the IPC process in telehealth within primary care from the perspective of patients living with chronic conditions; 2) to identify, in collaboration with patients living with chronic disease, the barriers and facilitating factors of this process; 3) to understand the engagement of these patients in relation to the IPC process in a telehealth context. Methods: To describe the process of interprofessional collaboration in the telehealth context in primary care from the perspective of patients living with chronic disease, this qualitative research is based on a constructivist research methodology. The research team constructs knowledge derived from the interpretation of information obtained during the interviews with participants. To meet the study's objectives, qualitative journey mapping data collection will be carried out, following the approach of Trebbel et al. (2010). Individual interviews will be analyzed iteratively. This method is useful for this research as it visually and collaboratively captures patients' lived experiences. Results: Data collection was completed between May 2024 and November 2024. A total of 22 interviews were conducted. The project is currently in progress, with multiple papers being drafted for publication in peer-reviewed journals. Conclusions: The results of this study will support and improve the interprofessional collaboration process in the telehealth context by providing concrete insights into patients’ experiences, identifying gaps and strengths in current collaborative practices, and offering evidence-based recommendations.
Journey mapping will help identify potential facilitating factors for improving primary care in the telehealth context according to the patient's journey. Results will be used to build a practical guide (in phase 2) supporting interprofessional collaboration in the primary care telehealth context.
Background: Sepsis is a major global health concern, particularly given its high morbidity and mortality rates. Despite its clinical significance, the public awareness of sepsis remains limited. Objective: Therefore, since short videos are increasingly becoming a vital medium for health education, we aimed to systematically assess sepsis-related short videos’ content quality, information coverage, and dissemination performance across major social media platforms, as well as to identify the key factors influencing their communication effectiveness and educational utility. Methods: This mixed-methods study integrated questionnaire data and video content analyses. The questionnaires were distributed among 200 participants to assess sepsis awareness and short video usage preferences. Meanwhile, 140 sepsis-related videos were collected from TikTok, Bilibili, and WeChat and evaluated using the Global Quality Score (GQS), the modified DISCERN tool, and a six-dimension content coverage framework. Finally, communication performance was assessed through user engagement metrics and other related indicators. Results: Compared to videos from the media or individual publishers, physician-produced videos had significantly higher GQS and DISCERN scores (p < 0.001). Additionally, the Intensive Care Unit (ICU) and chief physicians produced the highest-quality content. Furthermore, high-quality videos (GQS > 3, DISCERN > 3) correlated with greater content retention and diffusion. We also noted a mismatch between the content provided and public information needs. Specifically, practical topics such as symptoms and prevention were underrepresented. Additionally, although emotional elements and clickbait-style titles moderately enhanced engagement, they did not substitute for content quality. Moreover, various platform-specific benefits were identified including TikTok facilitating rapid exposure, Bilibili supporting structured learning, and WeChat enabling socially driven redistribution. Conclusions: Although short video platforms hold great promise for sepsis education, challenges of inconsistent quality, limited coverage, and misalignment with audience needs persist in current content. Therefore, enhancing professional accuracy, optimizing structural design, and tailoring strategies to platform characteristics would improve the educational impact of the videos, ultimately promoting early sepsis diagnosis and treatment.
Objective: To investigate the current status of the construction of artificial intelligence (AI) medical education platforms in medical schools and student feedback, and to understand the practical needs of medical students at different stages and from different disciplines regarding AI-empowered medical education, in order to provide guidance for better construction of intelligent medical education platforms.
Methods: An anonymous self-administered online questionnaire was conducted, focusing on the current use of AI-assisted learning by medical students, feedback on the construction of intelligent medical education platforms by their respective schools, and expected functionalities. Statistical analysis was conducted using SPSS 27.0, with a significance level set at P=0.05 for all tests.
Results: A total of 428 valid questionnaires were collected. The average frequency of AI-assisted learning among medical students was (5.06±0.10) times per week. Over 80% of students used more than two AI tools in their daily study and work. The average satisfaction score with the intelligent education platforms at their schools was (72.23±21.84), with significant individual differences. Students from different disciplines, education stages, and academic systems exhibited different usage patterns and expectations for the platforms.
Conclusion: AI technology is widely accepted by medical students and is extensively applied. There are significant differences in usage patterns among students from different disciplines, education stages, and academic systems. Understanding the actual needs of students is crucial for the construction of intelligent medical education platforms.
Background: The integration of artificial intelligence (AI) into healthcare necessitates a paradigm shift in medical education. Preparing future physicians for AI-enhanced clinical environments requires a nuanced understanding of their readiness and the factors that influence it—particularly within digitally mediated learning contexts. Objective: To investigate the level of AI readiness and digital competencies among medical students and to identify key demographic and educational predictors of AI preparedness. Methods: A cross-sectional survey was conducted among 256 medical students at a single academic institution. Instruments included the validated Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) and a digital competency scale. Statistical analyses comprised descriptive metrics, correlation analyses, and multivariate linear regression modeling. Results: Students reported moderate AI readiness (MAIRS-MS mean = 45.66 ± 18.85). Regression analysis demonstrated that interest in AI was the strongest independent predictor of readiness (β = –17.31, p < 0.001), followed by gender (β = 4.27, p = 0.033) and age (β = 1.57, p = 0.044). Participation in seminars or coursework related to AI was not significantly associated with readiness. The model accounted for 31.4% of the variance (adjusted R² = 0.297). Conclusions: AI readiness in medical education emerges as a multidimensional construct shaped more by learner characteristics than by current educational exposures. These findings call into question the sufficiency of short-term curricular interventions and highlight the need for longitudinal, interest-responsive, and equity-focused AI training pathways. Embedding such frameworks early in medical training may be critical for cultivating a workforce capable of navigating an AI-driven future in medicine.
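A minimal sketch of the kind of multivariate linear regression reported here (MAIRS-MS score regressed on interest in AI, gender, and age), using statsmodels on synthetic data. The variable codings and Likert range are assumptions, and the coefficients will not reproduce the paper's estimates.

```python
# Hedged sketch: OLS regression of AI readiness on learner characteristics.
# The data frame is synthetic; column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 256
df = pd.DataFrame({
    "mairs_ms": rng.normal(45.7, 18.9, n),        # MAIRS-MS total score (simulated)
    "interest_in_ai": rng.integers(1, 6, n),      # 1-5 Likert interest rating (assumption)
    "gender": rng.choice(["female", "male"], n),
    "age": rng.integers(18, 30, n),
})

model = smf.ols("mairs_ms ~ interest_in_ai + C(gender) + age", data=df).fit()
print(model.summary())                            # betas, p-values, adjusted R-squared
```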
Background: Information gathering is the foundational skill of clinical reasoning. However, residents and attending physicians have no objective insight into residents’ developing information gathering skills in the electronic health record (EHR). EHR audit logs, time-stamped records of user activities, can provide a wealth of information about how residents gather information about patients at the time of admission and throughout daily rounding. Objective: In this study, our goals were to: 1. Understand and delineate attending physician expectations of residents’ EHR-based information gathering activities at different stages of residency, 2. Develop a system, referred to as the Trainee Digital Growth Chart (a.k.a. the Growth Chart), using the EHR audit logs to audit and feed back information gathering performance to residents and their attending physicians, and 3. Pilot the Growth Chart among pediatric residents on pediatric hospital medicine (PHM) rotations to understand whether audit and feedback data on EHR-based information gathering is helpful in supporting resident learning and assessment. Methods: We convened a focus group of PHM attending physicians to establish information gathering benchmarks for residents at each stage of their training. Residents and attendings were involved in the co-design of an information gathering performance electronic dashboard called the Trainee Digital Growth Chart. This dashboard was piloted in an observational cohort study among PHM residents and attending physicians during the 2023-24 academic year. Results: Considerable variability was observed as focus group attendings established training-stage-specific benchmarks. During the pilot, residents and attendings logged into the Growth Chart to observe performance at moderate to high rates. However, despite their involvement in its co-design, most participants did not find great value in the Growth Chart. Even so, as an intervention, viewing prior Growth Chart information gathering performance had a positive impact on future information gathering performance among first-year residents on daily rounds when that performance was also discussed with an attending physician. Conclusions: Information gathering is at the foundation of clinical reasoning. However, no competency-based benchmarks for information gathering in the EHR exist. Opportunities exist to leverage the EHR audit logs to feed back performance information to trainees, thereby influencing future information gathering behaviors. This is particularly powerful when done early in training before habits form, and when done in conjunction with verbal review with an attending physician. Such tools must find their way into routine clinical workflows and be capable of providing real-time or near real-time feedback before perceived educational value will be realized. Nevertheless, these approaches have broad potential to scale across specialties and allied health disciplines.
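To illustrate the underlying data, audit-log events can be aggregated into per-resident, per-patient information-gathering counts of the sort a Growth Chart dashboard could display. A minimal sketch with pandas; the column names and event taxonomy are assumptions, not the study's actual EHR schema.

```python
# Hedged sketch: aggregating EHR audit-log events into review metrics per resident.
# Columns and action names are hypothetical placeholders.
import pandas as pd

audit = pd.DataFrame({
    "resident_id": ["r1", "r1", "r2", "r2", "r2"],
    "patient_id":  ["p1", "p1", "p1", "p2", "p2"],
    "action":      ["view_note", "view_labs", "view_note", "view_meds", "view_labs"],
    "timestamp":   pd.to_datetime([
        "2024-01-05 07:10", "2024-01-05 07:12",
        "2024-01-05 07:20", "2024-01-05 07:25", "2024-01-05 07:26",
    ]),
})

# Distinct information-gathering actions per resident per patient before rounds.
summary = (audit.groupby(["resident_id", "patient_id"])["action"]
                .nunique()
                .rename("distinct_review_actions")
                .reset_index())
print(summary)
```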
Background: The exponential growth of medical knowledge presents a paradox for modern medical education. While access to information is immediate, applying it in a clinically meaningful way remains a challenge. Large language models (LLMs), such as ChatGPT, are widely used for information retrieval, yet their role in dynamic, high-pressure clinical learning remains poorly understood. Objective: To evaluate whether access to an LLM improves decision-making, teamwork, and confidence in trauma education for medical students. Methods: This randomized controlled pilot study involved 40 final-year medical students participating in a trauma simulation session. Students self-selected into teams of 4–6 and were randomized to either an LLM-assisted group (ChatGPT-4o mini) or a control group without LLM access. All teams completed 18 video-based trauma scenarios requiring time-sensitive clinical decisions. Prompting was unrestricted. Confidence and trauma exposure were assessed using pre/post questionnaires. Facilitators rated teamwork (1–5), decision accuracy, and response times. Knowledge retention was measured four weeks later via an online quiz. Results: Confidence in trauma management improved in both groups (p < .001), with larger gains in the non-LLM group (p = .02). LLM support did not enhance decision accuracy or speed and was associated with longer response times in some complex cases. Teams without LLMs demonstrated more active discussion and scored higher in teamwork ratings (median 5.0 vs. 3.5; p = .033). Students primarily used the LLM for fact-checking but reported vague or overly general responses. Knowledge retention was high across both groups and did not differ significantly (p = .332). Conclusions: While students appreciated the inclusion of AI, unstructured LLM use did not improve performance and may have disrupted group reasoning. This pilot study highlights the need for structured AI integration and targeted instruction in AI literacy. Simulation-based trauma education proved effective and well received, but optimizing the educational value of LLMs will require thoughtful curricular design. Further studies with more students are needed to define best practices for LLM use in clinical education. Clinical Trial: https://doi.org/10.17605/OSF.IO/7HF3V
Background: Out-of-pocket costs pose a significant barrier to participating in cancer clinical trials (CCTs). Financial reimbursement programs (FRPs) that reduce the burden of out-of-pocket costs can support participation in CCTs if the information is readily available to participants at the time of enrollment. Prior studies have shown the importance and impact of FRPs, but despite improvements, significant barriers remain. Objective: This study was designed to explore the feasibility and acceptability of automated texts designed to offer, screen, and enroll CCT participants in an FRP for out-of-pocket travel and lodging costs related to clinical trials. Methods: This study employed a mixed methods approach. Eligible participants were those who consented to a breast, leukemia, or CAR-T trial at an NCI comprehensive cancer center.
Quantitative data were collected through engagement metrics, including text response rates and enrollment rates, as well as patient-reported satisfaction scores. Program enrollment rates were used to determine feasibility, whereas the engagement metrics were used to measure acceptability of the program. Semi-structured interviews were conducted with a subsample of patients who responded to at least one of the FRP texts and agreed to be interviewed to determine the barriers and facilitators of enrolling in the IMPACT program via text, perceived advantages and disadvantages of the text messaging program compared to a phone call, and overall feedback on the acceptability of the automated text messaging program. Results: Among the 77 patients who consented to CCTs across the three trial teams, only 51 were referred to the IMPACT team (n=26 not referred for unknown reasons). Quantitative data, including engagement with texts and FRP eligibility screening and enrollment rates, were collected from all participants who successfully received a text (n=51), and qualitative data were collected from a subsample of participants who agreed to participate in a semi-structured interview (n=28) about the text-based program. Participants' mean age was 58 years (SD 12); approximately 64% of participants were female, 21% were Black, and 4% were Hispanic or Latino.
There was high engagement with texts (96.1%), screening for FRP eligibility (51.0%), overall FRP enrollment rates (62.5%), and high satisfaction (Net Promoter Score=51). The text-based platform streamlined the enrollment process, allowing one-third of patients to complete enrollment independently, without assistance from the FRP coordinator. Reported facilitators for completion of the text conversation included support from the coordinator and introduction of the FRP by CCT teams. Barriers were a lack of communication from CCT teams, patient skepticism about legitimacy of the texts, and limited program information via text. Conclusions: Despite the small sample size and single study site, these findings suggest that automated text messaging can be an effective, low-cost and scalable strategy to increase awareness and streamline enrollment in FRPs.
Background: The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) is a federal nutrition assistance program for low-income, food-insecure mothers and young children in the United States. Despite its intended goals, many eligible individuals forgo WIC benefits, in part due to administrative burden – onerous experiences encountered when navigating public benefits programs. In response, a range of digital interventions and policy waivers were introduced during the COVID-19 pandemic, but their effectiveness in reducing access barriers remains unclear. Objective: This study examined the effectiveness of digital interventions for WIC by analyzing user reviews of WIC smartphone apps utilized by local agencies. Specifically, it investigated (a) how obstacles to WIC access manifested in daily interactions with these apps, (b) how user experiences changed after the onset of the COVID-19 pandemic, and (c) how these changes were associated with program satisfaction. Methods: An original dataset of user reviews (N = 28,212) was compiled for 26 WIC smartphone apps between 2013 and 2024. Structural topic modeling identified eight key themes in the reviews and assessed changes in user experiences following COVID-19. A mixed-effects analysis was conducted to examine the relationship between identified themes and app ratings. Results: The structural topic modeling showed that WIC apps were largely effective in reducing access barriers and improving participation. Pre-COVID-19 reviews most often cited frustrations such as account authentication issues, insufficient customer support, document upload difficulties, and unsuccessful troubleshooting after updates. Although some technical challenges persisted, post-COVID-19 reviews reflected greater appreciation for features that alleviated obstacles, including program tracking, shopping and benefit redemption, and ease of use. Mixed-effects analysis indicated that topics more prominent in post-COVID-19 reviews were significantly associated with increased satisfaction: topics related to program tracking (B = 0.20, SE = 0.06, P = .001), shopping and redemption features (B = 0.18, SE = 0.07, P = .01), and ease of use (B = 0.10, SE = 0.05, P = .04) predicted higher app ratings. In contrast, topics reflecting administrative burden and access obstacles prior to COVID-19 were not significantly associated with app ratings. Conclusions: User-centered digital interventions can improve WIC access and participation by reducing administrative burdens and enhancing service delivery.
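As one way to picture the analysis described above, the sketch below pairs review-level topic prevalence with star ratings in a mixed-effects model with app-level random intercepts, using statsmodels. This is an illustrative reconstruction, not the study's code; the data file and column names are assumptions.

```python
# Minimal sketch: mixed-effects regression of star ratings on topic prevalence
# with a random intercept per WIC app. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("wic_app_reviews.csv")  # assumed columns: rating, topic_tracking, post_covid, app_id

model = smf.mixedlm(
    "rating ~ topic_tracking + post_covid",    # fixed effects
    data=reviews,
    groups=reviews["app_id"],                  # random intercept for each app
)
result = model.fit()
print(result.summary())
```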
Background: LGBTQIA+ researchers and participants frequently encounter hostility in online environments, particularly on social media platforms where public commentary on research advertisements can foster stigmatization. Despite a growing body of work on researcher harassment, little empirical research has examined the actual content and emotional tone of public responses to LGBTQIA+-focused research recruitment. Objective: This study aimed to analyze the thematic patterns and sentiment of social media comments directed at LGBTQIA+ research recruitment advertisements, in order to better understand how online stigma is communicated and how it may impact both researchers and potential participants. Methods: A total of 994 publicly visible Facebook comments posted in response to LGBTQIA+ recruitment ads (January–May 2024) were collected and analyzed. Text preprocessing included tokenization, stop-word removal, and lemmatization. Latent Dirichlet Allocation (LDA) was used to identify latent themes across the dataset. Sentiment analysis was conducted using the Bing Liu and NRC lexicons, with scores ranging from -1 (most negative) to 1 (most positive). Linguistic Inquiry and Word Count (LIWC) was employed to quantify psychological and moral language features. Comments were also manually coded into four audience target groups (researchers, LGBTQIA+ community, general public, other commenters), and language category differences were analyzed using one-way ANOVAs with Bonferroni corrections. Results: Topic modeling identified three key themes: (1) “Transitions, Health, and Gender Dysphoria,” (2) “Negative and Confrontational Language,” and (3) “Religious and Ideological Debates.” Topic 2 had the highest average prevalence (γ = 0.486). Sentiment analysis revealed negative mean sentiment scores for all three topics: Topic 1 (-0.41), Topic 2 (-0.21), and Topic 3 (-0.35). No topic exhibited a statistically significant predominance of positive sentiment. A one-way ANOVA showed significant differences in linguistic tone across target groups: negative tone (F(3, 990) = 12.84, P < .001), swearing (F(3, 990) = 16.07, P < .001), and anger-related language (F(3, 990) = 9.45, P < .001), with the highest levels found in comments directed at researchers. Comments targeting LGBTQIA+ individuals showed higher references to mental illness, morality, and threats to children. While a small proportion of comments offered affirming responses, they were embedded in adversarial exchanges and did not offset the broader negativity. Conclusions: This study documents a persistently hostile online environment for LGBTQIA+ research, where researchers are frequently dehumanized and LGBTQIA+ identities are pathologized. These findings reinforce stigma communication models and suggest a need for institutional responses that include mental health support, enhanced moderation tools, and policy advocacy. Future research should investigate how hostile discourse affects researcher well-being and recruitment outcomes, and evaluate interventions to foster more respectful engagement with LGBTQIA+ studies.
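For readers who want a concrete picture of the pipeline described above, here is an illustrative Python sketch combining LDA topic modeling with a one-way ANOVA of a negative-tone score across manually coded target groups. It uses scikit-learn and SciPy rather than the study's exact tooling, and the data file and column names are assumptions.

```python
# Illustrative sketch, not the study's pipeline: 3-topic LDA on preprocessed
# comments, then a one-way ANOVA of negative tone across audience target groups.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.stats import f_oneway

comments = pd.read_csv("facebook_comments.csv")  # assumed columns: text, target_group, neg_tone

# Bag-of-words representation with basic English stop-word removal
vectorizer = CountVectorizer(stop_words="english", min_df=2)
dtm = vectorizer.fit_transform(comments["text"])

# Fit a 3-topic model (the abstract reports three themes)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(dtm)  # per-comment topic proportions

# One-way ANOVA: does negative tone differ across the four target groups?
groups = [g["neg_tone"].values for _, g in comments.groupby("target_group")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```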
Background: Endometriosis is a gynecological condition in which endometrial tissue implants outside the uterine cavity. About 1.2 million women suffer from this disease in Bangladesh. Objective: The purpose of this study was to explore the factors associated with endometriosis, its symptoms, and its clinical treatment in Bangladesh. Methods: In this case-control study of 162 women, 82 had endometriosis confirmed by laparoscopy or transvaginal ultrasound, and 80 women with normal pelvic ultrasound formed the control group. All completed a questionnaire covering demographic, reproductive, and menstrual characteristics. Comparisons between the two groups used the independent t-test, chi-square test, and a logistic regression model. A P-value < .05 was considered statistically significant. Results: The prevalence of endometriosis was higher with age (mean 25.78 ± 5.36 years; P = .01), marital status, and BMI (P < .05). The most common symptoms were dysmenorrhea, excessive bleeding, and cramping. Infertility (OR 2.21; 95% CI 1.07–4.53; P = .03), thyroid imbalance (OR 3.44; 95% CI 1.47–8.03; P = .004), irregular menstruation (OR 5.76; 95% CI 2.12–15.60; P = .001), and age at menarche (OR 2.54; 95% CI 1.04–6.21; P = .04) were the factors associated with endometriosis. Endometriosis was diagnosed most frequently by transvaginal ultrasound (TVS; 45.1%), nonsteroidal aromatase inhibitors (NSAIs) were the most commonly used medication (22.8%), and 17.9% of patients underwent laparoscopy for surgical treatment. Conclusions: This study identified several factors significantly associated with endometriosis among infertile women in Bangladesh. The prevalence of endometriosis was notably higher among women with increased age, specific marital statuses, and elevated BMI. Common symptoms included dysmenorrhea, excessive bleeding, and cramping. Key associated factors were infertility, thyroid imbalance, irregular menstruation, and early age at menarche. Transvaginal ultrasound emerged as the most frequently used diagnostic method, while nonsteroidal aromatase inhibitors were the most common form of medical treatment. These findings emphasize the importance of early screening and targeted interventions, as well as the need to enhance clinical awareness and access to care for women at risk of endometriosis.
Background: Traumatic brain injuries (TBIs) caused by gunshot wounds present complex clinical challenges with high mortality and disability rates. Early and structured rehabilitation may enhance recovery, especially in resource-limited settings like Bangladesh. Objective: This case study details the rehabilitation care of a 16-year-old boy who was shot in the head during Bangladesh's 2024 anti-discrimination movement, with an emphasis on improving functional outcomes through early mobilisation and an organised physiotherapy program. Methods: This report describes a 16-year-old boy who sustained a penetrating brain injury during a peaceful student protest in the 2024 anti-discrimination movement in Bangladesh. Following emergency neurosurgical intervention, he presented with severe neurological impairments, including right-sided hemiplegia, left lower limb paresis, spasticity, and postural instability. The patient underwent a three-month multidisciplinary rehabilitation program comprising 36 sessions. Interventions included early mobilisation, balance and vestibular training, neuromuscular electrical stimulation, manual therapy, and task-specific functional training. Rehabilitation followed post-concussion guidelines and was personalised based on symptom progression and functional response. Results: By the end of the rehabilitation program, pain levels decreased from 6–7/10 to 0/10. The patient's Sitting Balance Scale score improved from 0/44 to 31/44. He progressed from being wheelchair-bound to walking with moderate assistance, with enhanced trunk control, postural balance, and mobility. Conclusions: This case highlights the potential for significant recovery through early, individualised rehabilitation following severe gunshot-related TBI. It underscores the importance of integrating structured neurorehabilitation into trauma care, particularly in low-resource environments affected by sociopolitical unrest.
Background: Research capacity building (RCB) among healthcare professionals remains limited, particularly for those working outside academic institutions. Japan experiences a decline in original clinical research due to insufficient RCB infrastructure. Our previous hospital-based workshops showed effectiveness but faced geographical and sustainability constraints. We developed a fully online Scientific Research WorkS Peer Support Group (SRWS-PSG) model that eliminates geographical and time-bound constraints and establishes a sustainable economic model. Mentees use online materials, receive support from mentors via a communication platform after formulating their research question, and transition into mentors upon publication. Objective: We evaluated whether our model's theoretical benefits translated into actual program effectiveness in RCB among healthcare professionals. Methods: We conducted a retrospective cohort study of healthcare professionals who participated in the SRWS-PSG program between September 2019 and January 2025. Mentees received online mentoring for their systematic review projects. We evaluated time from mentee enrollment to manuscript submission, program continuation, and mentor response time. We collected data from online chat logs between mentees and mentors, and self-reported manuscript submission status, and analyzed data using descriptive statistics. Results: Of 85 mentees analyzed, 31 (36.5%) held academic degrees (PhD or MPH), and 68 (80.0%) were medical doctors. During a median follow-up of 10 months, 51 (60.0%) submitted manuscripts, and 46 (90.0%) became mentors. Ten mentees (12%) discontinued the program. Of 51 submitted manuscripts, 50 were published in English peer-reviewed journals. Median mentor response time was 0.8 hours, with 90% responding within 24 hours. Conclusions: The SRWS-PSG model effectively developed research capabilities among healthcare professionals. This fully online RCB program eliminates geographical barriers and provides an adaptable approach for research capacity development across diverse healthcare contexts. Clinical Trial: not applicable
The Safety Planning Intervention (SPI) produces a plan to help manage patients' suicide risk. High-quality safety plans – that is, those with greater fidelity to the original program model – are more effective in reducing suicide risk. We developed the Safety Planning Intervention Fidelity Rater (SPIFR), an automated tool that assesses the quality of SPIs using three large language models (LLMs): GPT-4, LLaMA 3, and o3-mini. Using 266 deidentified SPIs from outpatient mental health settings in New York, the LLMs analyzed four key steps: warning signs, internal coping strategies, making environments safe, and reasons for living. We compared the predictive performance of the three LLMs, optimizing scoring systems, prompts, and parameters. Results showed that LLaMA 3 and o3-mini outperformed GPT-4, with different step-specific scoring systems recommended based on weighted F1-scores. These findings highlight LLMs' potential to provide clinicians with timely and accurate feedback on SPI practices, enhancing this evidence-based suicide prevention strategy.
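The weighted F1-scores mentioned above compare LLM-assigned step-quality ratings against clinician ratings. A minimal sketch of that comparison follows, with hypothetical labels; it is not SPIFR itself.

```python
# Minimal sketch: comparing LLM-assigned quality scores for one SPI step
# against clinician gold-standard ratings with a weighted F1-score.
from sklearn.metrics import f1_score

clinician_scores = [2, 1, 0, 2, 1, 1, 2, 0]   # hypothetical gold labels (0-2 quality scale)
llm_scores       = [2, 1, 1, 2, 1, 0, 2, 0]   # hypothetical model outputs for the same plans

weighted_f1 = f1_score(clinician_scores, llm_scores, average="weighted")
print(f"Weighted F1 for this step: {weighted_f1:.3f}")
```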
Background: Functional rehabilitation is commonly used for patients with chronic ankle instability (CAI). Digital training systems have become increasingly popular in postoperative rehabilitation; however, their effectiveness for CAI patients after modified Brostrom surgery is uncertain. Objective: This trial aimed to evaluate whether individually tailored physiotherapeutic ankle-specific training (PAST) delivered via a digital training system is noninferior to conventional face-to-face physiotherapy for CAI patients following modified Brostrom surgery in China. Methods: A two-arm, single-assessor blinded, randomized controlled trial was conducted at Huashan Hospital from January 2022 to January 2024, enrolling 84 patients. Participants were randomly allocated to either the digital training system group (DT group, n=42), receiving a 12-week individualized PAST program via the digital system, or the conventional face-to-face training group (PT group, n=42), undergoing standard physiotherapy for 12 weeks. Assessments occurred at baseline, 12 weeks, and 24 weeks postoperatively. Primary outcomes were two subscales of the Foot and Ankle Ability Measure (FAAM). Secondary outcomes included balance tests (Time-in-Balance Test, Foot-Lift Test, Star Excursion Balance Test), functional tests (ankle dorsiflexion range of motion, Side-Hop Test, Figure-8 Hop Test), and quality of life assessed by the FAAM scale. Statistical analyses included inferential statistics and bootstrapping for the incremental cost-effectiveness ratio (ICER). Results: Baseline demographic and clinical characteristics were similar between groups, except for the Foot-Lift Test. At the 24-week follow-up, the between-group differences for FAAM improvements, adjusted for baseline values, indicated noninferiority with near-zero differences: FAAM-activities of daily living (FAAM-ADL), 0.36 (95% CI: -1.01 to 1.72); FAAM-sport (FAAM-S), 1.67 (95% CI: -0.61 to 3.96). Secondary outcome measures (Time-in-Balance Test, ankle dorsiflexion range of motion, Side-Hop Test) also showed no significant differences. The average intervention costs per patient were lower in the DT group (53,551.36 CNY) compared to the PT group (59,372.04 CNY), with incremental costs of -14,450.57 CNY, leading to ICER values of -16,396.25 for FAAM-ADL and -114,130.78 for FAAM-S.
Conclusions: Individually tailored PAST delivered via a digital training system is noninferior and more cost-effective compared to conventional face-to-face training, supporting its use as a reliable rehabilitation alternative for CAI patients following modified Brostrom surgery. Clinical Trial: Chinese Clinical Trial Registry (Number: ChiCTR2300075292)
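The bootstrapped ICER mentioned in the Methods can be pictured as resampling patients within each trial arm and recomputing the ratio of incremental cost to incremental effect. The Python function below is a generic illustration under that assumption, not the trial's analysis code; the input arrays are placeholders.

```python
# Hedged sketch of a nonparametric bootstrap for the ICER (cost per unit of
# FAAM improvement). Patients are resampled within each arm so that each
# patient's cost and effect stay paired.
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_icer(cost_dt, eff_dt, cost_pt, eff_pt, n_boot=5000):
    """Return bootstrap replicates of (mean incremental cost) / (mean incremental effect)."""
    cost_dt, eff_dt = np.asarray(cost_dt), np.asarray(eff_dt)
    cost_pt, eff_pt = np.asarray(cost_pt), np.asarray(eff_pt)
    icers = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, len(cost_dt), len(cost_dt))  # resample DT arm
        j = rng.integers(0, len(cost_pt), len(cost_pt))  # resample PT arm
        d_cost = cost_dt[i].mean() - cost_pt[j].mean()
        d_eff = eff_dt[i].mean() - eff_pt[j].mean()
        icers[b] = d_cost / d_eff
    return icers
```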
Background: Acute care utilization (ACU) represents a major economic burden in oncology, and many of these events are potentially preventable. Existing models effectively predict such events. Objective: We aim to quantify the cost savings achieved by implementing a model to predict ACU in oncology patients undergoing systemic therapy. Methods: This retrospective cohort study analyzed cancer patients at an academic medical center from 2010 to 2022. We included patients and counted ACU events starting from the first day following the initiation of systemic therapy, excluding those with known death dates within the study period. Data on ACU-related expenses were gathered from Medicare claims and mapped to service codes in electronic health records, yielding average daily costs for each patient over the 180 days following the start of therapy. The exposure was an ACU event. The main outcomes were the average daily cost per patient and the total cost per patient at the end of the first 180 days of systemic therapy. Results: The study included 20,556 patients. Expense accumulation flattened earlier and more rapidly for non-ACU patients. The average daily cost per patient was US $94.62 (95% CI 92.32-96.92) with ACU and US $53.28 (95% CI 52.37-54.19) without ACU. The average total cost was US $17,031.92 (95% CI 16,616.74-17,445.09) per ACU patient and US $9,591.06 (95% CI 9,427.64-9,754.48) for those without an ACU. Based on the average number of patients in systemic therapy annually (2,177), the model predicts savings of US $910,000 in the first year, growing to US $9.46 million over six years, with cumulative savings of US $31.11 million. Conclusions: Predictive analytics can significantly reduce costs associated with ACU events, enhancing economic efficiency in cancer care. Further research is needed to explore potential health benefits.
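The savings projection above follows from the per-patient cost gap scaled by the number of patients whose acute care events a predictive model might avert. A back-of-the-envelope Python sketch using the figures reported in this abstract follows; the prevented-event fraction is a hypothetical placeholder, not a number taken from the study.

```python
# Back-of-the-envelope sketch using the reported 180-day cost figures.
total_cost_acu = 17_031.92      # mean 180-day cost per patient with an ACU event (USD)
total_cost_no_acu = 9_591.06    # mean 180-day cost per patient without an ACU event (USD)
patients_per_year = 2_177       # average patients starting systemic therapy annually

cost_gap_per_patient = total_cost_acu - total_cost_no_acu   # about 7,440.86 USD

prevented_fraction = 0.056      # hypothetical share of patients whose ACU is averted
annual_savings = cost_gap_per_patient * patients_per_year * prevented_fraction
print(f"Projected first-year savings: ${annual_savings:,.0f}")  # lands near the reported ~$910,000
```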
Background: Generative artificial intelligence (Gen AI) has shown great potential in various fields, including healthcare. However, its application in developing health education materials for patients, particularly those with coronary heart disease (CHD), remains underexplored. Traditional methods for creating these materials are time-consuming and lack personalization, which limits their effectiveness. Objective: This study aims to explore the effectiveness of Gen AI tools (ChatGPT and DeepSeek) in generating health education materials for CHD patients and to compare them with materials developed by a professional medical team. Methods: In February 2025, health education materials for CHD patients were developed using a framework designed by a professional medical team. Structured prompts were used to generate materials through two Gen AI models, ChatGPT-4o and DeepSeek R1. These AI-generated materials were compared with those created by the medical team in terms of development time, readability, understandability, actionability, and accuracy. Results: The manual development time for the materials was 14 hours, compared with 0.62 hours for ChatGPT-4o and 0.78 hours for DeepSeek R1. There were no statistically significant differences between the three groups in terms of difficult words (P = .875), simple sentences (P = .082), or the number of personal pronouns (P = .550). However, a statistically significant difference was found between manual and ChatGPT-4o materials in content word frequency (P < .027). All three groups had similar readability levels, with elementary-level simple sentence ratios and personal pronoun counts but high school-level difficult words and content word frequency. The understandability and actionability scores did not differ significantly. In terms of accuracy, there was a statistically significant difference between groups (P < .026), but multiple comparisons did not reveal significant differences (P = .065). Four out of eight experts noted accuracy issues in the Gen AI-generated materials. Conclusions: Gen AI significantly improved the efficiency of developing health education materials for CHD patients. The materials generated by ChatGPT-4o and DeepSeek R1 were comparable to the professionally written ones in terms of readability, understandability, and actionability. However, improvements in reducing difficult words and increasing content word frequency are needed to enhance readability. The accuracy of Gen AI-generated materials still poses concerns, including potential AI "hallucinations," and requires review by healthcare professionals. Gen AI holds considerable potential for generating health education materials, and future research should assess its applicability and effectiveness in real-world patient and family contexts. Clinical Trial: This study did not involve direct participation of patients, and patient-related information or data were not included, so a clinical trial number is not applicable.
Background: Traditional cancer registries, limited by labor-intensive manual data abstraction and rigid, predefined schemas, often hinder timely and comprehensive oncology research. While Large Language Models (LLMs) have shown promise in automating data extraction, their potential to perform direct, just-in-time (JIT) analysis on unstructured clinical narratives – potentially bypassing intermediate structured databases for many analytical tasks – remains largely unexplored. Objective: This study aimed to evaluate whether a state-of-the-art LLM (Gemini 2.5 Pro) can enable a JIT clinical oncology analysis paradigm by: 1) performing high-fidelity multiparameter data extraction, 2) answering complex clinical queries directly from raw text, 3) automating multi-step survival analyses including executable code generation, and 4) generating novel, clinically plausible hypotheses from free-text documentation. Methods: A synthetic dataset of 240 unstructured medical reports from stage IV non-small cell lung cancer (NSCLC) patients, embedding 14 predefined clinical variables, was used. Gemini 2.5 Pro was assessed on the four core JIT capabilities. Performance was measured by: extraction accuracy (compared to human annotation on n=40 reports and across the full n=240 dataset), numerical deviation for direct question answering (n=40 to 240 letters, 5 questions), log-rank concordance for LLM-generated vs. ground-truth Kaplan-Meier survival analyses (OS and PFS from n=80 and n=160 reports), and clinical plausibility of LLM-generated hypotheses from the full dataset (n=240 reports). Results: For multiparameter extraction from n=40 reports, the LLM achieved >99% average accuracy, comparable to a human annotator (Friedman test, p=0.139), but in significantly less time (LLM: 3.7 minutes vs. Human: 133.8 minutes). Across the full 240-report dataset, LLM multiparameter extraction maintained >98% accuracy for most variables. The LLM answered multi-conditional clinical queries directly from raw text with a relative deviation typically below 1% and rarely exceeding 1.5%, even with up to 240 letters. Crucially, it autonomously performed end-to-end survival analysis, generating text-to-R-code that produced Kaplan-Meier curves statistically indistinguishable from ground truth for OS (log-rank p=0.99) and PFS (log-rank p=0.89). Subgroup PFS analysis (driver mutation vs. wild type, n=160) was also accurately replicated (log-rank p < 0.0001), with comparable median PFS (e.g., Driver: LLM 26.0 vs. Ground Truth 28.0 months). Furthermore, the LLM generated clinically plausible hypotheses regarding biomarker–outcome associations and toxicities without specific prompting. Conclusions: LLMs can enable a paradigm shift towards dynamic, just-in-time clinical analysis and knowledge discovery directly from narrative data, offering a powerful alternative or complement to traditional registry architectures for many research and analytical needs. This suggests a future of AI-assisted, “living” oncology ecosystems capable of supporting timely, scalable, and hypothesis-driven research. Rigorous validation on real-world, multi-institutional datasets, with careful attention to ethics and data privacy, is essential before clinical implementation.
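For readers unfamiliar with the survival-analysis step the LLM was asked to automate, the following is a generic Python sketch (using the lifelines package) of a Kaplan-Meier comparison with a log-rank test, analogous to the driver-mutation versus wild-type PFS analysis described above. It is not the LLM-generated R code from the study, and the file and column names are assumptions.

```python
# Generic Kaplan-Meier / log-rank sketch for a driver-mutation vs. wild-type PFS comparison.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nsclc_cohort.csv")  # assumed columns: pfs_months, event, driver_mutation
driver = df[df["driver_mutation"] == 1]
wild_type = df[df["driver_mutation"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(driver["pfs_months"], event_observed=driver["event"], label="Driver mutation")
print("Median PFS, driver group:", kmf.median_survival_time_)
kmf.fit(wild_type["pfs_months"], event_observed=wild_type["event"], label="Wild type")
print("Median PFS, wild-type group:", kmf.median_survival_time_)

result = logrank_test(
    driver["pfs_months"], wild_type["pfs_months"],
    event_observed_A=driver["event"], event_observed_B=wild_type["event"],
)
print(f"Log-rank p-value: {result.p_value:.4f}")
```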
Background: Digital therapeutics have shown increasing potential in the management of prediabetes, offering a viable alternative to traditional interventions due to their accessibility and personalized nature. However, the effectiveness of such interventions largely depends on the theoretical underpinnings of behavioral science and the successful integration of these theories into digital platforms. There is a lack of comprehensive reviews evaluating the systematic application, intervention pathways, and practical outcomes of behavioral science within digital therapeutics for prediabetes. Objective: This scoping review examines the application of behavioral science in digital therapeutics targeting individuals with prediabetes. The goal is to inform the development of theoretically grounded, technologically adaptable, and scalable precision intervention strategies. Methods: A systematic search was conducted across PubMed, Embase, Web of Science, the Cochrane Library, Scopus, China National Knowledge Infrastructure (CNKI), VIP Database, and the Chinese Biomedical Literature Database. The search covered studies published up to March 10, 2025. Eligible studies were screened, selected, and synthesized narratively. Results: A total of 21 studies were included. Frequently adopted behavioral science theories included Social Cognitive Theory, the Theory of Planned Behavior, and the Transtheoretical Model—notably, 11 studies employed theory-informed behavior change techniques without explicitly specifying their theoretical frameworks. Digital therapeutic modalities encompass smartphone applications, communication tools, web-based platforms, app-integrated wearable devices, and guidance from health coaches. Intervention components involved goal setting, self-monitoring, real-time feedback and reinforcement, social support and peer interaction, reminders and prompts, and health education. The most commonly utilized behavior change techniques included self-monitoring of behavior, instruction on how to perform the behavior, goal setting (behavior-specific), information about health consequences, and social support (unspecified). Outcome measures assessed glycemic control, metabolic and body composition indicators, cardiovascular risk, physiological functions, behavioral and cognitive outcomes, and overall health outcomes. Conclusions: Behavioral science demonstrates significant potential in enhancing digital therapeutics for individuals with prediabetes. However, current studies face multiple challenges in practical implementation. Future research should prioritize high-quality, large-scale, multicenter randomized controlled trials to establish precise intervention models, thereby enhancing the effectiveness of digital management strategies for individuals with prediabetes.
Background: Mild cognitive impairment (MCI) is a prevalent condition among older adults, often progressing to dementia and imposing significant burdens on healthcare systems and informal caregivers. Digital health interventions, such as the Support, Monitoring and Reminder Technology for Mild Dementia (SMART4MD) tablet application, have been proposed to support people living with MCI (PwMCI) and their caregivers by facilitating daily routines and improving quality of life (QoL). However, evidence regarding their long-term cost-effectiveness remains limited. Objective: This study aimed to evaluate the 18-month cost-effectiveness of the SMART4MD tablet-based intervention, in addition to standard care, compared to standard care alone for PwMCI and their informal caregivers, from the perspective of healthcare providers in Sweden and Spain. Methods: A pragmatic, multicenter randomized controlled trial was conducted between December 2017 and September 2020 across sites in Sweden and Spain. Dyads consisting of PwMCI and their informal caregivers were randomized to receive either the SMART4MD intervention plus standard care or standard care alone. The primary outcome was health-related quality of life, measured by quality-adjusted life years (QALYs) derived from the EQ-5D-3L instrument. Secondary outcomes included disease-specific QoL (QoL-AD), cognitive function (MMSE), and caregiver burden (Zarit Burden Interview, ZBI). Cost data were collected from healthcare provider registries, and economic evaluation followed the CHEERS guidelines. Incremental cost-effectiveness ratios (ICERs) and net monetary benefit (NMB) were calculated, with sensitivity and subgroup analyses performed to assess the uncertainties. Results: A total of 345 dyads were included in the Swedish cost-effectiveness analysis. After 18 months, there were no statistically significant differences in total costs or QALYs between the intervention and control groups for PwMCI, informal caregivers, or dyads. For PwMCI, the intervention was associated with slightly higher costs (€9) and lower QALYs (–0.015) compared to standard care, resulting in the intervention being dominated by standard care (negative NMB). For informal caregivers, the intervention group showed a small, non-significant QALY gain (0.006) at higher cost (€468), with an ICER above the Swedish willingness-to-pay threshold, indicating the intervention was not cost-effective. Scenario analysis in the Spanish site showed the intervention could be cost-effective for PwMCI (ICER €3,337/QALY), but differences were not statistically significant. Notably, the intervention group showed a statistically significant improvement in MMSE scores, but no significant differences in other outcomes. Conclusions: Over 18 months, the SMART4MD intervention did not result in significant improvements in quality of life for PwMCI or their informal caregivers compared to standard care. The intervention was not cost-effective from a healthcare provider perspective, except in a scenario analysis for one Spanish site. Further research with larger sample sizes, longer follow-up, and strategies to enhance engagement and minimize dropout is warranted to clarify the potential of digital interventions in this population. Clinical Trial: ClinicalTrials.gov: NCT03325699
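As a point of reference for the economic evaluation above, net monetary benefit combines incremental QALYs and incremental costs at a chosen willingness-to-pay threshold. A small illustrative Python function follows; the threshold value is an assumption for illustration, not the trial's exact figure.

```python
# Hedged sketch of the net monetary benefit (NMB) calculation:
# NMB = willingness_to_pay * delta_QALY - delta_cost.
def net_monetary_benefit(delta_qaly: float, delta_cost: float, wtp: float) -> float:
    """Positive NMB favors the intervention at the given willingness-to-pay threshold."""
    return wtp * delta_qaly - delta_cost

# PwMCI estimates reported above: +9 EUR cost and -0.015 QALYs vs. standard care.
# A threshold of 50,000 EUR/QALY is used purely for illustration.
print(net_monetary_benefit(delta_qaly=-0.015, delta_cost=9.0, wtp=50_000))  # negative, i.e., dominated
```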
The Kumbh Mela, recognized as one of the largest human gatherings in the world, presents unique and complex challenges from an engineering and infrastructure perspective. The dynamic nature of the event, characterized by its massive scale and dense crowds, necessitates multifaceted technological and engineering solutions. This article explores engineering strategies to address critical issues such as crowd control, waste management, resource allocation, emergency response, medical assistance, disaster preparedness, security, and water monitoring. Emphasis is placed on the integration of emerging technologies—including Artificial Intelligence (AI), Internet of Things (IoT), and Augmented Reality (AR)—to optimize event management. A centralized command system is proposed to harmonize these technologies, enabling real-time monitoring and adaptive decision-making. The study highlights the importance of a layered, adaptive approach where technology is not just applied uniformly, but strategically implemented based on prior gaps and evolving requirements. These engineering solutions aim to enhance safety, sustainability, and overall efficiency in managing the complex ecosystem of the Kumbh Mela.
Background: Falls are the primary cause of fatal and non-fatal accidental injuries in older adults. The World Falls Prevention Guidelines recommend balance-challenging, functional exercise programmes as a key strategy for falls prevention but access, uptake and adherence to these programmes in community settings remain suboptimal. Keep-On-Keep-Up (KOKU), a digital, National Health Service (NHS) approved programme was co-developed with older adults and therapists, to provide progressive, evidence-based exercises and to raise awareness of fall prevention strategies. Objective: This trial aims to investigate the effectiveness and cost-effectiveness of the KOKU digital strength and balance programme for improving balance, enhancing physical function and reducing falls risk among community dwelling older adults. Methods: This is a two-arm, parallel group randomised controlled trial. A total of 196 community dwelling older adults aged 60 years and older will be randomised to either the intervention group comprising a digital strength and balance programme (KOKU) alongside standard care (strength and balance exercise advice and a falls prevention leaflet) or to a control group, receiving standard care only. Participants receiving the intervention will be asked to exercise three times per week following the tailored and progressive programme. Randomisation will take place after recruitment and baseline data collection. The trial’s primary outcome measure is balance function (Berg Balance Score) at twelve weeks post-randomisation. Secondary trial outcomes include: lower limb strength; healthcare utilisation and health-related quality of life; self-reported concerns about falling; self-reported physical activity; falls risk, pain, mood, fatigue, self-reported falls, acceptability and usability of the KOKU programme. Intention to treat analysis and a cost-effectiveness analysis will be employed for trial data analysis. Qualitative interviews and focus groups will be undertaken with around 10 care providers and 13 participants to further understand views of the intervention and trial processes. Results: This study began recruitment in July 2024 and concluded in March 2024 recruiting a total of 202 participants (102 intervention and 100 control). Following protocol publication, data compilation and analysis will be conducted, with results anticipated to be published in 2027. Conclusions: This trial will provide important evidence on whether a digital strength and balance programme can improve balance and related outcomes in older adults compared to usual care. Clinical Trial: ClinicalTrials.gov: NCT06687135
Background: Postoperative delirium (POD) in elderly hip fracture patients is associated with high morbidity and severe adverse outcomes, yet its pathogenesis remains unclear. Objective: This study aimed to develop a predictive model for POD following hemiarthroplasty in elderly patients by integrating high-throughput targeted metabolomics and machine learning, enabling early identification and intervention for high-risk individuals to improve postoperative recovery. Methods: In this prospective, observational, multi-center cohort study, 245 elderly patients undergoing hemiarthroplasty for hip fractures were enrolled. Perioperative cognitive assessments and clinical data were collected, with preoperative blood samples analyzed via high-throughput targeted metabolomics. Machine learning algorithms were employed to identify metabolomics signatures associated with POD. Differential metabolites were screened using Random Forest (RF) and Lasso regression (Least Absolute Shrinkage and Selection Operator). Predictive models were constructed using Gradient Boosting, Logistic Regression, and Random Forest. Model performance was evaluated by Receiver Operating Characteristic (ROC) curves and area under the curve (AUC). Results: Absolute quantification of 201 metabolites revealed 41 significantly differentially expressed metabolites between POD and non-POD groups (P < 0.05). RF and Lasso regression identified 16 candidate biomarkers for model construction. The Logistic Regression model demonstrated optimal performance, achieving an AUC of 0.855 (95% CI: 0.8–0.91) in the overall cohort. Upon 7:3 random partitioning into training and test sets, the model maintained robust predictive accuracy with AUCs of 0.844 and 0.856. Conclusions: Integration of preoperative metabolomics profiling and machine learning enables accurate preoperative or early postoperative prediction of POD in elderly hip fracture patients. This approach facilitates personalized risk stratification and tailored clinical management, potentially reducing complications and enhancing recovery outcomes. The model highlights the translational potential of metabolomics biomarkers combined with artificial intelligence for precision medicine in geriatric perioperative care. Clinical Trial: Chinese Clinical Trial Registry ChiCTR-CPC-15006141; https://www.chictr.org.cn/ indexEN.html
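A condensed, hypothetical Python sketch of the kind of pipeline described above follows: L1-penalized screening of metabolite features, then a logistic regression classifier evaluated with ROC AUC on a 7:3 split. It is not the study's code; the data file, column names, and hyperparameters are assumptions.

```python
# Illustrative sketch: Lasso-based screening of metabolite levels followed by
# logistic regression, evaluated with ROC AUC on a 7:3 train/test split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

data = pd.read_csv("metabolomics_pod.csv")           # assumed: metabolite columns + 'pod' label
X, y = data.drop(columns=["pod"]), data["pod"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Screen candidate metabolites with an L1-penalized linear model
scaler = StandardScaler().fit(X_train)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_train), y_train)
selected = X.columns[lasso.coef_ != 0]

# Fit the final logistic regression on the selected metabolites
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train[selected], y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test[selected])[:, 1])
print(f"Test AUC: {auc:.3f}")
```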
Background: Cataract eye surgery is the most frequently performed surgery worldwide, crucial for restoring sight in millions. The COVID-19 pandemic and an aging population have increased barriers to timely surgery. Missed preoperative instructions and poor adherence to postoperative care contribute to surgery cancellations, delays, and potential complications, adversely affecting health care efficiency and patient outcomes. Mobile digital health interventions could enhance adherence and reduce cancellations. Objective: The intent of this study was to assess the effectiveness of the Sharp Health Companion smartphone app, built using the CareKit health platform. The study aimed to compare this digital intervention with traditional printed instructions to determine its impact on medication adherence after cataract eye surgery, surgery cancellations and delays, vision outcomes, and the overall patient experience for older adults undergoing the full cataract surgery process. Methods: In this randomized controlled trial, 200 patients aged 39–86 years (mean 70 years) were enrolled from a high-volume ophthalmology practice between December 2022 and January 2024. Participants were randomly assigned to Group 1 (printed instructions with phone reminders) or Group 2 (Sharp Health Companion app). Each participant underwent their first cataract surgery on the eye with the most severe cataract. Both groups received identical perioperative care instructions and post-surgery eye medications. Data collected included patient demographics, preoperative and postoperative visual acuity, medication adherence (self-reported checklists and objective bottle weight measurements), surgery cancellations and delays, and patient satisfaction at pre-surgery, 1-day post-surgery, and 1-month post-surgery intervals. Statistical analyses included independent t-tests and chi-square tests, with significance set at P<.05. Results: Surgery completion rates were similar between the printed instruction Group 1 and the Sharp Health Companion App Group 2, indicating both methods effectively supported perioperative preparation for cataract surgery. Participants using the Sharp Health Companion app experienced significantly fewer same-day surgery delays (1.0%) compared to those using printed instructions (9.6%; P=.02), while surgery cancellation rates were similar between groups (P=.48). Both groups reported high preparedness and satisfaction, with no significant differences in preparedness (P=.39). Self-reported postoperative medication adherence was higher in Group 1 (97%) than in Group 2 (74.95%), though objective measures using eye drop medication bottle weights showed better antibiotic adherence in Group 2 (P<.05). Both groups experienced improved visual acuity and low complication rates after surgery. Conclusions: The Sharp Health Companion app effectively supported perioperative care by significantly reducing same-day surgery delays and objectively improving medication adherence among older adults undergoing cataract eye surgery. These findings highlight the potential for mobile health interventions to enhance health care delivery, operational efficiency, and patient outcomes, even among traditionally less technology-oriented populations. Future research should explore broader applications of these digital tools to improve outcomes and accessibility in other surgical disciplines. Clinical Trial: ClinicalTrials.gov NCT07028359; retrospectively registered.
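The delay-rate comparison reported above (1.0% vs. 9.6%; P=.02) is the kind of result a 2x2 chi-square test produces. The sketch below reconstructs approximate counts from the percentages purely for illustration; the true group sizes are not stated in the abstract.

```python
# Quick sketch of a chi-square comparison of same-day delay rates between
# the printed-instructions group and the app group. Counts are approximations.
from scipy.stats import chi2_contingency

#                 delayed, not delayed
printed_group = [10, 94]   # roughly 9.6% of ~104 surgeries (reconstructed)
app_group     = [1, 95]    # roughly 1.0% of ~96 surgeries (reconstructed)

chi2, p, dof, expected = chi2_contingency([printed_group, app_group])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```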
In this study, we develop a content-based medical image retrieval (CBMIR) system meticulously designed to cater to seven distinct types of brain tumors as seen in magnetic resonance brain images. Our system is tailored to assist radiologists and healthcare professionals in efficiently retrieving pertinent historical medical images, thereby substantially augmenting the quality of clinical diagnoses and the progress of endeavors in radiology workflow research. The core innovation of our study is the introduction of a state-of-the-art deep learning-based feature extraction algorithm specifically engineered for the CBMIR system. We employ GoogLeNet as the primary architecture for the deep learning network. To further enhance the system's capacity for capturing nuanced and generalized local features, we incorporate generalized-mean pooling. Additionally, we implement an embedding layer to effectively reduce image dimensions. The empirical findings of our research demonstrate the performance and robustness of the proposed CBMIR system. Our system achieves a remarkable mean average precision score of 89.16% and an equally impressive Precision@10 score of 94.08%. These metrics affirm the system's efficacy in retrieving relevant medical images. Furthermore, we seamlessly integrate the CBMIR service into a picture archiving and communication system (PACS) by successfully harmonizing two open-source projects. This milestone marks significant progress toward establishing our CBMIR system as an indispensable tool for both clinical practice and medical research, with the potential to significantly advance brain tumor diagnosis and research.
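A minimal PyTorch sketch of the retrieval encoder this abstract describes: a GoogLeNet convolutional trunk, generalized-mean (GeM) pooling, and a linear embedding layer that reduces descriptor dimensionality. The layer slicing, embedding size, and other details are assumptions, not the authors' exact architecture.

```python
# Sketch: GoogLeNet trunk + GeM pooling + embedding layer for image retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import googlenet

class GeMPooling(nn.Module):
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))  # learnable pooling exponent
        self.eps = eps

    def forward(self, x):  # x: (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return x.flatten(1)  # (N, C)

class CBMIREncoder(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        backbone = googlenet(weights=None, aux_logits=False)
        # Keep the convolutional trunk; drop GoogLeNet's own pooling, dropout, and classifier
        self.features = nn.Sequential(*list(backbone.children())[:-3])
        self.pool = GeMPooling()
        self.embed = nn.Linear(1024, embedding_dim)  # reduce descriptor dimensionality

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x)
        return F.normalize(self.embed(x), dim=1)  # unit-length retrieval vectors

# Usage sketch:
# encoder = CBMIREncoder().eval()
# vec = encoder(torch.randn(1, 3, 224, 224))  # (1, 128) embedding for similarity search
```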
Background: Cisplatin resistance remains a significant obstacle in cancer therapy, frequently driven by translesion DNA synthesis (TLS) mechanisms that utilize specialized polymerases such as human DNA polymerase η (hpol η). Although small-molecule inhibitors like PNR-7-02 have demonstrated potential to disrupt hpol η activity, current compounds often lack sufficient potency and specificity to effectively combat chemoresistance. The vastness of chemical space further limits traditional drug discovery approaches, underscoring the need for advanced computational strategies such as machine learning (ML)-enhanced Quantitative Structure-Activity Relationship (QSAR) modeling. Objective: This study aimed to develop and validate ML-augmented QSAR models to accurately predict hpol η inhibition by indole thio-barbituric acid (ITBA) analogs, with the goal of accelerating the discovery of potent and selective inhibitors to overcome cisplatin resistance. Methods: A curated library of 85 ITBA analogs with validated hpol η inhibition data was used, excluding outliers to ensure data integrity. Molecular descriptors spanning 1D to 4D were computed, resulting in 220 features. Seventeen ML algorithms—including Random Forests, XGBoost, and Neural Networks—were trained using 80% of the data for training and evaluated with 14 performance metrics. Robustness was ensured through hyperparameter optimization and 5-fold cross-validation. Results: Ensemble methods outperformed other algorithms, with Random Forest achieving near-perfect predictive performance (training MSE = 0.0002, R² = 0.9999; testing MSE = 0.0003, R² = 0.9998). SHAP analysis revealed that electronic properties, lipophilicity, and topological atomic distances were the most important predictors of hpol η inhibition. Linear models exhibited higher error rates, highlighting the non-linear relationship between molecular descriptors and inhibitory activity. Conclusions: Integrating machine learning with QSAR modeling provides a robust framework for optimizing hpol η inhibition, offering both high predictive accuracy and biochemical interpretability. This approach accelerates the identification of potent, selective inhibitors and represents a promising strategy to overcome cisplatin resistance, thereby advancing precision oncology.
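A hedged scikit-learn sketch of the core QSAR workflow described above: a Random Forest regressor trained on molecular descriptors with an 80/20 split and 5-fold cross-validation. The descriptor file, column names, and hyperparameters are assumptions; SHAP-based interpretation would be applied to the fitted model as a separate step.

```python
# Illustrative QSAR sketch: Random Forest regression on molecular descriptors.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, r2_score

data = pd.read_csv("itba_descriptors.csv")          # assumed: 220 descriptors + 'activity' target
X, y = data.drop(columns=["activity"]), data["activity"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
cv_r2 = cross_val_score(rf, X_train, y_train, cv=5, scoring="r2")  # 5-fold cross-validation
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print(f"CV R2: {cv_r2.mean():.3f}")
print(f"Test MSE: {mean_squared_error(y_test, pred):.4f}, R2: {r2_score(y_test, pred):.4f}")
```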
Background: Generative artificial intelligence (AI) is increasingly being integrated into healthcare education and clinical practice. Tools like ChatGPT and Microsoft Copilot can produce clinical summaries, prescribing advice, and patient education materials. However, their outputs often lack source transparency, contextual accuracy, and validation—raising concerns around safety, misinformation, and ethical use. Despite widespread adoption, few real-time evaluation tools exist to help frontline clinicians and educators critically assess AI-generated content. Objective: To introduce and operationalize the A.I. CARES model—a novel, interdisciplinary framework designed to help healthcare professionals, educators, and students evaluate the quality, safety, and ethical soundness of AI-generated clinical content. Methods: This article presents the A.I. CARES model, developed through a review of the current literature, expert practice standards, and clinical education needs. The model includes seven evaluative domains: Accuracy, Inference Speed, Context Understanding, Application Readiness, Reliability, Ethical Use, and Safety. Each domain is supported by guiding questions that can be used in real time to assess generative AI outputs. An evidence matrix based on the Johns Hopkins Nursing Evidence-Based Practice (JHNEBP) tool was used to appraise relevant literature. The model’s application is demonstrated through implementation use cases in both educational and clinical environments. Results: The A.I. CARES model provides a structured, practical method for reviewing AI-generated clinical content across prescribing, documentation, decision support, and patient education use cases. It supports evidence-based reasoning, AI literacy, and professional accountability. When integrated into academic and clinical workflows, the model can reduce over-reliance on AI, mitigate ethical risks, and improve content safety. An implementation-ready one-page reference guide and a domain-specific checklist are included as appendices. Conclusions: The A.I. CARES model addresses a critical gap in the responsible use of generative AI in healthcare by equipping non-technical end-users with a scalable, interdisciplinary tool for content evaluation. It promotes clinical safety, ethical transparency, and contextual integrity in the face of growing AI integration. As AI evolves, the A.I. CARES model provides a foundation for trusted use—from black box to bedside. Clinical Trial: N/A
Background: The linea alba is the central seam connecting the fascia that covers the rectus abdominis (RA) muscles, and it also serves as the central insertion point for the RA and the three major abdominal muscles: the transversus abdominis, external obliques, and internal obliques. Along the length of the RA, the inter-recti spacing can vary from 2-3 cm in width (up to 20 cm in severe cases) and from 2 cm to 5 cm in length. Objective: To describe a study protocol comparing the effectiveness of core stability exercise, static abdominal contraction, and yogic diaphragmatic breathing in reducing the inter-recti gap in postnatal women with diastasis recti abdominis. Methods: A randomized controlled trial with block randomization will be used. Postnatal women with diastasis recti abdominis will be assigned to an experimental or comparison group. Results: A reduction in the gap between the recti muscles of postnatal women is anticipated as a result of the intervention. Measurements will be taken before and after the 8-week intervention period. Conclusions: This research will provide evidence on whether these exercises can support more personalized treatment plans for postpartum women. The study would provide valuable insights into the comparative efficacy of these methods, helping healthcare professionals make informed decisions about postpartum rehabilitation programs. Clinical Trial: CTRI/2025/06/088140
Background: Qualitative research provides essential insights into human behaviors, perceptions, and experiences in health sciences. The Consolidated Criteria for Reporting Qualitative Research (COREQ), published in 2007 and endorsed by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network, substantially advanced transparency of qualitative research reporting. However, the recent rapid integration of large language models (LLMs) into qualitative research introduces novel opportunities and methodological challenges that existing guidelines do not address. LLMs are increasingly applied to tasks ranging from research design and data processing to data analysis and interpretation, and even direct interaction (“conversing”) with qualitative data. Yet their probabilistic nature, dependence on underlying training data, and susceptibility to hallucinations necessitate dedicated reporting to ensure transparency, reproducibility, and methodological validity. Objective: This protocol outlines the development of COREQ+LLM, an extension to the COREQ checklist, to support transparent and responsible reporting of LLM use in qualitative research. This study aims to: (1) identify current applications of LLMs in qualitative research; (2) assess how LLM use in qualitative healthcare studies is reported in published studies; and (3) develop and refine reporting items for COREQ+LLM through a structured consensus process among international experts. Methods: Following EQUATOR Network guidance for reporting guideline development, this study comprises four main phases. Phase 1 is a systematic scoping review of peer-reviewed literature from January 2020 to April 2025, examining the use and reporting of LLMs in qualitative research. The scoping review protocol was registered with the Open Science Framework on June 6, 2025 (https://osf.io/bk42y) and will adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Phase 2 will employ a Delphi process among an interdisciplinary international panel of experts to reach consensus on candidate items for inclusion in the COREQ+LLM checklist. Phase 3 includes pilot testing, and phase 4 covers publication and dissemination. Results: As of May 2025, the Steering Committee has been established, and the initial search strategy for the scoping review has identified 5,049 records, with 4,201 remaining after duplicate removal. Title and abstract screening is underway and will inform the initial draft of candidate checklist items. The COREQ+LLM extension is scheduled for completion by December 2025. Conclusions: The integration of LLMs in qualitative research requires dedicated reporting guidelines to ensure methodological rigor, transparency, and interpretability. COREQ+LLM will address current reporting gaps by offering specific guidance for documenting LLM integration in qualitative research workflows. The checklist will assist researchers in transparently documenting LLM use, support reviewers and editors in evaluating methodological quality, and foster trust in LLM-supported qualitative research. By December 2025, COREQ+LLM will provide a rigorously developed tool to enhance the transparency, validity, and reproducibility of LLM-supported qualitative studies.
Background: Women and girls around the world face significant barriers to participating in healthcare decisions, particularly in low- and middle-income countries. Despite growing interest in shared decision-making (SDM), little is known about its implementation in these countries, and no rigorous assessment of the decision-making needs of women and girls in these contexts has been conducted. There is also a lack of SDM training and decision-support tools (DSTs) specifically designed for this population. Objective: We aim to co-develop an SDM resource platform, including locally relevant DSTs and instructional materials, to support SDM with women and girls in Brazil, Cameroon, Rwanda, and Senegal. Methods: We will conduct a four-phase participatory research project and will use the Gender-based Analysis Plus (GBA Plus) tool to support inclusive participatory research throughout the study. First, we will follow the Integrated Knowledge Mobilization framework to form a local steering committee in each country, including patients and community representatives, health and social service professionals, and decision makers. Second, we will follow the Ottawa Decision Support Framework to conduct individual interviews or focus groups to identify the decision-making needs of women and girls. We will include 20 women and girls, along with their family members, 15 health and social service professionals, and 15 representatives of community-based organizations in each country (n = 150). Third, the Double Diamond human-centered design framework will be used to co-develop the SDM resource platform. Finally, we will assess the scalability of the resource platform using the Innovation Scalability Self-Administrated Questionnaire (ISSaQ 4.0). We will report our study using the Standards for Reporting Qualitative Research (SRQR) guideline and the Guidance for Reporting Involvement of Patients and the Public (GRIPP-2). Results: The local steering committees will ensure equitable partnerships and a co-construction approach. The SDM resource platform, adapted to the contexts of Brazil, Cameroon, Rwanda, and Senegal, should promote more inclusive approaches to SDM worldwide. The scalability assessment will help us reflect on expanding the impact of the platform to other regions. Conclusions: Ultimately, more women and girls from low- and middle-income countries will be involved in healthcare decisions, and more clinical teams will be able to integrate SDM into their care practices.
The emergence of generative AI (GenAI) in clinical settings - particularly in health documentation and communication - presents a largely unexplored but potentially transformative force in shaping placebo and nocebo effects. These psychosocial phenomena are especially potent in mental health care, where outcomes are closely tied to patients' expectations, perceived provider competence, and empathy. Drawing on a conceptual understanding of placebo and nocebo effects and the latest research, this Viewpoint argues that GenAI may amplify these effects, both positive and negative. Through tone, assurance, and even the rapidity of responses, GenAI-generated text - whether co-written with clinicians or peers, or fully automated - could influence patient perceptions in ways that clinicians may not yet fully anticipate. When embedded in clinician notes or patient-facing summaries, AI language may strengthen the expectancies that underlie placebo effects - or, conversely, heighten nocebo effects through subtle cues, inaccuracies, or a loss of human nuance. This article explores the implications of AI-mediated clinical communication, emphasizing the importance of transparency, ethical oversight, and psychosocial awareness as these technologies evolve.
Background: An estimated 18 million people worldwide are living with rheumatoid arthritis (RA) (1). The disease carries significant morbidity, including an increased risk of fractures compared with the general population, because the chronic inflammation associated with RA can lead to reduced bone mineral density (2).
Disease severity and progression are now better controlled thanks to the rapid evolution of anti-rheumatic medications. These medications are broadly categorized into two main types: non-steroidal anti-inflammatory drugs (NSAIDs) and disease-modifying antirheumatic drugs (DMARDs) (3).
Patients with RA have an increased risk of post-operative complications after orthopaedic surgery because of the disease's chronic impact on bone, as well as the use of immunomodulatory medications that may interfere with bone healing (4). Drugs with immunosuppressive action can lead to complications with both wound healing and bone union during elective osteotomy. This is particularly important in foot and ankle surgery, where corrective osteotomies are commonly performed and the risk of wound breakdown is high. Decisions to continue or suspend these medications need to be based on evidence weighing the risk of post-operative complications against that of a disease flare.
There is an abundance of literature highlighting the adverse effects of NSAIDs on bone healing and fracture union, but there is little robust evidence on the use of DMARDs in orthopaedic surgery (3, 5). Patients requiring surgery in trauma or elective settings will often be on one or more of these medications. It is therefore vital to understand the effect of DMARDs on postoperative outcomes so that recovery and rehabilitation can be improved and, if required, the medications suspended in the perioperative period (6). Current guidance published by the American College of Rheumatology was formulated for elective hip and knee surgery with a focus on preventing wound complications, typically restarting DMARDs at the 14-day mark once the wound has healed. These guidelines do not consider the time to bone union, which is typically 6 weeks.
The currently available research consists largely of in vivo or in vitro studies, with few assessing the clinical implications of DMARDs on bone healing in the rheumatoid patient. Objective: This literature review aims to synthesise and evaluate current evidence on the impact of DMARDs on bone healing. Methods: A literature search was conducted on PubMed, Embase, and Medline. An initial search examined the effect of DMARDs or anti-rheumatic medications on bone healing in foot and ankle osteotomies, using the keywords ‘DMARDs OR anti-rheumatic medications’, ‘foot and ankle surgery OR osteotomy’, AND ‘bone healing OR bone union’. It yielded only four original papers for review after removing duplicates, case reports, conference abstracts, and non-English language material. Owing to the limited data in this field, we expanded our search and question to the effects of DMARDs on bone union in elective and trauma patients.
The keywords were subsequently refined to ‘rheumatic disease OR rheumatoid arthritis’ AND ‘anti-rheumatic medication OR disease-modifying anti-rheumatic medications OR DMARDs’ AND ‘fracture healing OR bony union OR malunion OR non-union’. As there was still a limited number of original studies on this theme, we decided to include any study design apart from case studies, which were excluded because of their potential bias and limited generalisability. We only included papers written in English and published within the last 50 years. We selected papers that looked specifically at either DMARDs or methotrexate and their effect on bone healing, fracture union, or bone metabolism. The search returned 80 papers for review. After applying the above inclusion and exclusion criteria with two independent reviewers, a total of 9 papers were included for narrative analysis. Results: The effect of methotrexate (MTX) on bone appears to be dose-dependent. Satoh et al showed that new bone formation in a fracture gap in rats did not differ significantly between low-dose MTX and control groups (7). However, there was a marked reduction in bone formation in the high-dose MTX group, particularly de novo periosteal bone formation at the fracture gap site in the first week. The study showed no difference between the three groups for intramedullary bone formation or chondroid tissue formation. A key limitation of this study was that it only assessed bone formation rather than bone strength or mineral density. Several other animal studies support the finding that high-dose MTX has a greater adverse effect on bone metabolism than low-dose MTX (8, 9, 10).
Pountos et al’s systematic review analysed 70 in vivo and animal studies on the effect of MTX on fracture healing (11). The review revealed contradictory evidence. Some in vitro studies concluded that MTX reduces mitochondrial activity, bone cell metabolism, and turnover, while other studies showed no effect on osteoblast proliferation, a crucial step in bone healing (3). Some studies also showed a reduction in biochemical markers of osteogenesis, such as alkaline phosphatase (ALP), while in others ALP increased (12).
In clinical studies, the impact of DMARDs on bone healing has been studied in patients undergoing elective spinal surgery (13). One study looked at bone fusion rates after craniovertebral junction surgery and found that those who continued DMARDs showed higher radiographic fusion rates than those who discontinued (92.8% vs 75%, P = 0.276). However, the difference was not statistically significant, owing to the small total sample size of 30 patients (14). Guadiani et al studied revision spinal surgery rates for patients using DMARDs and TNF-alpha inhibitors compared with a control group on neither medication. The reoperation rate within 1 year was 19% for the TNF-alpha inhibitor group and 11% for the DMARD group, compared with 6% for the control group. According to the Cox proportional hazards model they used, the TNF-alpha group had a 3.1-fold increased risk compared with the control group (95% CI 1.4-7.0), while the DMARD group showed a 2.2-fold increase (95% CI 0.96-5.3). The reasons for revision surgery were infection (40%) or other causes (60%), such as failure to fuse, in the DMARD group, while in the TNF-alpha inhibitor group the split was 47% infection and 53% other causes (15). This implies a higher rate of infection in the TNF-alpha inhibitor cohort. The authors concluded that patients who continued DMARDs, especially TNF-alpha inhibitors, within 90 days before surgery appeared to have a higher rate of revision spinal surgery than those who discontinued.
In 2017, the American College of Rheumatology and the American Association of Hip and Knee Surgeons (ACR/AAHKS) performed an extensive meta-analysis of the literature on the use of DMARDs in orthopaedic surgery (10). They advised that conventional DMARDs, which include methotrexate, leflunomide, hydroxychloroquine, and sulfasalazine, can be continued in orthopaedic surgery as they did not lead to adverse post-operative outcomes. However, they recommended holding biologics for two weeks before surgery because of an increased risk of poor wound healing. The effect on bone healing itself was not studied in this review. There is, in fact, very limited data on the effect of biologic DMARDs on bone healing, but in an in vivo study they have shown an inhibitory effect on osteoblast proliferation (3). This is particularly true of TNF-alpha inhibitors such as infliximab, which showed a reduction in overall osteoblast cell numbers, suggesting they could interfere with bone repair and remodelling (3).
Furthermore, the 2021 critical analysis review by Saunders et al on the perioperative management of antirheumatic drugs in foot and ankle surgery also concluded that conventional DMARDs are generally safe to use throughout the perioperative period, while biologics should typically be held before surgery (16). Conclusions: Our narrative review has highlighted an important literature gap in the field of DMARDs and bone healing, whether in a traumatic or elective setting. Much of the original research comprises in vivo or animal studies, and although these show statistically significant results, they cannot accurately predict human outcomes because of significant differences in physiology and biology (3,8,9,12). Clinical studies are even fewer, and those conducted so far have included small study populations. Moreover, they are dated and often do not examine the latest anti-rheumatic drugs. For instance, an important study we included, Elia et al, enrolled only 30 patients, which reduced the statistical power of the results (14). All the clinical studies we have included so far concern elective procedures such as spinal surgery or foot and ankle surgery (13,14,16,17). To our knowledge, there are currently no randomised controlled trials studying the effect of DMARDs on bone healing, in either a trauma or elective setting.
Nevertheless, a greater number of publications are available on the effect of DMARDs on wound healing in orthopaedic surgery. This is of significant consequence, as surgical site infections, especially when involving the bone, can lead to impaired fracture healing, causing malunion or non-union (18). Current evidence suggests MTX has no adverse effect on wound healing in orthopaedic surgery and can be safely continued pre- and postoperatively (10,11). However, biologics are recommended to be held perioperatively because of the increased risk of surgical site infections and impaired wound healing. The current guidance is to schedule surgery at the end of the dosing cycle (10). Some hospital trusts have advised restarting only once most of the wound has healed (19). Considering a wider evidence base, biologics have shown an increased risk of serious infection, so there is certainly a research gap to explore on how these medications affect patient outcomes in orthopaedic surgery (20).
Any decision to stop anti-rheumatic medications in the preoperative period should be carefully considered, with patients fully informed of the risks and benefits of stopping such therapy. Patients on DMARDs tend to have more severe disease, and withholding them may result in disease flares, which can cause significant morbidity. Flares may lead to joint swelling, stiffness, pain, and increased cardiovascular risk (21). This can ultimately impair rehabilitation following major surgery, predisposing the patient to further post-operative complications such as venous thromboembolism, hospital-acquired infections, or a reduced functional baseline from a prolonged hospital stay (22).
Grennan et al found that those who discontinued MTX two weeks before and after surgery showed a higher rate of flare-ups than those who continued their medication. Patients who continued MTX before surgery had even fewer post-operative complications than the control group that was not on any MTX (23). The 2017 American College of Rheumatology study also concluded that continuing glucocorticoids and DMARDs perioperatively for hip and knee arthroplasty resulted in better function, a greater range of motion, and improved post-operative pain (10).
Therefore, we advise that the decisions around anti-rheumatic medications in patients undergoing orthopaedic surgery should be determined on an individual basis, with consideration given to their disease severity, functional baseline, and risk factors for poor bone healing, as we currently do not have enough evidence to suggest that they should be held.
Our literature review, however, has some limitations. Firstly, we used specific terminology to capture the effect of anti-rheumatic medications on bone healing, so we may have missed articles containing this information that did not include our keywords. Secondly, so little data are available on our topic that the papers selected for review have small study populations or no controls. None of the included studies used randomisation. Results were therefore interpreted with caution, as there is potential for bias and reduced generalisability. Finally, many of the papers we included were animal studies, so their findings cannot be applied directly to humans.
In conclusion, the effect of DMARDs on bone union remains largely unstudied, particularly in human studies and large randomised controlled trials. Our literature review suggests that MTX may be safe to continue before orthopaedic surgery, as it does not appear to affect bone union at the low doses used in RA. However, biologics should be withheld, as there is evidence to suggest they can increase the risk of infection or wound breakdown. The effect of biologics specifically on bone healing has not, to our knowledge, been studied. Given that millions of patients suffer from rheumatoid arthritis, and many will at some point undergo a joint procedure, it is important to further understand the clinical impact of DMARDs on bone so that evidence-based guidance can be recommended. Until then, we advise a multi-disciplinary approach to determining which anti-rheumatic medications to withhold before any orthopaedic surgery.
Background: Despite the growing use of digital platforms for sexual health education, many tools fail to meet the needs of LGBTQ+ adolescents, who often lack access to inclusive, affirming resources. Artificial intelligence (AI)–enabled chatbots have emerged as promising tools to address these gaps, but concerns remain around bias, usability, and trustworthiness, particularly for queer and trans youth. Objective: This paper describes the development and implementation of an academic-nonprofit partnership between Northwestern University and Planned Parenthood Federation of America (PPFA) to adapt Roo, PPFA’s AI-powered sexual health chatbot, for LGBTQ+ teens. Methods: As part of a larger hybrid effectiveness-implementation trial, the research team collaborated with PPFA to create a customized instance of Roo and gathered feedback from a Youth Advisory Council (YAC) of LGBTQ+ teens via a private Discord server. Using a participatory, research-through-design approach, we analyzed structured qualitative feedback with rapid qualitative analysis to identify content gaps, usability concerns, and trust-related issues. Results: Participants expressed both skepticism and curiosity about AI’s role in delivering sexual health information, offering critical insights on the chatbot’s language, trustworthiness, and relevance. Teens identified key limitations in Roo’s inclusivity, tone, and interface, particularly around trans-specific content, conversational depth, and stigma reduction. These findings informed targeted content updates, interface refinements, and transparency improvements, implemented by PPFA to enhance Roo for broader use. Conclusions: Academic-nonprofit collaborations can leverage participatory methods to enhance digital health tools in real-world contexts. LGBTQ+ teens served not only as testers but as co-designers, shaping the chatbot’s evolution and surfacing broader lessons about trust, AI literacy, and health equity. This partnership offers a scalable model for integrating community voice into the development, evaluation, and implementation of inclusive, AI-enabled health technologies.
Background: Bangladeshi adolescents face significant challenges accessing relevant sexual and reproductive health (SRH) information, with the added burdens of cultural taboo, limited accessibility, and poor communication channels. Traditional adolescent-friendly approaches have shown limited effectiveness in addressing these challenges. In response, Mukhorito was developed as a peer-led, mobile-based digital platform to facilitate SRH education and communication among ninth-grade students. Objective: This study explored the feasibility and constraints of piloting the Mukhorito app to enhance adolescent SRH education in Bangladesh. It also sought to determine the app's self-reported usage, usability, and effect on knowledge and peer communication, as well as to identify implementation and adoption challenges. Methods: A qualitative design was applied in the context of a broader mixed-methods study. Data were collected through six in-depth interviews (IDIs), three key informant interviews (KIIs), and one focus group discussion (FGD) with 19 participants, including students, peer leaders, teachers, and government representatives, across three secondary schools in the Feni district. Thematic analysis was conducted using NVivo software following Braun and Clarke's guidelines. Results: The Mukhorito app was received positively for its structured material, interactivity, and easy peer-to-peer communication. Almost all participants reported improved SRH awareness and leadership skills, as well as reduced stigma when discussing delicate topics. The main challenges were limited smartphone access, poor internet connectivity, and affordability. Potential ways to improve the app’s usability include integrating the application into the school curriculum, improving offline functionality, and adding visual material such as short dramas or videos. Conclusions: Mukhorito shows strong potential as a culturally relevant digital SRH education tool for Bangladeshi adolescents. The app enabled knowledge gain and openness in SRH discourse. Alignment with national health programs and enhanced app functionality may promote greater and more sustainable adolescent health.
Background: Alerts, a key feature of Electronic Health Record (EHR) systems, are intended to improve patient safety by providing timely information at the point of care. However, many EHR systems generate excessive alerts that are not immediately clinically relevant and that contribute to alert fatigue. Despite growing recognition of alert fatigue as a safety concern, clinicians’ experiences of alert fatigue and the broader system-level factors that contribute to it are not well understood. Objective: To use a human factors approach to comprehensively explore how alert fatigue is experienced by doctors, and to identify its contributing factors, perceived influences and impacts, and strategies to address it in practice. Methods: Semi-structured interviews were conducted with junior doctors working in hospitals across Australia. Data were thematically analysed using a hybrid inductive and deductive approach, informed by the Systems Engineering Initiative for Patient Safety (SEIPS) and an information processing model. Results: Twenty doctors were interviewed. Alert fatigue was described as occurring at different stages of information processing, including when alerts were not detected, were superficially processed using mental shortcuts, or required excessive cognitive effort to interpret. When alerts were not detected or thoroughly processed, participants more often perceived impacts on patient safety and care quality, whereas when alerts required excessive cognitive effort, interruptions, frustration, and loss of time and effort were frequently reported. Contributors to alert fatigue included technology, task, and environmental factors, such as the interface design and clinical relevance of alerts, and information overload from system alerts as well as other alerts and tasks. Alert fatigue was described as being experienced differently depending on provider characteristics, such as experience with and knowledge of alerts, mood, and personality, and on organisational factors including culture, shift type, and time of day. Conclusions: Alert fatigue is not a binary concept but is experienced on a continuum and influenced by interacting individual, technical, and contextual factors. Addressing alert fatigue requires tailored interventions that target its different causes and outcomes. These could include technical and design improvements, changes to organisational practices, and individual customisation to reduce experiences of fatigue and accommodate differences in clinicians’ needs.
Background: The growing trend of integrated healthcare services within physician groups has improved care delivery by enhancing convenience, efficiency, and care coordination. However, it has also raised concerns about financial incentives potentially driving overutilization. Objective: We examine the impact of distribution method (traditional third-party referral versus physician-managed via Rx Redefined technology platform) on the quantity of urinary catheters supplied to Medicare patients. Methods: We analyzed utilization patterns for urological catheters (HCPCS codes A4351, A4352, and A4353) using 2021 Medicare claims data. We identified 54 urology specialists in core metropolitan areas who were enrolled in the Rx Redefined platform throughout 2021 and compared their utilization patterns with unenrolled urologists in the same regions. For enrolled physicians, who managed approximately 40 percent of their prescriptions through the platform, we also compared utilization between physician-managed and third-party distribution methods. Results: For catheter services A4351 and A4352, when distribution was managed by third parties, we found no significant differences in utilization (i.e. units supplied) between enrolled and unenrolled physicians. However, physician-managed distribution through Rx Redefined resulted in significantly lower utilization compared to third-party vendor distribution by non-enrolled physicians (p < 0.001 for both codes). In paired analysis of enrolled physicians, direct management showed significantly lower utilization compared to third-party distribution for A4351 (p = 0.014), but this difference was not significant for A4352 (p = 0.62). Conclusions: These findings demonstrate that physician-managed catheter distribution does not lead to increased utilization. In fact, for certain catheter types, physician-managed distribution may result in lower utilization compared to traditional third-party referral methods, suggesting a potential reduction in oversupply and improved efficiency.
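To make the paired analysis concrete, a minimal sketch is shown below; the abstract does not name the specific test used, so a Wilcoxon signed-rank comparison on hypothetical per-physician unit counts is assumed, and all values are illustrative.

    # Minimal sketch of a paired comparison of per-physician utilization under
    # the two distribution methods (the exact test used in the study is not
    # stated; the Wilcoxon signed-rank test and the counts below are assumptions).
    import pandas as pd
    from scipy import stats

    df = pd.DataFrame({
        "physician_id": [1, 2, 3, 4, 5],
        "units_physician_managed": [120, 95, 143, 88, 110],   # hypothetical
        "units_third_party":       [150, 100, 160, 120, 115], # hypothetical
    })

    # Same physicians compared under the two distribution methods.
    stat, p_value = stats.wilcoxon(df["units_physician_managed"],
                                   df["units_third_party"])
    print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.3f}")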
Background: Current methods for analyzing and matching shapes frequently struggle to distinguish subtle structural variations, particularly under conditions involving noise, deformation, or articulations. Existing algorithms often lack robustness and flexibility, relying heavily on local curvature, which may inadequately represent complex structural details essential for precise shape classification and matching. Objective: To develop a robust and versatile three-tier shape representation pipeline that enhances intra-group similarity and amplifies inter-group differences, thereby providing an invariant representation resilient to noise, articulations, and mechanical deformations. Methods: We propose a novel approach comprising three steps: (1) a manifold-reduction step employing stress minimization to neutralize shape deformations, (2) application of the eccentricity transform (Ecc) to incorporate internal structural information, and (3) integral invariants (II) for robust boundary description. This tripartite framework synergizes differential geometry, topology, and scale-space theory and is rigorously evaluated on standard datasets such as the Kimia database. Results: Our method significantly outperformed existing shape-matching algorithms, demonstrating notably improved intra-group matching accuracy and effectively enhancing inter-group discrimination. The approach provided substantial resilience against noise, articulations, and bending-induced shape distortions, verified through extensive experimentation and statistical evaluation. Conclusions: The proposed three-tier invariant representation delivers a robust and mathematically sound pipeline suitable for precise shape matching and classification tasks. Its resilience to common shape-analysis challenges makes it highly suitable for practical applications in computational anatomy, biomechanics, medical imaging, and computer-aided geometric design. Clinical Trial: Not applicable.
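As an illustration of the first tier only, the stress-minimization step can be sketched with multidimensional scaling on a precomputed intra-shape distance matrix; the eccentricity-transform and integral-invariant tiers are not reproduced, and the point set below is synthetic rather than taken from the Kimia database.

    # Minimal sketch of the manifold-reduction (stress-minimization) tier,
    # assuming a shape sampled as points with a precomputed pairwise distance
    # matrix; the eccentricity transform and integral invariants are omitted.
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    points = rng.normal(size=(50, 2))                  # synthetic shape samples
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

    # Metric MDS minimizes the stress between the input dissimilarities and the
    # embedded distances, yielding a canonical form in which deformations of the
    # original sampling are neutralized before further description.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    canonical_form = mds.fit_transform(dists)
    print(canonical_form.shape)                        # (50, 2)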
Background: Diabetic retinopathy (DR) is a serious complication of diabetes that affects the retinal blood vessels, leading to vision impairment and, in severe cases, blindness. Unfortunately, DR is irreversible, and available treatments can only help preserve existing vision rather than restore lost sight. Objective: Early detection is pivotal, yet traditional diagnostic methods, which depend on retinal fundus imaging and ophthalmologists' expertise, face significant challenges, including high costs, long detection times, and the risk of misdiagnosis, all of which may delay treatment and increase the likelihood of blindness. Moreover, existing diagnostic modalities exhibit suboptimal efficacy in accurately detecting and managing diabetic macular edema (DME), a predominant etiological factor in vision impairment. Methods: Recent advancements in artificial intelligence (AI) and deep learning (DL) have significantly improved the detection and classification of DR. DL, particularly in medical image analysis, has demonstrated remarkable sensitivity, specificity, F1-score, and AUC. Results: Techniques such as transfer learning, transformer-based learning, and customized DL models have further enhanced DR detection using color fundus images. These state-of-the-art methods offer a more accurate, faster, and cost-effective alternative to traditional approaches. Conclusions: This article reviews recent developments in DL-based DR detection, discusses existing challenges, and provides recommendations for future improvements. Strengthening AI-driven detection systems is essential to reducing vision loss among diabetic patients and ensuring more reliable, accessible, and early diagnosis of DR. Clinical Trial: Not applicable.
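The transfer-learning approach mentioned above can be sketched generically; the ResNet50 backbone, 224x224 input size, and binary referable/non-referable target below are assumptions chosen for illustration, not a specific model from the reviewed literature.

    # Minimal transfer-learning sketch for fundus-image DR screening (the
    # ResNet50 backbone, input size, and binary target are assumptions; the
    # reviewed studies use a variety of architectures and label schemes).
    import tensorflow as tf

    base = tf.keras.applications.ResNet50(weights="imagenet",
                                          include_top=False,
                                          input_shape=(224, 224, 3))
    base.trainable = False                      # freeze pretrained features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # referable DR vs not
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    model.summary()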
Background: Diabetic foot ulcers (DFU) are a severe complication that can increase morbidity and mortality in diabetic patients. Effective management of DFU requires wound assessments that are both swift and accurate, a challenge that persists in current clinical practice. Objective: This study explores the application of AI-based assessment models in evaluating DFU conditions, aiming to enhance detection accuracy, transparency in medical decision-making, and the effectiveness of real-time patient monitoring and care. Methods: A scoping review methodology based on the PRISMA-ScR framework was used to identify, select, and summarize literature on the use of AI in DFU assessment. Literature was sourced from PubMed, ProQuest, and Scopus using keywords such as diabetic foot ulcer, artificial intelligence, and wound assessment. Results: AI models demonstrate high accuracy in risk prediction, detection, segmentation, and classification of DFU, with some models achieving up to 99% accuracy. Smart applications and deep learning-based systems have proven to be reliable and comparable to clinical evaluations, enhancing efficiency and transparency in DFU management. Conclusions: The development and application of AI-based models in DFU assessment and monitoring improve diagnostic effectiveness and accuracy while supporting more transparent and timely medical decisions.
Background: Acute aortic syndrome is a rare but life-threatening clinical syndrome that can rapidly progress to aortic rupture and death. Symptoms are vague and non-specific, making it challenging to identify. Objective: We aimed to evaluate prediction models to help clinicians identify acute aortic syndrome based on the data available at the time of presentation. Methods: We combined two existing national datasets of signs and symptoms gathered from patients with and without acute aortic syndrome, from over 30 UK healthcare centres (n = 6,168). Sample incidence was 10.1% (n = 634) against a symptomatic population incidence of 0.26%. We fitted 4,776 prediction models to an 80% ‘training’ split of the data and then tested them on the remaining 20% ‘test’ split. Sensitivity, overall net benefit, and informedness (using Youden’s J) were calculated to represent the perspectives of the clinician, the patient, and the decision modeller. Results: Most models showed little to no sensitivity or informedness (< 0.1) and negative overall net benefit. Models with high sensitivity (> 0.8) had a range of informedness values, including 0. The only models with a positive overall net benefit all used the same rule that labelled everyone as having acute aortic syndrome. These “yes to all” models had a sensitivity of 100%, an overall net benefit of only 10%, and an informedness value of 0. Conclusions: The perspectives of the clinician, the patient, and the decision modeller need to be considered when developing prediction models for decision support. No model performed well on all evaluation statistics. Difficult trade-offs are revealed, which are exacerbated for rare and severe conditions such as acute aortic syndrome.
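The three evaluation perspectives can be made explicit with a short sketch using standard definitions; the threshold probability passed to the net-benefit calculation is an illustrative assumption (roughly the symptomatic population incidence), and the labels are simulated rather than study data.

    # Minimal sketch of sensitivity, informedness (Youden's J), and net benefit
    # under standard definitions; the threshold probability and simulated labels
    # are illustrative assumptions, not the study's exact computation.
    import numpy as np

    def evaluate(y_true, y_pred, threshold_probability=0.0026):
        y_true = np.asarray(y_true, dtype=bool)
        y_pred = np.asarray(y_pred, dtype=bool)
        n = y_true.size
        tp = np.sum(y_pred & y_true)
        fp = np.sum(y_pred & ~y_true)
        fn = np.sum(~y_pred & y_true)
        tn = np.sum(~y_pred & ~y_true)
        sensitivity = tp / (tp + fn)                       # clinician's view
        specificity = tn / (tn + fp)
        informedness = sensitivity + specificity - 1       # decision modeller's view
        pt = threshold_probability
        net_benefit = tp / n - (fp / n) * (pt / (1 - pt))  # patient's view
        return sensitivity, informedness, net_benefit

    # A "yes to all" rule has sensitivity 1 and informedness 0 by construction.
    y_true = np.random.default_rng(1).random(10_000) < 0.101   # ~10.1% prevalence
    print(evaluate(y_true, np.ones_like(y_true)))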
Background: Tetraplegia, often resulting from cervical spinal cord injury (SCI), may lead to significant motor and sensory loss, severely impacting independence and quality of life. Assistive technologies (ATs), such as wheelchair-mounted robotic arms (WMRAs), offer potential to enhance autonomy in daily living. However, adoption remains limited due to high costs, complex controls, and insufficient end-user involvement. Robust evidence on their real-world effectiveness, particularly post-hospitalisation, is still lacking. Objective: This study explores the real-life use of a WMRA for individuals with tetraplegia. It aims to evaluate its support in activities of daily living (ADLs), assess usability and satisfaction, and conduct a preliminary health economic analysis comparing cost-effectiveness and quality of life outcomes with standard care. Methods: This study will be conducted in post-hospitalisation settings in Switzerland. Up to 15 participants with upper limb impairments (SCI C0–Th1, AIS A–D) using powered wheelchairs will be recruited. They will use the robotic arm for six consecutive days. An equal number of participants will be recruited for the economic analysis group. A mixed methods approach will combine quantitative data collected via standardised questionnaires (PSSUQ, NASA-TLX, EQ-5D-5L, VAS, aCOMP, CSSRI-EU) at baseline and post-intervention with qualitative feedback gathered through an informal questionnaire and semi-structured interviews. Feasibility will be assessed through task performance and health economic analysis. The latter will include quality-adjusted life years (QALYs), which quantify quality and length of life, and modelling of the Incremental Cost-Effectiveness Ratio (ICER), which compares the cost-effectiveness of the intervention based on cost per QALY gained. Results: Recruitment was initiated in April 2025, with the enrolment period expected to conclude in December 2025. As of June 2025, no participants have been enrolled. We expect the robotic system to reduce caregiver time and associated costs while enhancing autonomy, quality of life, and mental well-being. Potential technical and recruitment challenges have been identified and mitigation strategies planned. By evaluating real-life use of a WMRA, this study may support the broader adoption of assistive robotic technologies. Conclusions: This research offers key insights into the feasibility, usability, and economic value of robotic assistance for individuals with tetraplegia and will help inform future development and scale-up studies.
Background: We present a digital phenotyping protocol designed to continuously and objectively measure behavioral, physiological, and contextual data during pregnancy and postpartum periods using passive sensing from Garmin smartwatches and smartphones, along with active ecological momentary assessments (EMAs). This novel protocol uniquely adapts to the unpredictable timing of childbirth, spanning from the third trimester through six weeks postpartum, to accurately capture critical temporal changes and maternal-infant outcomes. By providing high-frequency real-time data, this methodology offers comprehensive insights into pregnancy-related behaviors and physiological processes, overcoming limitations of traditional retrospective self-report methods. Objective: The objective is to develop a protocol for longitudinal data collection supporting digital phenotyping that is optimized for pregnancy and postpartum, leveraging the pregnant population’s heightened interest in health and tracking. The protocol aims to minimize burden on participants, increase retention, and assess the value of wearables compared to smartphones in order to determine appropriate data collection methods. Methods: Data will be collected on 30 nulliparous participants from the start of the third trimester through 6 weeks postpartum. The protocol utilizes three distinct one-time surveys, alongside daily and weekly EMAs, to capture real-time maternal experience data. Passive maternal data - such as activity, vitals, sleep, and location - are collected via smartphone and Garmin smartwatch. Participants are expected to log data about the newborn after delivery through the mobile application Huckleberry. The protocol was developed in collaboration between the Northeastern University SATH Lab, which focuses on digital phenotyping and longitudinal data collection, and Tufts Medical Center Obstetrics and Gynecology, which has expertise working with the pregnant population. Results: The planned completion date is December 2026, with a manuscript to be published afterward. We plan to assess retention rates, survey and EMA completion rates, smartwatch wear time without intervention, and the volume of data logged in Huckleberry. Conclusions: This protocol integrates digital phenotyping into pregnancy and postpartum research, providing a novel method for capturing real-time maternal well-being indicators. It will determine expected rates of data completion and an appropriate sample size, using a power analysis, for a more extensive future study. By integrating smartphone and wearable sensor data, this protocol has the potential to transform the way maternal health clinical interventions are designed and implemented in the future.
Background: COVID-19 forecasting models have been used to inform decision-making around resource allocation and intervention decisions, e.g., hospital beds or stay-at-home orders. State-of-the-art forecasting models often use multimodal data such as mobility or socio-demographic data to enhance COVID-19 case prediction. Nevertheless, related work has revealed under-reporting bias in COVID-19 cases as well as sampling bias in mobility data for certain minority racial and ethnic groups, which affects the fairness of COVID-19 predictions across racial and ethnic groups. Objective: To introduce a fairness correction method for forecasting COVID-19 cases at an aggregate geographic level. Methods: We use hard and soft error parity analyses against existing fairness frameworks to show that our proposed method, DemOpts, performs better in both scenarios. Results: We first show that state-of-the-art COVID-19 deep learning models output mean prediction errors that are significantly different across racial and ethnic groups at larger geographic scales. We then propose a novel de-biasing method, DemOpts, to increase the fairness of deep learning-based forecasting models trained on potentially biased datasets. Our results show that DemOpts achieves better error parity than other state-of-the-art de-biasing approaches, effectively reducing the differences in mean error distributions across more racial and ethnic groups. Conclusions: We introduce DemOpts, which reduces error disparities and yields fairer forecasting models than other approaches in the literature.
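As an illustration of the error-parity evaluation referenced above (not of the DemOpts optimization itself, which the abstract does not detail), a small sketch of per-group mean error and a simple parity test is shown; the values, group labels, and choice of ANOVA are hypothetical.

    # Minimal sketch of an error-parity check across racial/ethnic groups
    # (illustrates the evaluation only; DemOpts itself is not reproduced, and
    # all values and group labels below are hypothetical).
    import pandas as pd
    from scipy import stats

    df = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "y_true": [100, 120, 90, 80, 95, 88, 60, 70, 65],
        "y_pred": [110, 118, 97, 95, 108, 99, 58, 76, 61],
    })
    df["error"] = df["y_pred"] - df["y_true"]

    # Mean prediction error per group; error parity asks whether these error
    # distributions are statistically indistinguishable across groups.
    print(df.groupby("group")["error"].mean())

    # One-way ANOVA across groups as a simple parity check (illustrative choice).
    groups = [g["error"].to_numpy() for _, g in df.groupby("group")]
    print(stats.f_oneway(*groups))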
Background: Developmental disabilities significantly impact children and impose substantial caregiving demands on parents, who often face emotional strain, isolation, and disrupted routines. Despite evidence that parent-support interventions enhance well-being and caregiving outcomes, there is limited synthesis of occupational therapy-related support programs specifically designed for parents of children with special needs. Objective: This scoping review aims to map existing evidence and identify gaps in support programs, emphasizing the importance of family-centered care and the unique contributions of occupational therapy to empowering parents. Methods: A scoping review using the Joanna Briggs Institute’s methodology will systematically identify, map, and analyze occupational therapy-related support programs for parents of children with special needs. A three-phase search strategy will include databases such as PubMed, Scopus, CINAHL, and Embase. Studies meeting the inclusion criteria - parents of children aged 3-15 years, peer-reviewed articles published in English, and a focus on various intervention types and contexts - will be analyzed. Results: Extracted data will be synthesized narratively and tabulated, highlighting program characteristics and outcomes to inform future evidence-based interventions. Conclusions: A scoping review is therefore essential to provide an evidence-based foundation for designing impactful support programs that enhance family-centered care within the framework of occupational therapy. Clinical Trial: NA
Background: Sri Lanka has a well-established National Blood Transfusion Service (NBTS) that provides a quality-assured blood bank service. However, the information flow is inefficient and underused for evidence-based decision-making. The statistics unit of the National Blood Centre (NBC) is unable to produce the Annual Statistics Report on time because of the difficulty of manually analysing and reporting the considerable amount of data collected throughout the year. To address this, an electronic Health Information Management System was proposed as a solution to the inefficiency of the data flow for statistical purposes. Objective: The general objective was to facilitate decision-making by developing, implementing, and evaluating an electronic information management system to capture monthly statistics data from blood banks island wide. The specific objectives were to identify the requirements of the system (MSR-NBTS), customize DHIS2 to fulfil the identified requirements, test and host the system at the National Blood Centre, Narahenpita, and evaluate the usability and cost-effectiveness of the system. Methods: A Monthly Statistics Reporting System was designed and developed using DHIS2, a Free and Open Source Software (FOSS) platform, to fulfil the requirements of the National Blood Transfusion Service. To evaluate the new system, a qualitative study was conducted using semi-structured interviews with a selected study population of 17 participants within the NBC cluster, which includes 11 blood banks in the Colombo area. The gathered data were analysed using thematic analysis, and the emerging categories and themes were used in the subsequent discussion. Results: Problems of calculation, usability, reliability, utilization of data, and availability of reports were identified in the paper-based system. The results show that the new electronic system has high usefulness, ease of use, ease of learning, satisfaction, and cost-effectiveness, with well-accepted enhanced interface features. In the interviews, participants expressed that the likelihood of using this system in the future is high. Conclusions: Almost all participants in this research readily accepted the new electronic information management system, which should help assure its sustainability. The real-time updated dashboard will support most blood bank functions by facilitating efficient administrative decision-making.
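For readers unfamiliar with DHIS2, pushing a month of aggregate values to an instance typically goes through its Web API; the sketch below is illustrative only, and the base URL, credentials, and all UIDs are hypothetical placeholders rather than the actual MSR-NBTS configuration.

    # Minimal sketch of submitting one month of aggregate blood bank statistics
    # to a DHIS2 instance via its Web API (base URL, credentials, and all UIDs
    # are hypothetical placeholders, not the real MSR-NBTS configuration).
    import requests

    BASE_URL = "https://dhis2.example.org"
    AUTH = ("reporting_user", "secret")

    payload = {
        "dataSet": "MONTHLY_STATS_UID",              # monthly statistics data set
        "period": "202401",                          # January 2024 (YYYYMM)
        "orgUnit": "BLOOD_BANK_UID",                 # reporting blood bank
        "dataValues": [
            {"dataElement": "WHOLE_BLOOD_UNITS_UID", "value": "950"},
            {"dataElement": "DONOR_REGISTRATIONS_UID", "value": "1020"},
        ],
    }

    resp = requests.post(f"{BASE_URL}/api/dataValueSets", json=payload, auth=AUTH)
    resp.raise_for_status()
    print(resp.json().get("status"))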
Background: The growing burden of joint disorders, driven by the aging population, highlights the need for efficient surgical intervention. Clinical pathways (CPWs) standardize care, improve outcomes, and optimize resource use in orthopedic surgery. Objective: This narrative review examines the pivotal role of clinical pathways (CPWs) in knee and hip arthroplasties, essential procedures for enhancing patient quality of life. Methods: We conducted a narrative review focusing on how CPWs impact sustainability, quality, and resource management in knee and hip arthroplasties, integrating literature from PubMed and Cochrane databases according to PRISMA guidelines. Results: Enhanced Recovery After Surgery (ERAS) pathways and virtual clinics have significantly improved hospital discharge timelines, resource utilization, and patient satisfaction. Despite these benefits, challenges remain in balancing standardized care with individual patient needs. Conclusions: This review highlights the importance of CPWs in improving healthcare delivery and patient outcomes in orthopedic surgery. Future efforts should focus on refining CPWs, integrating digital tools, and maintaining flexibility to adapt to evolving healthcare demands.
Background: Emergency department (ED) overcrowding is a significant global challenge with profound implications for patient outcomes, healthcare delivery, and public health. Addressing this issue requires comprehensive monitoring of patient flow, supported by a well-structured system of performance indicators. Identifying the root causes of overcrowding is crucial for developing targeted, evidence-based indicators to guide national policies. Hence, this study was conducted to systematically review the indicators used across different countries to measure ED overcrowding, aiming to inform strategies for improving ED capacity management and optimizing patient care. Objective: The primary objective of this study is to systematically identify and outline the indicators used to evaluate ED overcrowding across a range of hospital settings globally. Methods: A scoping review was conducted from October to November 2023, with articles selected against predefined criteria. The inclusion criteria required articles reported in English, related to the keywords, published between 2013 and 2023, and of any study design (qualitative or quantitative). The databases used were PubMed, Emerald Insight, Google Scholar, and Scopus. The identified indicators were descriptively categorised into input, throughput, and output components based on the ED crowding model framework by Asplin et al (2003) and summarised from the most to the least frequently used. Results: Out of 1,347 articles screened, 117 were included in the study. A total of 314 indicators were retrieved and then consolidated into 26 distinct indicators. The majority (68.8%) fall within the throughput component, followed by 19.7% in the output component, while the input component accounts for the smallest proportion at 11.5%. Conclusions: This study highlights that throughput indicators were the most prominently studied metrics for measuring ED overcrowding. The most frequently utilised throughput indicator is ED length of stay, followed by waiting time and the rate of patients leaving without being seen. The review further demonstrates that length of stay (LOS) serves as a critical marker of systemic bottlenecks and operational inefficiencies within EDs. The findings provide valuable insights for policymakers to refine and strengthen existing indicators, helping to address and mitigate ED overcrowding.
Background: Systems psychodynamics provides valuable insights into organizational development. However, instruments that can reliably assess organizations based on systems psychodynamic theories are still lacking. The Systematic Multidimensional Organisational Assessment (SyMOA) is a qualitative instrument that provides an in-depth, systems psychodynamic analysis of organizational dynamics using a semi-structured interview guide. To complement the method, a standardized, quantitative self-assessment questionnaire will be developed and validated. Objective: The aim of this study is to develop and psychometrically validate an instrument for assessing organizational health based on the SyMOA diagnostic system. The questionnaire is intended to provide a scientifically grounded yet practical diagnostic tool applicable in both research and corporate practice. The findings aim to contribute to the advancement of systems psychodynamic theory and serve as a foundation for evidence-based interventions in organizational change processes. Methods: The study follows a multi-stage development and validation process. First, the SyMOA construct will be transformed into a questionnaire battery and the items will be evaluated by experts (expert validity). The items will then be tested through factor and item analyses in an online panel (n=150) and iteratively refined. Subsequently, factorial validity, discriminant validity, and test-retest reliability will be examined before standardizing the instrument with a larger sample (n=800). Results: As of April 2025, a first draft of 158 items had been developed based on Dimension I of the SyMOA framework. The draft underwent an expert review process with two experts in psychodynamics, who provided feedback on content validity and conceptual alignment. Approximately 20% of the items were revised to improve clarity and theoretical precision. Data collection using a panel is scheduled for the coming months, with iterative item analysis to be conducted thereafter. Results are expected to be published in late 2025. Conclusions: Parallel field application of the SyMOA framework in organizational settings complements the quantitative development by offering insights into its real-world relevance and usability. This integration underscores the instrument’s translational value while also illustrating the practical challenges of applying systems psychodynamic diagnostics in organizational contexts. Clinical Trial: Freiburger Register Klinischer Studien FRKS005727
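As a small illustration of the planned item analyses, an internal-consistency (Cronbach's alpha) computation is sketched below on simulated Likert responses; this is not the study's specified analysis code, and the response matrix is hypothetical.

    # Minimal sketch of an internal-consistency check (Cronbach's alpha) of the
    # kind used in item analysis; the simulated responses are hypothetical and
    # this is not the study's actual analysis pipeline.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: respondents x items matrix of Likert responses."""
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(42)
    responses = rng.integers(1, 6, size=(150, 10))   # 150 respondents, 10 items
    print(f"alpha = {cronbach_alpha(responses):.2f}")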
Background: Mental health disorders are a growing public health concern among university students globally and in India, exacerbated by stigma and limited access to care. Mobile health (mHealth) apps offer a potential solution, but user engagement and cultural relevance remain challenges. This pilot study evaluates "Here for You," a mental health screening app co-developed with Indian university students to provide accessible, non-stigmatizing support. Objective: This study aimed to: (1) Describe the user-centered co-development and pilot testing process of the "Here for You" app; (2) Evaluate the app's feasibility, user acceptability, and engagement within the target population; and (3) Assess the concurrent validity of the app's screening tool (DASS-21) against established clinical measures (HAM-D, HAM-A, PSS). Methods: The study employed a four-phase, user-centred design involving students with lived mental health experience, clinicians, and developers. A purposive sample of 30 university students (mean age 21±1.8 years, 50% female) diagnosed with depression, anxiety, or stress participated. Participants completed the Depression, Anxiety, and Stress Scales-21 (DASS-21) via the app and underwent clinical assessments using HAM-D, HAM-A, and PSS scales. User experience was evaluated using the User Mobile App Rating Scale (UMARS) and qualitative feedback. Data analysis included Pearson correlations and thematic analysis. Results: App-based DASS-21 scores showed strong correlations with clinician-administered scales: HAM-D (r=0.819, p<0.001), HAM-A (r=0.887, p<0.001), and PSS (r=0.972, p<0.001), indicating high concurrent validity. The app received high usability ratings (mean UMARS score 4.4/5), particularly for functionality (4.7/5) and aesthetics (4.5/5). Qualitative feedback highlighted usability and enhanced privacy due to features like "Quick Exit," cultural resonance, and the desire for integrated support features. The co-design process directly addressed student concerns, implementing features like simplified language and crisis support links. Conclusions: This pilot study demonstrates the feasibility, validity, and user acceptability of the "Here for You" app, co-developed using a participatory approach with Indian university students. By integrating user experience, clinical rigor, and ethical safeguards like adherence to DPDP guidelines, the app offers a culturally resonant and scalable model for digital mental health screening in low-resource settings. This approach underscores the value of the "nothing about us without us" principle in developing effective mHealth interventions.
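The concurrent-validity check described here reduces to a Pearson correlation between paired scores; the sketch below uses hypothetical values for one scale pair.

    # Minimal sketch of the concurrent-validity computation (Pearson r between
    # app-based DASS-21 depression scores and clinician-rated HAM-D); all score
    # values below are hypothetical.
    import numpy as np
    from scipy import stats

    dass21_depression = np.array([12, 20, 8, 26, 14, 18, 10, 22])   # app-based
    ham_d             = np.array([10, 17, 7, 24, 12, 16, 9, 20])    # clinician

    r, p = stats.pearsonr(dass21_depression, ham_d)
    print(f"r = {r:.3f}, p = {p:.4f}")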
Background: Technology-assisted and robotic rehabilitation methods are increasingly used. Still, scarce evidence exists on their effects on upper extremity functioning after spinal cord injury. Objective: To evaluate the effects and feasibility of a 6-week intervention focusing on technology-assisted upper extremity rehabilitation in adults with incomplete cervical spinal cord injury (SCI). Methods: In this pilot randomized controlled crossover trial, 20 participants (10 men, 34–73 years of age, 1–8 years since SCI) were recruited by mail and randomized into two sequences (AB, n=10 and BA, n=10). All participants received a 6-week rehabilitation intervention during Period 1 or Period 2. The intervention was delivered 3 times a week for 6 weeks (18 sessions) by occupational therapists specialized in spinal cord injuries and neurorehabilitation. Each 1-hour rehabilitation session included a minimum of 30 minutes of technology-assisted rehabilitation using AMADEO®, DIEGO®, and/or PABLO® devices. Other occupational therapy activities were allowed to complete the session.
The effects of the 6-week rehabilitation intervention were compared with 6 weeks of no intervention. Analyses were based on paired data, with each participant serving as their own control. Hand and arm function were evaluated using the Action Research Arm Test, the American Spinal Injury Association – Upper Extremity Motor Score (ASIA-UEMS), grip strength, pinch strength, and the Spinal Cord Independence Measure – Self Report; rehabilitation goal attainment was evaluated with the Goal Attainment Scale (GAS). Face-to-face assessments were conducted at baseline, after Period 1, after Period 2, and at 6 months, except for the GAS, which was administered at the beginning of and immediately after the rehabilitation intervention. Results: The rehabilitation intervention showed good feasibility and tolerability in adults with incomplete cervical spinal cord injury. Of the 20 (10+10) participants (median age 62, IQR 58–66), 19 enrolled in the study and 17 completed at least 80% of the rehabilitation sessions. Fourteen of the 16 participants included in the final analysis attained their rehabilitation goals, which mainly focused on “fine hand use” and “hand and arm use” related to self-care and domestic life. The effects of the rehabilitation intervention did not differ from no intervention, except for the ASIA-UEMS in participants (n=7) who received the rehabilitation during Period 2: the median change in sum score in Sequence BA was 0 (−2 to 0) after no intervention and 1 (0 to 2) after the rehabilitation intervention (P=.04). Conclusions: Results of this pilot study suggest that technology-assisted upper extremity rehabilitation provided by occupational therapists is safe and has potential for broader clinical use in adults with incomplete cervical spinal cord injury. The rehabilitation intervention showed good feasibility and positive outcomes, especially in rehabilitation goal attainment. Still, the results need to be confirmed in a larger randomized controlled trial. Clinical Trial: ClinicalTrials.gov NCT04760470
Background: Most medical schools do not require anesthesiology as part of their clerkship curricula, limiting student exposure to the specialty. Objective: This study aims to investigate whether the California Anesthesiology Medical Student Symposium (CAMSS), a one-day conference composed of anesthesiology lectures and workshops led by residency program leaders, can increase student knowledge of or interest in anesthesiology. Methods: The Annual CAMSS of 2022 was organized at the University of California, Irvine School of Medicine by medical students and residency program leaders. An online survey was distributed to all registered students three days prior to the conference and immediately afterwards. Student exposure, knowledge, and interest in anesthesiology were evaluated using Likert scales. Pre-conference versus post-conference results were analyzed using two-sample t-tests, with a p-value < 0.05 considered statistically significant. Results: The pre-conference survey was emailed to all 96 students who registered for the conference, 68 of whom completed the survey (response rate 70.8%). The post-conference survey was emailed to all 83 students who attended the conference, 51 of whom completed the survey (response rate 61.4%). On a Likert scale of 1-10, post-conference survey responses revealed a statistically significant increase in self-perceived knowledge of anesthesiology compared to pre-conference surveys (mean 6.44, SD 1.79 vs. mean 4.71, SD 2.07, respectively; p < 0.001). Conclusions: A one-day anesthesiology-focused conference can increase medical students' self-perceived knowledge of the specialty's multifaceted role in the hospital setting. Clinical Trial: This prospective cohort observational study was approved by the University of California, Los Angeles Medical Institutional Review Board (IRB) # 21-001825.
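As an illustration of the pre/post comparison described above, the following sketch runs a two-sample t-test on simulated Likert ratings with roughly the reported means and SDs; the numbers are placeholders, not the study's data.

# Illustrative comparison of pre- vs post-conference Likert ratings (hypothetical data, not the study's records).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre_knowledge = rng.normal(loc=4.71, scale=2.07, size=68)   # simulated pre-conference ratings
post_knowledge = rng.normal(loc=6.44, scale=1.79, size=51)  # simulated post-conference ratings

# Two-sample t-test; Welch's variant does not assume equal variances
t_stat, p_value = stats.ttest_ind(post_knowledge, pre_knowledge, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")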
Background: Digital shared medication records (DSMRs) are promoted to improve medication management across care settings, but implementation remains slow and challenging. Existing systems often fail to reflect patient-led changes, raising questions about why national initiatives do not allow patients or family caregivers to be directly involved in updating shared information. At the same time, little is known about how patients perceive these tools and what they expect. Public and patient involvement in the design of such systems has been minimal, leaving a critical gap in user-centered evidence to guide implementation. Objective: This study aimed to develop and pilot test a discrete choice experiment (DCE)-based survey instrument to assess patient preferences and estimated uptake of DSMRs. The tool is intended to inform the co-design of digital medication records that align with patient needs and support broader stakeholder decision-making. Methods: We developed the survey instrument in three phases. First, we identified relevant DSMR features from scientific literature and Swiss policy and technical documents. Second, we conducted a stakeholder and expert prioritization exercise to select attributes for the DCE. Third, we refined the attributes and levels through think-aloud interviews with patients. The final survey included the DCE, items on potential adoption factors, and questions addressing current policy concerns. We pilot-tested it online with 300 patients who regularly take multiple medications. Results: An initial list of 31 concepts was refined into 17 dimensions, ultimately yielding seven key DSMR attributes for the pilot: content, update responsibility, access rights, tool purpose, additional features, data protection, and financial incentives. Choice model estimations confirmed expected preference directions. Financial incentives, responsibility for updating, and data protection had the strongest influence on uptake, followed by content and primary purpose. Access rights and extra features were less impactful. Respondents favored collaborative medication plan management involving both patients and professionals over professional-only approaches. Conclusions: The instrument demonstrated strong potential for larger-scale use in Switzerland, with minor adaptations recommended for other settings. Health authorities and innovators can use this tool to test DSMR design and implementation strategies while generating context- and population-specific insights that would otherwise require costly and time-intensive evaluations. This approach supports strategic planning, including simulations to tailor implementation across subgroups. Such foresight can help optimize investments and reduce the risk of widening health inequities and digital divides. More broadly, the instrument provides a practical method for engaging the public in digital health policymaking and co-creating patient-centered services.
Background: Distinguishing bipolar disorder (BD) from attention-deficit/hyperactivity disorder (ADHD) and other common psychopathology in adolescents remains a diagnostic challenge due to overlapping symptoms of mood and activity fluctuations. Objective: To investigate same-day correlations between actigraphy-derived physical activity and self-reported mood/energy ratings, and to evaluate whether these measures can support differential diagnosis of BD, ADHD, and other psychopathologies in adolescents using both traditional statistics and machine learning. Methods: We included 209 hospitalized adolescents (2,148 patient-days) across four diagnostic groups: BD without ADHD (n=34), ADHD without BD (n=54), BD+ADHD (n=42), and Other Diagnoses (n=79). Actigraphy data were categorized into quartiles (Max1st–Max4th, Min1st–Min4th), and MET scores (-10 to +10) were classified into severity ranges (OK [<3], Mild [3–4], Moderate [5–6], Severe [>6]). Non-parametric analyses (Kruskal-Wallis, Mann-Whitney U) assessed group differences, and categorical associations were evaluated using chi-square tests with Cramér's V effect sizes. To account for multiple comparisons, Bonferroni correction was applied (p-corrected < 0.004). Three machine learning models (XGBoost, Random Forest, and Ridge Regression) were developed to predict daily mood (MoodMean) from actigraphy-derived and self-reported energy features. Results: This study analyzed 2,148 inpatient days from 209 adolescents across the four diagnostic groups (BD without ADHD, BD+ADHD, ADHD without BD, and Other Diagnoses). Actigraphy data were transformed into maximum and minimum quartiles, and daily mood and energy ratings were recorded using the Mood & Energy Thermometer (MET). Non-parametric tests and chi-square analyses with Cramér's V were used to assess group-level differences and associations. The three machine learning models (XGBoost, Random Forest, and Ridge Regression) were used to predict daily mood (MoodMean) from actigraphy-derived and self-reported energy features, and feature importance was analyzed within and across diagnostic groups. Conclusions: Our findings support the importance of digital phenotyping: objective actigraphy combined with self-reported energy metrics can accurately estimate mood, improve differential diagnosis, and may help personalize interventions in youth.
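The following is a minimal sketch of the kind of mood-prediction modelling described above, using synthetic data and hypothetical feature names; scikit-learn's GradientBoostingRegressor stands in for XGBoost so the example runs with standard packages only. It is not the authors' pipeline.

# Illustrative sketch of predicting a daily mood score from actigraphy-derived features (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days = 500
X = np.column_stack([
    rng.normal(size=n_days),   # e.g., daily maximum-activity quartile
    rng.normal(size=n_days),   # e.g., daily minimum-activity quartile
    rng.normal(size=n_days),   # e.g., self-reported energy rating
])
y = 0.5 * X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=n_days)  # synthetic daily mood score

models = {
    "gradient_boosting (stand-in for XGBoost)": GradientBoostingRegressor(random_state=0),
    "random_forest": RandomForestRegressor(random_state=0),
    "ridge": Ridge(alpha=1.0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")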
Background: Society guidelines for prostate cancer screening via PSA testing serve to standardize patient care, and are often utilized by trainees, junior staff, or generalist medical practitioners to guide medical decision-making. Adherence to guidelines is a time-consuming and challenging task and rates of inappropriate PSA testing are high. Objective: This study evaluates a retrieval-augmented generation (RAG) enhanced large language model (LLM), grounded in current EAU and AUA guidelines, to assess its effectiveness in providing guideline-concordant PSA screening recommendations compared to junior clinicians. Methods: A retrieval-augmented generation (RAG) pipeline was developed and used to process a series of 44 fictional case scenarios. Five junior clinicians were tasked to provide PSA testing recommendations for the same scenarios, in closed-book and open-book formats. Answers were compared for accuracy in a binomial fashion. Results: The RAG-LLM tool provided guideline-concordant recommendations in 95.5% of case scenarios, compared to junior clinicians, who were correct in 62.3% of scenarios in a closed-book format, and 74.1% of scenarios in an open book format. The difference was statistically significant for both closed-book (p <0.001) and open-book (p <0.001) formats. Conclusions: Use of RAG techniques allows LLMs to integrate complex guidelines into day-to-day medical decision-making. RAG-LLM tools in Urology have the capability to enhance clinical decision-making by providing guideline-concordant recommendations for PSA testing, potentially improving the consistency of healthcare delivery, reducing cognitive load on clinicians, and reducing unnecessary investigations and costs.
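To illustrate the retrieval-augmented generation pattern described above, the sketch below retrieves the most relevant guideline excerpts for a case vignette with TF-IDF similarity and assembles a grounded prompt. The guideline snippets are invented placeholders and call_llm is a hypothetical wrapper around whatever chat-completion API is used; this is not the study's pipeline.

# Minimal retrieval-augmented generation sketch (not the study's pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guideline_chunks = [
    "Offer PSA testing to men aged 50-69 after shared decision-making ...",
    "Do not routinely screen men with a life expectancy under 10-15 years ...",
    "Consider earlier testing in men with a family history of prostate cancer ...",
]

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank guideline chunks by TF-IDF cosine similarity to the case vignette."""
    vectorizer = TfidfVectorizer().fit(chunks + [query])
    chunk_vecs = vectorizer.transform(chunks)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, chunk_vecs).ravel()
    return [chunks[i] for i in scores.argsort()[::-1][:top_k]]

def call_llm(prompt: str) -> str:
    # Hypothetical wrapper: replace with the chat-completion API of your choice.
    raise NotImplementedError

case = "72-year-old asymptomatic man, no family history, requests a PSA test."
context = "\n".join(retrieve(case, guideline_chunks))
prompt = f"Using only the guideline excerpts below, recommend for or against PSA testing.\n{context}\n\nCase: {case}"
# answer = call_llm(prompt)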
Background: Ozempic (semaglutide) has received widespread attention for its appetite-suppressing effects, prompting extensive off-label use for weight loss. Although gastrointestinal side effects are well documented, little is known about how patients evaluate the trade-off between perceived benefits and adverse effects, or how these evaluations influence treatment discontinuation. Objective: This study aimed to apply a novel infoveillance approach to examine patient-reported experiences with Ozempic when used off-label for weight loss, and to identify the factors most strongly associated with user satisfaction and treatment discontinuation. Methods: We analyzed 60 publicly available user reviews of Ozempic from Drugs.com, focusing on lived experiences of off-label use for weight loss. Reviews were examined through inductive thematic analysis, and emergent themes were quantitatively linked to user ratings of perceived efficacy and intent to continue or discontinue treatment. Results: While 80% of reviewers reported gastrointestinal complaints, these side effects had limited influence on satisfaction ratings or treatment continuation. Positive evaluations were driven by satisfaction with weight loss outcomes, whereas negative evaluations were associated with either disappointing weight outcomes or severe non-gastrointestinal side effects. Dissatisfaction with weight loss emerged as the strongest predictor of treatment discontinuation. Conclusions: This study introduces a novel application of infoveillance methods to capture patient attitudes toward off-label use of Ozempic. By analyzing unsolicited, real-world data, we identified key drivers of satisfaction and discontinuation that may be missed by traditional clinical approaches. These findings highlight the utility of online health forums as a rich and underutilized source of patient-centered insights to inform obesity treatment strategies, adherence interventions, and public health communication. Clinical Trial: N/A
Background: Cervical spondylosis (CS), a progressive degenerative disorder often leading to neurological impairment, remains poorly characterized in terms of its association with routine biochemical markers. This multicenter study aimed to identify novel CS subtypes through unsupervised clustering of clinical and laboratory biomarkers, subsequently developing a predictive model for postoperative recurrence. Objective: This study aimed to leverage unsupervised machine learning to delineate clinically actionable cervical spondylosis (CS) subtypes based on preoperative biomarker profiles, and further establish a predictive nomogram for postoperative recurrence risk stratification. By integrating clustering-driven phenotyping with supervised feature selection, we sought to bridge the gap between heterogeneous inflammatory signatures and surgical complexity, ultimately guiding subtype-specific therapeutic decision-making. Methods: In this study, 884 patients with cervical spondylosis who underwent CS surgery at the Department of Spine Osteopathology, the First Affiliated Hospital of Guangxi Medical University, from June 2012 to June 2021 were enrolled. After screening, 715 patients were eventually included. After a 7:3 training-validation split, k-means clustering stratified patients into subtypes based on 37 preoperative variables. Feature selection integrated LASSO regression and Random Forest algorithms, with subsequent nomogram construction via multivariable logistic regression. Model performance was evaluated through ROC analysis and calibration curves. Results: Unsupervised clustering delineated two subtypes with distinct profiles: Subtype 1 (n=215) exhibited milder inflammation (CRP: 2.1±1.1 mg/L) versus Subtype 2 (n=580) demonstrating marked systemic inflammation (CRP: 8.7±3.2 mg/L, p<0.001). The nomogram incorporating neutrophil count, lymphocyte levels, eosinophil percentage, basophils, and cystatin C showed exceptional discrimination (AUC=0.996, 95% CI: 0.985-1.000). Despite equivalent JOA improvement rates (>25% in both subtypes), Subtype 2 required more multilevel decompressions (35.5% vs. 17.5%, p<0.001). Conclusions: Our machine learning framework successfully identifies inflammatory-driven CS phenotypes with differential surgical complexity. The validated nomogram enables preoperative risk stratification, potentially guiding personalized rehabilitation strategies. Prospective validation across diverse populations is warranted to confirm clinical utility.
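A minimal sketch of the clustering-then-prediction workflow described above (k-means subtyping, LASSO plus random-forest feature selection, and a logistic model of the kind a nomogram is built on), run on synthetic data; it is not the authors' code and the variables are placeholders.

# Illustrative clustering-then-prediction pipeline (synthetic data, not the study's dataset).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(715, 37))                                    # 37 preoperative variables (synthetic)
y = (X[:, 0] + X[:, 5] + rng.normal(size=715) > 0).astype(int)    # synthetic recurrence label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Step 1: unsupervised subtyping with k-means (two clusters, as reported)
subtypes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_train_s)

# Step 2: feature selection combining LASSO coefficients with random-forest importances
lasso = LassoCV(cv=5, random_state=0).fit(X_train_s, y_train)
rf = RandomForestClassifier(random_state=0).fit(X_train_s, y_train)
selected = np.flatnonzero((np.abs(lasso.coef_) > 0) & (rf.feature_importances_ > np.median(rf.feature_importances_)))

# Step 3: logistic regression on the selected features (the model a nomogram visualizes)
logit = LogisticRegression(max_iter=1000).fit(X_train_s[:, selected], y_train)
auc = roc_auc_score(y_test, logit.predict_proba(X_test_s[:, selected])[:, 1])
print(f"cluster sizes: {np.bincount(subtypes).tolist()}, selected features: {selected.tolist()}, test AUC = {auc:.2f}")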
Background: Digital tools have been shown to play a role in promoting public health interventions. The recommendation that healthcare providers (HCPs) use vaccination-related mobile or web-based applications has contributed to improving vaccine awareness and acceptance among vaccine-eligible individuals. In the United States, the state of Texas, which has one of the lowest HPV vaccination rates, has seen a significant increase in HPV vaccination hesitancy during the COVID-19 pandemic. Objective: This study examined the association between changes in HPV vaccine hesitancy observed by HCPs among patients in Texas and the promotion of vaccination-related applications at their healthcare facilities during the COVID-19 pandemic. Methods: A population-based cross-sectional survey administered to HCPs working in Texas between January and April 2021 by The University of Texas MD Anderson Cancer Center was used for this study. We described the changes in HPV vaccination hesitancy that HCPs observed among patients using descriptive statistics. We conducted a logistic regression analysis to examine the association between the decrease in HPV vaccine hesitancy observed by HCPs among patients and the promotion of vaccination-related applications at their facility. Results: A total of 1283 HCPs completed the survey. Among the 730 HCPs who reported on changes in HPV vaccination hesitancy, 7.0% observed a decrease in HPV vaccine hesitancy among patients during the COVID-19 pandemic. Among the 576 HCPs who responded to the survey question regarding vaccination-related applications, 20.7% reported that vaccination-related applications were promoted at their facilities. The respondents were predominantly aged 35-54 years (62.6%), female (77.3%), non-Hispanic White (50.5%), and working in group practice (39.6%). Compared to HCPs who did not promote vaccination-related applications, those who promoted vaccination-related applications at their healthcare facility had significantly higher odds of observing a decrease in HPV vaccine hesitancy among patients during the COVID-19 pandemic (Adjusted Odds Ratio [aOR]: 2.53, 95% CI: 1.18-5.41). Compared to HCPs who worked at university/teaching hospitals, those working at Federally Qualified Health Clinics (aOR: 8.96, 95% CI: 1.74-46.10), public facilities (aOR: 12.53, 95% CI: 2.01-78.26), and employed physician practices (aOR: 8.63, 95% CI: 1.37-54.39) had significantly higher odds of observing a decrease in HPV vaccine hesitancy among patients during the COVID-19 pandemic. Conclusions: Our findings demonstrate the benefits of promoting vaccination-related applications at healthcare facilities in areas with high HPV vaccine hesitancy, such as Texas. Digital health interventions could provide a platform for promoting and increasing HPV vaccination uptake in the context of pandemic preparedness. Clinical Trial: N/A
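The adjusted odds ratios reported above come from multivariable logistic regression; the sketch below shows how such aORs and confidence intervals can be obtained with statsmodels on simulated data with hypothetical variable names, not the survey dataset.

# Illustrative adjusted-odds-ratio calculation (hypothetical variables, not the survey data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "hesitancy_decrease": rng.integers(0, 2, n),   # 1 = observed a decrease in HPV vaccine hesitancy
    "app_promoted": rng.integers(0, 2, n),         # 1 = vaccination-related app promoted at facility
    "age_group": rng.choice(["35-54", "other"], n),
    "practice_type": rng.choice(["university", "public", "fqhc"], n),
})

model = smf.logit("hesitancy_decrease ~ app_promoted + C(age_group) + C(practice_type)", data=df).fit(disp=0)
odds_ratios = pd.DataFrame({
    "aOR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))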
Background: In Mexico, the maternal and child population continues to face a high burden of malnutrition, posing a persistent public health challenge. The healthcare system plays a crucial role, not only in addressing existing cases but also in preventing and detecting malnutrition early. Mobile health (mHealth) technologies have the potential to strengthen maternal and child health services by improving the quality, accessibility, and timeliness of nutritional care. Objective: The aim was to develop and validate the design and content of a mobile application, CANMI (Calidad de la Atención Nutricional Materno Infantil, by its Spanish acronym), to monitor the quality of maternal and child nutritional care in primary health care units in Mexico. Methods: The framework of the CANMI app was based on 16 validated indicators designed to assess the quality of nutritional care during the preconception, pregnancy, postpartum, early childhood, and preschool stages. The application was developed for both iOS and Android systems using a user-centered design approach. Following development, a pilot usability study was conducted in a randomized sample of 18 primary health care units in Guanajuato, Mexico. Trained nutritionists implemented the app and collected usability data at the end of the initial usage period and again six weeks later. To further explore user experience, semi-structured online interviews were conducted to identify barriers, facilitators, and overall satisfaction with the app. Results: The CANMI app allows the systematic registration of key indicators to assess the quality of nutritional care in primary health care settings. Users described the app as simple, intuitive, and visually appealing. Overall usability was rated positively, with a mean score of 71.13 on the System Usability Scale (SUS), indicating good acceptability. The app's offline functionality, streamlined interface, and efficiency in data collection were identified as key facilitators of use. Reported benefits included reduced time for data entry and perceived improvements in the quality of nutritional care. Identified barriers to integration included the need to use personal devices, user fatigue due to prolonged screen time, inconsistent clinical records, and limited time to incorporate the app into routine workflows. Importantly, the app promoted improvements in documentation practices and heightened awareness among health personnel regarding the precision and clarity of their nutritional recommendations. Conclusions: The CANMI app provides a feasible and effective solution for monitoring the quality of maternal and child nutritional care in primary health settings. Its high usability and offline capabilities make it particularly suitable for low-connectivity environments. Beyond facilitating data collection, the app contributed to improved clinical documentation practices and enhanced provider awareness of care quality. As such, the application represents a promising digital tool to support the implementation of evidence-based, user-centered strategies aimed at strengthening maternal and child health services in resource-limited contexts. Clinical Trial: The study protocol was reviewed and approved by the Research Ethics Committee of the Universidad Iberoamericana in Mexico City (172/2022).
Background: Large language models (LLMs) such as ChatGPT have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants and relative performance against medical students remain unclear. Objective: To systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1mini, GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations. Methods: A 100-item examination dataset covering multiple-choice, short-answer, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineered conditions over five independent runs. Student cohort scores (n=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics 29 with paired t-tests and Cohen’s d (p<0.05). Results: Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1); final scores from 55.0% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (+10.6%, p<0.001) and GPT-4.0 (+3.2%, p=0.002) but yielded negligible gains for optimized variants (p=0.066–0.94). Optimized models matched or exceeded student performance on both exams. Conclusions: Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As LLMs mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of AI as a learning companion. Clinical Trial: IRB #CSMU-2024-075
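As a worked illustration of the reported analysis, the following compares paired scores from the same runs with and without prompt engineering, using a paired t-test and Cohen's d for paired samples; the score values are invented placeholders, not the study's data.

# Illustrative paired comparison of no-prompt vs prompt-engineered scores (hypothetical values).
import numpy as np
from scipy import stats

baseline = np.array([59.2, 61.0, 58.4, 60.5, 59.9])   # e.g., five runs without prompt engineering
prompted = np.array([69.8, 71.5, 68.9, 70.2, 70.6])   # the same runs with prompt engineering

t_stat, p_value = stats.ttest_rel(prompted, baseline)

diff = prompted - baseline
cohens_d = diff.mean() / diff.std(ddof=1)              # Cohen's d for paired samples

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")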
Background: Lifestyle modification involving both patients and their significant others (dyads) is critical in the long-term management of chronic kidney disease (CKD). However, achieving sustained behavioral changes remains challenging. Digital interventions, particularly using widely adopted platforms like instant messaging apps, present a promising approach to support CKD dyads in lifestyle modification. Objective: This study aimed to develop, optimize, and test the usability of a digital dyadic empowerment platform named “Kidney Lifestyle” using the LINE instant messaging app. The platform was designed based on the Digital Dyadic Empowerment Framework (DDEF) to facilitate collaborative lifestyle modification among CKD dyads. Methods: We adopted a three-phase Agile-based iterative development cycle: (1) iterative development and trial use, (2) heuristic evaluation, and (3) usability testing. In Phase 1, platform prototype was co-developed with healthcare professionals and technical partners, then trialed by CKD dyads who provided qualitative and quantitative feedback on interface clarity, ease of use, acceptance, intention to continue usage, and overall satisfaction. In Phase 2, multidisciplinary experts independently conducted heuristic evaluations, rating platform compliance with Nielsen’s ten usability principles on a scale from -1 (does not comply) to 1 (complies), and provided suggestions for improvement. In Phase 3, experienced CKD dyads from Phase 1 performed six representative tasks using the platform. Task success rates, completion times, and operational errors were quantitatively recorded, and usability perceptions were assessed using the After-Scenario Questionnaire (ASQ; 1-7 scale) and the System Usability Scale (SUS; 0-100 scale). Results: Phase 1 results indicated high user acceptance and satisfaction (overall platform satisfaction: mean 4.1 out of 5) among 10 CKD dyads (19 individuals). Participants valued real-time interaction, convenience in monitoring health data, and educational resources. In Phase 2, five experts conducted heuristic evaluation, revealing high overall usability compliance (average compliance scores ranged from 89% to 93%), although issues related to navigation complexity and the need for enhanced interactive feedback were identified. In Phase 3, usability testing was conducted with 5 CKD dyads (10 individuals), showing high task success rates (60%-100%) and task completion times ranging from 1 to 5 minutes. However, navigation difficulties within the LINE Official Account were noted, resulting in a marginally acceptable average SUS score of 67.5. The ASQ indicated higher usability satisfaction for the extended App tasks (mean average = 5.64) compared to those within LINE (mean average = 3.87). Conclusions: The LINE-based digital dyadic empowerment platform “Kidney Lifestyle” (LINE ID: @509kgajt) demonstrated promising usability, high user engagement, and strong clinical potential for supporting lifestyle modification among CKD dyads. However, usability issues within the instant messaging interface highlight the importance of simplifying navigation pathways and enhancing user feedback. Future research should include a larger-scale feasibility trial and further optimization to enhance usability and clinical integration.
Background: Promoting individual resilience – i.e., maintaining or regaining mental health despite stressful circumstances – is often regarded as an important endeavor to prevent mental illness. However, digital resilience interventions designed to enhance mental health outcomes, including stress levels and self-perceived resilience, have yielded mixed results. Such heterogeneous effects reflect a variety of unresolved conceptual challenges in interventional resilience research. These range from grounding interventions in genuine resilience frameworks, using theory or targeting etiologically important resilience factors as intervention content, to a lack of knowledge about the mechanisms underlying effects and the use of techniques specifically developed to foster psychosocial resources. The web- and app-based resilience intervention RESIST was designed to address these challenges, mainly by utilizing both the Positive Appraisal Style Theory of Resilience as its theoretical foundation and interventional techniques from Strengths-based Cognitive Behavioral Therapy. Objective: The study's primary aim was to evaluate the effectiveness of RESIST in a general working population as a means of universal prevention, relative to a waitlist control group. A secondary study aim was to explore the resilience factors the intervention targets, namely self-efficacy, optimism, perceived social support, and self-compassion, as potential mediators of its effect on stress and self-perceived resilience. Methods: In total, 352 employees were randomly assigned to either a self-help version of RESIST or a waitlist control group. Data were collected at baseline, post-intervention, and at 3- and 6-month (intervention group only) follow-up. The primary outcome was perceived stress, measured with the Perceived Stress Scale-10. Secondary outcomes included self-perceived resilience, the targeted resilience factors, and other mental and work-related health outcomes. Results: The intervention group reported significantly less stress than controls post-intervention (Δ=-3.14; d=-0.54, 95% CI -0.75 to -0.34; P<.001) and at 3-month follow-up (Δ=-2.79; d=-0.47, 95% CI -0.71 to -0.22; P=.002). These improvements in the intervention group were maintained at 6-month follow-up. Favorable between-group differences were also detected for self-perceived resilience and the resilience factors. Effects on other mental and work-related outcomes were mixed. Parallel mediation analyses revealed significant indirect effects of optimism (a2b2=-0.34, 95% CI -0.63 to -0.06) and self-compassion (a4b4=-0.66, 95% CI -1.15 to -0.17) on perceived stress, whereas indirect effects through self-efficacy and social support were not found. A similar pattern emerged for self-perceived resilience as the mediation outcome. Conclusions: In a sample of employees experiencing heightened work-burden levels, RESIST was effective in reducing perceived stress and increasing self-perceived resilience as well as the targeted resilience factors. Mediation analyses suggested that developing a positive future outlook and a self-compassionate attitude toward oneself may be key drivers of enhanced resilience. Changing the quality of social relationships and strengthening the belief in one's abilities may require more time, the involvement of others, or personal support from a mental health professional, such as an e-coach, to ensure sufficient learning opportunities. Clinical Trial: German Clinical Trials Register DRKS00017605; https://drks.de/search/de/trial/DRKS00017605
Background: The relationships between circadian rhythm syndrome, physical function, and muscle strength remain unclear. Objective: This study aimed to demonstrate the separate and combined deleterious effects of solid fuel use and circadian rhythm syndrome on physical function and muscle strength. Methods: We used data from the China Health and Retirement Longitudinal Study cohort. The study population consisted of participants who underwent comprehensive assessments of metabolism, circadian rhythm, indoor air pollution, physical function, and muscle strength at the initial evaluation. Muscle strength was assessed using repeated grip strength measurements, and physical function was assessed using a composite score of muscle strength, physical performance, and balance. Circadian rhythm syndrome was derived from the five diagnostic components of metabolic syndrome, combined with sleep duration and depression. Logistic regression and linear mixed models were used to assess the relationships between solid fuel use, circadian rhythm syndrome, physical function, and muscle strength. Furthermore, we analyzed the mediating role of circadian rhythm syndrome and its combined effect with solid fuel use on physical function and muscle strength. Results: A total of 7954 participants were included in the study, most of whom used solid fuels. Solid fuel use was positively associated with circadian rhythm syndrome (OR: 1.078; 95% CI: 1.031–1.125). Circadian rhythm syndrome was found to be a significant risk factor for impairment of physical function and muscle strength. Participants who both used solid fuels and had circadian rhythm syndrome showed lower physical function (β: -0.698, 95% CI: -0.813, -0.584) and muscle strength (β: -0.332, 95% CI: -0.387, -0.277). Circadian rhythm syndrome partially mediated the association between solid fuel use and physical function. Conclusions: Circadian rhythm syndrome exacerbates the adverse effects of solid fuel use on physical function and muscle strength. Fuel cleanliness and regular work and rest habits are crucial for the health of middle-aged and older adults. Clinical Trial: Not applicable
Background: Telemedicine has gained attention as a transformative solution to reduce healthcare disparities, especially in low-resource settings. In Iran, the full potential of telemedicine remains unrealized due to fragmented policies, infrastructural limitations, and cultural resistance. Objective: This study aimed to explore the key challenges and strategies for implementing and expanding telemedicine within Iran’s healthcare system. Methods: A qualitative study using conventional content analysis was conducted based on semi-structured interviews with 20 experts from healthcare policy, academia, clinical practice, and digital health sectors. Data were analyzed using ATLAS.ti9 until theoretical saturation was reached. Results: Seven major thematic domains were identified: (1) Policy-making (roadmaps, legal frameworks), (2) Management (technical support, operational planning), (3) Legislation (liability and data security laws), (4) Technical infrastructure (hardware, software, connectivity), (5) Education (training and public awareness), (6) Financial provision (funding and insurance models), and (7) Information management (EHRs and cybersecurity). Key barriers included fragmented governance, limited rural internet infrastructure, resistance from clinicians, and insufficient funding. Strategic solutions focused on multisectoral collaboration, phased infrastructure investment, stakeholder education, and development of comprehensive legal and technical guidelines. Conclusions: A holistic, adaptive implementation approach is essential to institutionalize telemedicine in Iran. The findings provide practical insights for Iran and other resource-constrained countries seeking to scale equitable, sustainable telehealth services. Clinical Trial: Not applicable. Qualitative study.
Background: The Japanese National Medical Licensing Examination (NMLE) is mandatory for all medical graduates to become licensed physicians in Japan. Given the cultural emphasis on summative assessment, the NMLE has had a significant impact on Japanese medical education. Although the NMLE Content Guidelines have been revised approximately every five years over the last two decades, there is an absence of objective literature analyzing how the actual exam itself has evolved. Objective: To provide a holistic view of the trends of the actual exam over time, this study used a combined rule-based and data-driven approach. We primarily focused on classifying the questions according to the perspectives outlined in the NMLE Content Guidelines, while complementing this approach with a natural language processing technique called topic modeling to identify latent topics. Methods: Publicly available NMLE data from 2001 to 2024 were collected. Six exam iterations (2,880 questions) were manually classified from three perspectives (Level, Content, and Taxonomy) based on pre-established rules derived from the guidelines. Temporal trends within each classification were evaluated using the Cochran-Armitage test. Additionally, topic modeling was conducted for all 24 exam iterations (11,540 questions) using the BERTopic framework. The temporal trends of each topic were traced using linear regression models of topic frequencies to identify topics growing in prominence. Results: In Level classification, the proportion of questions addressing common or emergent diseases increased from 60% to 76% (p < 0.001). In Content classification, the proportion of questions assessing knowledge of pathophysiology decreased from 52% to 33% (p < 0.001), whereas the proportion assessing practical knowledge of primary emergency care increased from 20% to 29% (p < 0.001). In Taxonomy classification, the proportion of questions that could be answered solely through simple recall of knowledge decreased from 51% to 30% (p < 0.001), while the proportion assessing advanced analytical skills, such as interpreting and evaluating the meaning of each answer choice according to the given context, increased from 4% to 19% (p < 0.001). Topic modeling identified 25 distinct topics, and 10 topics exhibited an increasing trend. Non-organ-specific topics with notable increases included “Comprehensive Clinical Questions,” “Accountability in Medical Practice and Patients’ Rights,” “Care, Daily Living Support, and Community Healthcare,” and “Infection Control and Safety Management in Basic Clinical Procedures.” Conclusions: This study identified significant shifts in the Japanese NMLE over the past two decades, suggesting that Japanese undergraduate medical education is evolving to place greater importance on practical problem-solving abilities than on rote memorization. This study also identified latent topics that showed an increase, possibly reflecting underlying social conditions. Clinical Trial: NA
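A sketch of the topic-frequency trend analysis described above, assuming the bertopic package is installed; the function fits BERTopic to question texts and regresses each topic's yearly proportion on exam year. It is a simplified stand-in for the authors' pipeline, and the commented usage expects the full corpus rather than toy examples.

# Sketch of topic-modeling trend analysis (assumes the `bertopic` package; not the authors' code).
import pandas as pd
from bertopic import BERTopic
from scipy.stats import linregress

def topic_trends(questions: list[str], years: list[int]) -> pd.DataFrame:
    """Fit BERTopic on exam questions, then regress each topic's yearly frequency on the exam year."""
    topic_model = BERTopic(verbose=False)
    topics, _ = topic_model.fit_transform(questions)

    df = pd.DataFrame({"year": years, "topic": topics})
    freq = df.pivot_table(index="year", columns="topic", aggfunc="size", fill_value=0)
    freq = freq.div(freq.sum(axis=1), axis=0)        # proportion of each year's questions per topic

    rows = []
    for topic_id in freq.columns:
        slope, _, _, p, _ = linregress(freq.index.to_numpy(), freq[topic_id].to_numpy())
        rows.append({"topic": topic_id, "slope_per_year": slope, "p_value": p})
    return pd.DataFrame(rows)

# Usage (with the full corpus of question texts and their exam years):
# trends = topic_trends(all_question_texts, all_question_years)
# print(trends.sort_values("slope_per_year", ascending=False).head(10))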
Abstract
Background
Cholestatic liver disease (CLD) is associated with various hereditary and acquired liver diseases. However, its outcomes are often poor, and effective treatment options are lacking. Danning tablets (DNT) have been widely used to treat CLD and have achieved favorable outcomes in clinical practice. However, there is currently a lack of clinical trials verifying the efficacy of DNT. Therefore, we investigated whether DNT combined with ursodeoxycholic acid capsules (UDCA) is more effective than UDCA alone in treating CLD of the damp heat stagnation type.
Methods
This was an open-label, multicenter, randomized controlled trial (RCT). A total of 186 patients diagnosed with damp heat stagnation type CLD were enrolled. They were stratified according to the severity of CLD and randomly assigned in a 1:1 ratio to receive either UDCA treatment alone or combined with DNT for 3 months. The primary endpoints of the study were improvement in clinical symptoms and liver function after treatment. The secondary endpoints included liver stiffness, survival status during hospitalization and follow-up, and adverse events.
Conclusion
This RCT will provide high-quality evidence to demonstrate the potential benefit of DNT combined with UDCA in patients with CLD of the damp heat stagnation type.
Background: Chronic obstructive pulmonary disease (COPD) is a globally prevalent respiratory disorder characterized by progressive airflow limitation, leading to substantial disability and ranking among the leading causes of chronic morbidity and mortality worldwide. However, there remains a notable lack of convenient and effective home-based intervention programs to support COPD self-management. Objective: This study aimed to develop and evaluate the usability of "COPD CarePro," a WeChat-based mini-program designed to improve home-based self-management for COPD patients. Methods: Utilizing a mixed-methods design, we first conducted semi-structured interviews with 15 COPD patients and their caregivers, following the consolidated criteria for reporting qualitative research (COREQ) guidelines, to identify user needs. A multidisciplinary team then co-developed a five-module intervention featuring: 1) secure authentication, 2) symptom monitoring, 3) a health diary, 4) a multimedia education library, and 5) a clinician communication portal. A two-stage usability assessment was implemented: (i) PSSUQ testing (n=10) uncovered navigational and functional pain points for prototype optimization, followed by (ii) SUS administration (n=52) to quantify usability of the production-ready version. Results: The final prototype demonstrated good usability, with a mean SUS score of 76.15±6.58, meeting the acceptability threshold (>70). Key functional outcomes included successful implementation of real-time symptom monitoring using CAT/mMRC thresholds for exacerbation alerts, and high engagement with video-based pulmonary rehabilitation guidance. Conclusions: COPD CarePro represents a clinically relevant health solution that successfully bridges critical gaps in COPD homecare through technology-enabled self-management support. The development process highlights the value of participatory design in creating patient-centered digital health interventions. Clinical Trial: NO
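For reference, the SUS score cited above is computed with the standard scoring rule (odd items contribute rating minus 1, even items contribute 5 minus rating, and the sum is scaled by 2.5 to a 0-100 range); the sketch below applies it to two hypothetical respondents and does not use the study's data.

# Illustrative System Usability Scale scoring (standard SUS formula; responses are hypothetical).
import numpy as np

def sus_score(responses: np.ndarray) -> float:
    """responses: array of shape (n_respondents, 10), items rated 1-5.
    Odd items contribute (rating - 1), even items contribute (5 - rating); the sum is scaled by 2.5."""
    odd = responses[:, 0::2] - 1
    even = 5 - responses[:, 1::2]
    return float(((odd.sum(axis=1) + even.sum(axis=1)) * 2.5).mean())

example = np.array([[4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
                    [5, 1, 4, 2, 4, 1, 5, 2, 5, 1]])
print(f"Mean SUS score: {sus_score(example):.1f}")   # scores above ~70 are conventionally read as acceptable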
Background: Exergaming refers to video gaming with/without virtual reality that requires the use of physical activity during gameplay, and has been utilized as an emerging type of physical activity in improving older adults’ physical and mental health. Exergaming can also be considered as esports when a competitive and interactive element is embedded in the gameplay. To date, the impact of exergaming-based esports on older adults’ health and well-being has been less investigated. Objective: This study aims to examine the effectiveness of an exergaming-based esports intervention program in promoting older adults’ physical, psychological, and cognitive health outcomes in Hong Kong. Methods: A total of 54 older adults were recruited and 48 (male = 12; female = 36) were retained for data analysis (six did not attend the post-test). All participants were allocated to either an esports group (EG = 24) or a control group (CG = 24). EG participants were invited to participate in an eight-week exergaming-based esports intervention program consisting of 16 training sessions to learn and play the Nintendo Switch™ Fitness Boxing game. A fitness boxing competition was embedded in the final three sessions. CG participants, in contrast, were instructed to maintain their normal daily activities. Outcome measures including the Senior Fitness Test, the University of California, Los Angeles (UCLA) Loneliness Scale (ULS-8), the Chinese version of the Physical Activity Enjoyment Scale (PACES), the Number Comparison Test (NCT), the Trail Making Test (TMT), and the Short Form-36 (SF-36) Health Survey were used to assess physical, psychological, and cognitive conditions. A repeated-measures ANCOVA was conducted, controlling for baseline values and demographic covariates. Results: The results showed that after the 8-week intervention, EG participants had better lower body strength, higher aerobic endurance, higher enjoyment level, and higher cognitive functioning than those in the CG. Conclusions: This study provides a theoretical contribution by filling the research gap regarding the beneficial effects of exergaming-based esports in enhancing older adults’ health in Hong Kong. Game designers are encouraged to design game types with competitive and interactive elements for older adults to play, thereby promoting their emotional and cognitive well-being. Clinical Trial: The trial design was registered on the Chinese Clinical Trial Registry (ChiCTR) on 13 November 2024 (TRN: ChiCTR2400092284). This study was retrospectively registered, as registration took place after the first participant was enrolled.
Background: The rise of AI and accessible audio equipment has led to a proliferation of recorded conversation transcripts datasets across various fields. However, automatic mass recording and transcription often produce noisy, unstructured data. First, these datasets naturally include unintended recordings, such as hallway conversations, background noise and media (e.g., TV programs, radio, phone calls). Second, automatic speech recognition (ASR) and speaker diarization errors can result in misidentified words, speaker misattributions, and other transcription inaccuracies. As a result, large conversational transcript datasets require careful preprocessing and filtering to ensure their research utility. This challenge is particularly relevant in behavioral health contexts (e.g., therapy, treatment, counselling): while these transcripts offer valuable insights into patient-provider interactions, therapeutic techniques, and client progress, they must accurately represent the conversations to support meaningful research. Objective: We present a framework for preprocessing and filtering large datasets of conversational transcripts and apply it to a dataset of behavioral health transcripts from community mental health clinics across the United States. Within this framework we explore tools to efficiently filter non-sessions – transcripts of recordings in these clinics that do not reflect a behavioral treatment session but instead capture unrelated conversations or background noise. Methods: Our framework integrates basic feature extraction, human annotation, and advanced applications of large language models (LLMs). We begin by mapping transcription errors and assessing the distribution of sessions and non-sessions. Next, we identify key features to analyze how outliers help in characterizing the type of transcript. Notably, we use LLM perplexity as a measure of comprehensibility to assess transcript noise levels. Finally, we use zero-shot LLM prompting to classify transcripts as sessions or non-sessions, validating LLM decisions against expert annotations. Throughout, we prioritize data security by selecting tools that preserve anonymity and minimize the risk of data breaches. Results: Our findings demonstrated that basic statistical outliers, such as speaking rate, are associated with transcription errors and are observed more frequently in non-sessions versus sessions. Specifically, LLM perplexity can flag fragmented and non-verbal segments and is generally lower in sessions (permutation test mean difference = -258, p<0.05), thus can serve as a filtering aiding tool. Additionally, LLM algorithms have shown an ability to distinguish between sessions and non-sessions with high validity (κ=0.71), while also capturing the nature of the meeting. Conclusions: This study’s hybrid approach effectively characterizes errors, evaluates content, and distinguishes different text types within unstructured conversational datasets. It provides a foundation for research on conversational data, providing key methods and practical guidelines that serve as crucial first steps in ensuring data quality and usability, particularly in the context of mental health sessions. We highlight the importance of integrating clinical experts with AI tools while prioritizing data security throughout the process.
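A minimal sketch of two building blocks described above: language-model perplexity as a comprehensibility proxy for transcript excerpts, and Cohen's kappa for agreement between LLM labels and expert annotations. It assumes the transformers and torch packages, uses the small GPT-2 model purely for illustration, and the excerpts and labels are invented; it is not the authors' pipeline.

# Sketch of transcript-level perplexity and agreement checks (illustrative, not the authors' setup).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from sklearn.metrics import cohen_kappa_score

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Language-model perplexity of a transcript excerpt; higher values suggest fragmented or noisy text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

print(perplexity("Thanks for coming in today. Last week we talked about your sleep routine."))
print(perplexity("uh the the [inaudible] channel seven weather next right back after"))

# Agreement between LLM-based session/non-session labels and expert annotations
llm_labels    = ["session", "non-session", "session", "session", "non-session"]
expert_labels = ["session", "non-session", "session", "non-session", "non-session"]
print("Cohen's kappa:", round(cohen_kappa_score(llm_labels, expert_labels), 2))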
Background: Artificial intelligence is increasingly integrated into clinical practice to enhance decision-making, diagnosis, and patient care. However, the diversity and complexity of AI-based clinical decision support systems demand rigorous methodological and ethical evaluation to ensure their safety, effectiveness, and equity in real-world healthcare settings. Objective: To identify, characterize, and critically analyze existing evaluation systems and methodological frameworks used to assess AI models in clinical practice, with a focus on technical performance, clinical applicability, and bioethical considerations. Methods: We conducted a systematic review following PRISMA guidelines. We searched PubMed/MEDLINE, Embase, Scopus, and Web of Science from January 2013 to April 2024. We included studies describing evaluation frameworks or systems designed to assess AI-based clinical decision support systems in real-world clinical contexts. Data extraction included methodological characteristics, validation approaches, performance metrics, and ethical dimensions. The included frameworks were mapped and analyzed across five domains: validation strategy, reporting standards, clinical applicability, healthcare system integration, and ethical criteria. Results: A total of 24 articles were included. Most frameworks emphasized technical validation and performance metrics (e.g., accuracy, AUC), with fewer addressing prospective or external validation. Only a minority incorporated real-world implementation strategies or ethical dimensions such as transparency, equity, or patient autonomy. Regulatory guidance (e.g., from FDA or EU AI Act) was inconsistently referenced. Common gaps included lack of standardized outcome measures and insufficient stakeholder engagement, particularly from patients and healthcare providers. Conclusions: Current evaluation systems for AI models in clinical practice are heterogeneous and often incomplete, with limited emphasis on ethical and health systems integration. There is a critical need for standardized, multidimensional frameworks that encompass technical rigor, clinical relevance, and ethical accountability. A comprehensive and integrative approach is essential to ensure the safe, effective, and equitable deployment of AI in healthcare. Clinical Trial: PROSPERO ID 1019640
Background: Catheter-associated urinary tract infections (CAUTIs) are among the most common healthcare-associated infections, significantly affecting patient outcomes and healthcare costs. Nurses play a pivotal role in CAUTI prevention due to their direct involvement in catheter care. However, evidence suggests notable gaps in knowledge and practice regarding CAUTI among nurses. Objective: To assess the impact of an educational intervention on nurses’ knowledge and attitudes toward the prevention of catheter-associated urinary tract infections. Methods: A quasi-experimental pre-post design was employed in a governmental tertiary hospital over four months. A total of 90 registered nurses from medical, surgical, and intensive care units participated. The intervention comprised simulation and case study-based training sessions. Knowledge and attitude were measured using a structured questionnaire before and after the intervention. Statistical analyses included paired t-tests, correlation analysis, and ANOVA using SPSS v16. Results: The mean overall knowledge score significantly improved from 19.2±5 pre-intervention to 31.5±4.8 post-intervention (p<0.001). All knowledge domains showed significant gains except for the policy and guidelines domain (p=0.12). Satisfaction with learning (r=0.82) and self-confidence (r=0.75) showed strong positive correlations with knowledge gains. Years of experience (r=0.35, p=0.005) and educational level (p=0.008) were significantly associated with knowledge improvement, while age, gender, nationality, and unit of work were not. Conclusions: Educational interventions using simulation and case-based strategies significantly enhanced nurses’ knowledge and confidence in CAUTI prevention. Continuous professional education is essential to improve clinical practice and reduce infection rates in healthcare settings.
Background: Sialorrhea, or excessive salivation, can be a major problem during a number of dental operations. It can impair visibility, increase chair time, and potentially jeopardize the effectiveness of treatment. To manage this condition, pharmacological agents such as atropine sulphate are frequently employed; however, little research has examined natural alternatives such as Acacia catechu for this purpose. Objective: To compare and evaluate the effectiveness of atropine sulphate and Acacia catechu in reducing salivary secretion in patients undergoing restorative dental procedures. Methods: A randomized controlled trial will be conducted involving 160 participants undergoing restorative dental treatments. Participants will be randomly assigned to receive either atropine sulphate or Acacia catechu. Salivary flow will be measured pre- and post-intervention. Statistical analysis will be performed using paired t-tests and ANOVA. This protocol outlines the methodology and planned analyses. Results: It is anticipated that both interventions will reduce salivary flow, with atropine sulphate demonstrating greater efficacy but more side effects, and Acacia catechu offering a better side-effect profile and higher patient acceptability. Conclusions: The trial will provide comparative evidence on the efficacy and safety of a pharmacological and a natural anticholinergic agent. If Acacia catechu proves effective, it may offer a viable, better-tolerated alternative for salivary control in clinical dental settings. Clinical Trial: This study has been submitted to the Clinical Trials Registry – India (CTRI) and is currently under review (Acknowledgment Number: REF/2025/04/104969). The trial will be updated with the final registration number once approved.
Background: Outdoor play has always been a fundamental part of childhood. Children’s participation in outdoor play connects them to nature and the land and supports their role in the natural world. Early learning and child care (ELCC) centres provide important opportunities for outdoor play; however, barriers to the provision of outdoor play opportunities exist, including educator attitudes, existing policies and procedures, outdoor space limitations, and adverse weather conditions. Objective: The PROmoting Early Childhood Outside (PRO-ECO) 2.0 study is a community-based research partnership with Indigenous Knowledge Keepers and Elders, Indigenous and early childhood organizations, early childhood education faculty, ELCC centres and families, aiming to expand outdoor play in ELCC centres. This paper provides a detailed overview of the community-based design process, guided by the 5 R’s (Respect, Relevance, Responsibility, Reciprocity and Relationship), and the resulting study protocol for the mixed methods wait-list control cluster randomized trial. Methods: The PRO-ECO program and study protocol are implemented in partnership with 10 ELCC centres delivering licensed full-day, year-round care to children aged 2.5-6 years in rural and urban areas of British Columbia, Canada. The PRO-ECO program includes four components to address the common barriers to outdoor play in ELCC settings. Primary outcome measures include the proportion and diversity of observed nature play behaviour during dedicated outdoor times at ELCC centres, as measured through observational behaviour mapping. Secondary outcomes include changes in educator attitudes, quality of ELCC outdoor play space, and children’s perspectives of their experiences at ELCC centres. Outcome data are collected at baseline and at 6 and 12 months post-baseline. The perspectives of the community (educators, children, families) on the project are assessed qualitatively to understand the acceptability and effect of the PRO-ECO program. Mixed-effect models will test the effect of the PRO-ECO program on quantitative outcomes. Qualitative data will support interpretation of quantitative findings and provide evidence on project acceptability. Results: Participant recruitment for this study began in August 2023 and data collection was completed at participating ELCC centres in March 2025. A total of 227 children, 90 early childhood educators and 40 family members were recruited to participate in this study. Conclusions: The PRO-ECO 2.0 study uses a rigorous and robust experimental design within a community-based research project. The 5 R’s approach grounded our work in shared values, disrupting traditional academic power relations and weaving together Indigenous and Western worldviews in the context of academic research. Clinical Trial: NCT05626595
Background: Most older Americans have not saved enough to cover long-term care costs. Medicaid, a public healthcare program for low-income individuals, can help Americans with qualifying care needs pay for assistance in a nursing home or for services in the home and community. Determining financial eligibility for Medicaid is complicated, and the application process is often managed by family caregivers with limited knowledge of Medicaid programs. Objective: A one-stop solution is needed to help family caregivers plan for the cost of long-term care services and learn about getting help paying for services through Medicaid. We developed a web application that (1) educates informal caregivers about Medicaid programs and eligibility criteria, (2) informs them about the cost of home and institutional care in their local area with and without Medicaid coverage, and (3) uses a custom algorithm to provide personalized financial eligibility information based on the care recipient’s income, assets, and monthly spending. Methods: We first interviewed aging services providers and informal family caregivers, then developed a web application that was refined based on user experience interviews with English- and Spanish-speaking caregivers. In the final validation phase, asynchronous usability sessions were recorded with 109 informal caregivers. Participants completed a series of tasks in which they viewed animated Medicaid “explainer” videos, input financial information enabling the application to determine the care recipient’s eligibility for Medicaid, used a care cost calculator to estimate the regional cost of home and institutional care services, and completed a Medicaid knowledge quiz before and after using the website. Results: After engaging with the website and watching the videos, scores on the Medicaid knowledge quiz increased by 61.2% (t=12.9, p<.001). Participants found it easy to enter the care recipient’s financial information to determine Medicaid eligibility (mean 5.9 out of 7, SD 1.34) and perceived the care cost calculator as very helpful (mean 6.3 out of 7, SD 1.19). The website received a very high System Usability Scale rating of 88.3 out of 100 (SD 13.05). Caregivers expressed a desire for more education on complex financial concepts that affect Medicaid eligibility and asset preservation. Conclusions: A comprehensive Medicaid planning website can significantly improve caregivers’ knowledge of Medicaid and provide them with a personalized roadmap for accessing care services. The custom algorithm powering the Medicaid eligibility determination could be further refined to account for state-based exceptions. This application may reduce caregiver burden and help support the long-term care planning process.
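For illustration only, the kind of financial screening logic described above can be sketched as follows. The function, the single-applicant framing, and the default income and asset limits are hypothetical placeholders, not the study's algorithm or any state's actual Medicaid rules.

```python
def medicaid_ltc_screen(monthly_income, countable_assets, monthly_care_costs,
                        income_limit=2800.0, asset_limit=2000.0):
    """Toy long-term-care Medicaid screen. The limits are illustrative defaults,
    not real policy values; actual eligibility depends on state rules, exemptions,
    spend-down provisions, and look-back periods."""
    income_ok = monthly_income <= income_limit
    # Some pathways consider income net of qualifying care spending ("spend-down")
    spend_down_ok = (monthly_income - monthly_care_costs) <= income_limit
    assets_ok = countable_assets <= asset_limit
    return (income_ok or spend_down_ok) and assets_ok

print(medicaid_ltc_screen(3200, 1500, 900))  # True via the spend-down pathway
```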
Background: Selecting suitable healthcare professionals remains a challenge for patients due to information asymmetry and limited guidance provided by online consultation platforms. Existing doctor recommendation systems often overlook the importance of "patient expectations" in assessing medical service quality, leading to suboptimal matching. Objective: To address this gap, we propose a personalized doctor recommendation system that integrates patient preferences and doctor profiles using the SERVQUAL framework. Methods: This system builds comprehensive bilateral profiles through feature extraction and sentiment analysis of user data from an online health community. Key dimensions, including tangibility, reliability, responsiveness, empathy, and assurance, are operationalized alongside additional factors like price and disease specialization.
A matching algorithm is developed to align patient expectations with doctor service attributes systematically. Results: Evaluation through scenario-based simulations demonstrated high match accuracy and high participant satisfaction. Conclusions: This approach enhances recommendation accuracy, reduces decision-making complexity, and improves user experiences on online healthcare platforms, optimizing resource allocation and patient outcomes.
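As a rough illustration of how bilateral profiles built on the SERVQUAL dimensions might be matched, the sketch below ranks doctors by the weighted gap between a patient's expectation vector and each doctor's service-attribute vector. The dimension names come from the abstract; the scoring rule, weights, and example values are assumptions, not the authors' algorithm.

```python
DIMENSIONS = ["tangibility", "reliability", "responsiveness", "empathy", "assurance"]

def match_score(patient_expectations, doctor_profile, weights=None):
    """Higher score = smaller weighted amount of unmet patient expectation."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    unmet = sum(weights[d] * max(patient_expectations[d] - doctor_profile.get(d, 0.0), 0.0)
                for d in DIMENSIONS)
    return -unmet  # negate so that less unmet expectation ranks higher

patient = {"tangibility": 3, "reliability": 5, "responsiveness": 4, "empathy": 4, "assurance": 5}
doctors = {
    "dr_a": {"tangibility": 4, "reliability": 4, "responsiveness": 5, "empathy": 3, "assurance": 5},
    "dr_b": {"tangibility": 3, "reliability": 5, "responsiveness": 3, "empathy": 4, "assurance": 5},
}
ranked = sorted(doctors, key=lambda d: match_score(patient, doctors[d]), reverse=True)
print(ranked)  # ['dr_b', 'dr_a'] under these toy profiles
```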
Background: The World Health Organization (WHO) reported in 2020 that approximately 50% of all mental health disorders in adolescents manifest before the age of 14. However, the literature on mental well-being and programmes designed and implemented by nurses for adolescents in low- and middle-income countries (LMICs) is limited. This scoping review explores the development and implementation of psychosocial support programmes targeting high school learners in LMICs. Objective: This prospective scoping review will explore how psychosocial support programmes have been developed and implemented for high school learners in LMICs. Methods: Using the Joanna Briggs Institute (JBI) scoping review framework, primary research articles will be identified through systematic searches of ERIC, MEDLINE, Science Direct, PubMed, and PsycINFO. Grey literature will also be sourced from Google Scholar. Two independent reviewers will apply pre-determined inclusion criteria to select studies. Data will be charted, analyzed narratively, and presented in tables and figures. Results: Data collection started in January 2024. Results have not yet been published. Conclusions: This scoping review will synthesize evidence on psychosocial support programmes in LMICs and guide the development of targeted interventions to address the mental health needs of high school learners. Clinical Trial: Additional supplemental material is available from the Open Science repository.
Background: The positive effects of multidisciplinary rehabilitation programmes tend to fade over time due to low long-term patient adherence. Objective: We aimed to evaluate the impact of a smartphone application on adherence to an exercise programme for people with chronic low back pain (CLBP) at 6 months. Methods: One hundred and ten people with CLBP were included and randomised into two groups: 54 in the intervention group (IG) received education on the use of the application in addition to usual care (a 3-week multidisciplinary rehabilitation programme with self-management education) and 56 in the control group (CG) who received only usual care. The Exercise Adherence Rating Scale (EARS) was the primary outcome. Secondary outcomes were pain (Numeric Rating Scale), disability (Oswestry Disability Index), barriers and facilitators to performing physical activity (Evaluation of Physical Activity Perception), physical capacity (battery of tests) and qualitative adherence (correctness of exercise execution). Statistical analyses were performed according to the intention-to-treat principle. A linear mixed model compared the primary endpoint between groups at 6 months. Results: 71/110 participants were evaluated at 6 months. Adherence did not differ between groups, nor did pain, disability or barriers and facilitators to physical activity, except for the motivation criterion. Physical capacity test results (6MWT, cycle ergometer, Shirado-Ito, plank) and qualitative adherence differed between groups in favour of the IG. All outcomes improved from baseline to 6 months in the IG but not in the CG. Conclusions: The smartphone application did not impact adherence to an exercise programme at 6 months in individuals with CLBP. Similar results were found for pain and function. Nevertheless, the application could be a useful self-management tool in view of the positive effects on pain, function, physical capacity and qualitative adherence. Clinical Trial: ClinicalTrials.gov: NCT04264949
https://clinicaltrials.gov/study/NCT04264949
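For readers reproducing this kind of analysis, a linear mixed model comparing the primary endpoint between groups at follow-up can be specified as below. The data are simulated and the variable names (ears, group, visit) are placeholders, not the trial dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: 20 participants x 2 visits (illustrative only)
rng = np.random.default_rng(1)
ids = np.repeat(np.arange(20), 2)
visit = np.tile(["baseline", "6m"], 20)
group = np.repeat(np.where(np.arange(20) % 2 == 0, "IG", "CG"), 2)
ears = rng.normal(40, 8, size=40) + (group == "IG") * (visit == "6m") * 3

df = pd.DataFrame({"id": ids, "visit": visit, "group": group, "ears": ears})

# Random intercept per participant; the group-by-visit term carries the follow-up contrast
fit = smf.mixedlm("ears ~ group * visit", data=df, groups=df["id"]).fit()
print(fit.summary())
```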
Background: Unskilled birth delivery significantly contributes to maternal and neonatal mortality in Sub-Saharan Africa, especially Nigeria, due to cultural beliefs, poverty, poor health access, and weak policies. Despite efforts to promote skilled attendance, many women still use traditional birth attendants (TBAs) and home deliveries. This study explores the socio-demographic, cultural, and systemic factors driving this trend, offering evidence for better policies and health interventions. Objective: This study examined the socio-demographic and socio-cultural barriers to the utilization of skilled delivery services among women of reproductive age in Nigeria. Methods: A cross-sectional design utilizing both quantitative surveys and qualitative interviews was employed. The study involved 1,200 expectant and recently delivered women across urban, semi-urban, and rural regions in Nigeria. Data on socio-demographics, beliefs, access factors, and healthcare usage were collected. Policy documents and intervention records were reviewed, while focus groups provided depth to cultural and systemic themes. Descriptive and inferential statistics were applied using SPSS, and thematic analysis was used for qualitative data. A literature triangulation approach was used to validate findings with existing research. Results: The study revealed that low maternal education, poverty, and rural residence strongly predicted unskilled delivery service usage. Cultural norms that regard childbirth as a domestic or spiritual event influenced avoidance of hospitals. Access barriers included poor transport, cost, and distrust in formal healthcare. Geographic inequality was evident, with rural regions lacking health infrastructure. Policy review showed limited reach and weak enforcement of maternal care programs. However, when community-based midwives or mobile clinics were available, skilled birth attendance improved significantly. Conclusions: The persistence of unskilled deliveries is a multifaceted issue driven by intersecting socio-cultural, economic, geographic, and institutional factors. Despite policy efforts, gaps remain in cultural sensitivity, resource allocation, and infrastructure coverage. To address maternal health effectively, interventions must be locally adapted, multidimensional, and equity-focused. To address unskilled delivery use, maternal health education should leverage community programs with local languages and cultural context. Rural healthcare infrastructure must expand via mobile clinics and trained midwives to improve access. Skilled delivery costs should be subsidized or covered by insurance to remove financial barriers. Traditional birth attendants could be trained and integrated into the formal health system under supervision. Finally, maternal health policies require regular review, adequate funding, and strict monitoring to ensure impact. These steps are vital to reducing maternal mortality in Nigeria and Sub-Saharan Africa. Unskilled delivery service utilization represents a critical barrier to maternal and neonatal health improvements in Nigeria and Sub-Saharan Africa. Addressing this issue through targeted socio-cultural, structural, and policy interventions is essential to reduce preventable maternal deaths and achieve Sustainable Development Goal 3 on maternal health.
Background: Mental health disorders, particularly anxiety, constitute a significant burden in Australia, affecting 1 in 5 individuals annually. While telehealth has emerged as a strategic solution to expand access, evidence on its economic impact within the Australian context remains limited. Objective: This study evaluates the cost-effectiveness and budget impact of telehealth-delivered psychological services by clinical psychologists compared to in-person care and no treatment among adults with mental health disorders in Australia. Methods: A retrospective analysis was conducted using Medicare Benefits Schedule data from April 2020 to June 2022. A Markov cohort model simulated health transitions over a five-year horizon, incorporating healthcare payer and societal perspectives. Health outcomes were measured in quality-adjusted life years (QALYs). Results: Telehealth services were cost-effective, yielding an ICER of AUD $5,395/QALY compared to no treatment and dominating in-person services from a societal perspective due to reduced indirect costs. The estimated national budget impact was AUD $1.40 per member per month. Sensitivity analyses confirmed the model’s robustness. Conclusions: Telehealth for mental health is both cost-effective and cost-saving in Australia. These findings support the continued funding and integration of telehealth into national mental health policy to improve access and equity.
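For context, the incremental cost-effectiveness ratio (ICER) quoted above follows the standard definition (stated generically here, not recomputed from the study's data):

\[ \mathrm{ICER} = \frac{\Delta C}{\Delta E} = \frac{C_{\text{telehealth}} - C_{\text{comparator}}}{\mathrm{QALY}_{\text{telehealth}} - \mathrm{QALY}_{\text{comparator}}} \]

An intervention is said to dominate its comparator when it is simultaneously less costly and more effective (\(\Delta C < 0\) and \(\Delta E > 0\)), which is the sense in which telehealth is reported to dominate in-person services from the societal perspective.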
Background: With technological advancement, the internet has become the most convenient and important source of information for many young people, especially with the growth of mobile health (mHealth) platforms, which remove many of the hurdles associated with young people’s access to sexual and reproductive health (SRH) information and services. There is therefore a gradual shift from repeated in-person visits to health facilities toward more convenient access with a single tap. In addition, unpleasant experiences such as the attitudes of healthcare providers, the distance to health facilities, and cost further deter youth access to SRH services in Ghana. In response, digital platforms, including the ‘You Must Know App’ by the Ghana Health Service (GHS), the Flow App, and other digital tools, were introduced to facilitate access to quality and inclusive SRH services among Ghanaian youth. Objective: The study assesses the viability of digital tools as a means of SRH access among young people in the Greater Accra Region. Methods: A cross-sectional descriptive design was employed. Following informed consent, a structured questionnaire was administered through an online platform to collect information on socio-demographic and background characteristics, knowledge of available digital health platforms, sources of SRH and other health-related information and services, level of knowledge of mHealth, and challenges to access. Results: The study found that 53.5% of participants had never used any digital health platform. Specifically, 66.8% indicated no knowledge or awareness of the ‘You Must Know App’ and had not used any mobile application to access healthcare before. In contrast, 43.1% stated that they had used mobile health applications, while 3.5% of respondents did not know whether they had ever used such applications. The results further suggest that, despite technological advances in Ghana and other parts of Africa, knowledge of mHealth tools remains low, indicating a need for sensitization about mHealth platforms, as a majority of Ghanaian youth may not be aware of these applications at all. Conclusions: This study highlights emerging approaches to accessing SRH information and services among youth in Ghana, particularly with the advent of digital technology. It also reveals gaps and challenges young people face when using the available mHealth platforms as a source of SRH information in Ghana, particularly the You Must Know App, and suggests key innovations that could enhance young people’s experiences with mHealth tools in Ghana.
Background: Colorectal cancer is the third most common type of cancer worldwide, and accurate segmentation of colorectal polyps plays a crucial role in early screening and treatment. Existing polyp segmentation methods are predominantly based on Convolutional Neural Networks (CNNs) and Transformers. However, CNNs suffer from limited receptive fields, leading to insufficient global context modeling, while Transformers, despite their attention mechanisms offering some insight into model focus, still face challenges in overall interpretability. Objective: To address these limitations, this paper proposes a novel multi-scale polyp segmentation network, MSSNet (Multi-Scale Segmentation Network). The proposed model integrates the local detail sensitivity of CNNs with the global semantic perception capabilities of Transformers to tackle common challenges in polyp segmentation, such as scale heterogeneity, ambiguous boundaries, and the dominance of small-sized targets, while also enhancing model interpretability. Methods: MSSNet employs Pyramid Vision Transformer v2 (PVT v2) as the encoder to hierarchically extract multi-scale features. A newly designed Multi-Scale Attention Decoder (MSAD) is introduced to improve the recognition of blurry edges and small polyps through hierarchical feature fusion and an enhanced attention mechanism. The encoder and decoder are connected via a Multi-Scale Modulation (MSM) module, which effectively enhances cross-level feature interaction. In addition, a fusion loss function with adaptive weighting is proposed to emphasize critical regions and enhance the model's sensitivity to local details. To mitigate the “black-box” nature of deep learning models in medical image segmentation, we introduce a novel interpretability method, Grad-SAM (Gradient Segmentation Activation Map), which provides explicit visual attribution of segmentation results, thereby improving decision transparency. Results: Extensive experiments on four benchmark polyp datasets (EndoScene, Kvasir, Piccolo, and CVC-ClinicDB) demonstrate that MSSNet achieves Dice scores of 93.32%, 91.56%, 87.03%, and 84.11%, respectively, with a computational cost of 4.62 GFLOPs, outperforming existing state-of-the-art methods in both accuracy and efficiency. Conclusions: These results suggest that MSSNet holds great promise for real-world clinical decision support in colorectal polyp diagnosis.
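For readers unfamiliar with the headline metric, the Dice scores above quantify the overlap between predicted and reference masks; a minimal NumPy version (ours, not the MSSNet code) is:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 overlapping pixels, 4 predicted pixels, 3 reference pixels -> Dice ~ 0.857
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_score(pred, target))
```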
Background: Intensive longitudinal designs support temporally granular study of processes, making methods such as ecological momentary assessment (EMA) increasingly common in medical and behavioral science. However, the repetitive and intensive measurement strategies associated with these designs increase participant burden, which limits the breadth and precision of EMA surveys. This is particularly problematic for complex clinical phenomena, such as suicide risk, which research has shown is multidimensional and fluctuates over narrow time intervals (e.g., hours). To overcome this limitation, we proposed the Computerized Adaptive Test for Suicide Risk Pathways (CAT-SRP), which supports the simultaneous assessment of multiple empirically informed risk domains and facilitates personalized survey content. Objective: The objective of this study is to develop, calibrate, and pilot the first multidimensional computerized adaptive test for suicidal thoughts and related psychosocial risk factors in intensive longitudinal designs like EMA. Methods: A web-based assessment platform was developed to adaptively administer the CAT-SRP. CAT-SRP items were modified from existing validated instruments to support administration in intensive longitudinal designs. The item bank was developed in line with major ideation-to-action theories of suicide and consultation with experts outside the study team. Exploratory item factor analysis was used to identify the dimensionality of the item bank. Item parameters were calibrated using a multidimensional graded response model in a large cross-sectional community sample (N = 1,759, 36.33% with a history of suicidal thoughts). Following calibration, the CAT-SRP was evaluated in an EMA study of participants with a past-month history of suicidal thoughts (N = 29 across 2,134 observations). Adaptive testing utilized D-optimal item selection, a dual variable-length stopping criterion, and maximum a posteriori (MAP) scoring. Descriptive statistics and mixed effects models were used to examine CAT-SRP performance (e.g., efficiency, survey overlap) and relationships among CAT-SRP domain scores. Results: The calibration study identified two suicidal thought domains (active and passive thoughts) and twelve risk factor domains: humiliation, loneliness, anger, pain, defeat, impulsivity/negative urgency, entrapment, distress tolerance, perceived burdensomeness, thwarted belongingness, aggression, and a hope/method factor. Domain information was highest across average to high levels of domain scores. Study 2 results suggested that the CAT-SRP (1) administered surveys with low to moderate item overlap, (2) incurred low participant burden, and (3) may improve near-term prediction of suicidal thoughts relative to traditional EMA measurement. Most EMA surveys reached the maximum length, 50 questions, highlighting a need to refine selection and stopping rules. Conclusions: The CAT-SRP effectively personalized EMA survey content to respondents, which reduces the repetitiveness and perceived burden of intensive longitudinal research designs. Continuous domain scores from the multidimensional CAT (MCAT) also provided more nuanced measurement, compared to traditional approaches that struggle with zero-inflation in EMA, and appeared to produce stronger predictive relationships. Overall, the CAT-SRP demonstrates strong methodological advantages of utilizing CAT for intensive longitudinal data collection.
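To make the "dual variable-length stopping criterion" concrete, a simplified version is sketched below. The precision threshold and minimum length are illustrative assumptions; the 50-item maximum is taken from the abstract.

```python
def should_stop(domain_standard_errors, n_items_administered,
                se_threshold=0.4, min_items=5, max_items=50):
    """Stop the adaptive survey when every latent domain is estimated precisely
    enough (all posterior standard errors below a threshold) or when the maximum
    survey length is reached, whichever comes first."""
    if n_items_administered >= max_items:
        return True
    if n_items_administered < min_items:
        return False
    return all(se < se_threshold for se in domain_standard_errors)

print(should_stop([0.35, 0.32, 0.50], 20))  # False: one domain is still imprecise
print(should_stop([0.35, 0.32, 0.50], 50))  # True: maximum length reached
```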
Background: Diffusion models have shown great promise in generating high-fidelity images, particularly in computer vision. However, their application to diverse medical imaging modalities remains underexplored. Each MRI modality presents unique structural and textural features, making it a challenging but important testbed for generative model evaluation. Objective: This study aims to evaluate the performance of diffusion models across a range of MRI modalities, including brain, chromatin, lung, kidney, spine, and heart. The goal is to understand the ability of diffusion models to capture and replicate modality-specific details critical for medical interpretation. Methods: We conducted a series of experiments using publicly available and synthetic MRI datasets representing six distinct modalities. For each modality, a dedicated diffusion model was trained independently to assess its capacity for high-quality image generation. Results: Visual inspection confirmed that modality-specific anatomical features were well-preserved in generated outputs. Conclusions: Diffusion models can effectively learn and replicate the unique characteristics of various MRI modalities, though performance varies depending on data complexity and quality.
Background: The widespread use of digital technologies—especially the internet and social media—has raised growing concerns about their impact on mental health. While self-regulation has been proposed as a protective factor, little is known about how distinct psychological profiles based on self-regulatory and technology use patterns relate to well-being. Person-centered approaches such as Latent Profile Analysis (LPA) may offer deeper insights, particularly in underrepresented populations. Objective: This study aimed to identify latent psychological profiles based on self-regulation, nomophobia, and problematic use of the internet and social media and to examine their association with mental health outcomes in a Colombian sample. Additionally, the predictive roles of age and gender on class membership were explored.
Methods: A total of 453 participants aged 12 to 57 years (M = 21.03, SD = 8.41; 57% female) completed validated measures of self-regulation, nomophobia, internet use, social media use, and psychological health (GHQ-12). Latent Profile Analysis was conducted using standardized scores of continuous variables. Model fit was assessed using BIC, entropy, and BLRT. Differences in psychological health across latent classes were examined through ANOVA and regression models. A multinomial logistic regression tested the predictive value of age and gender on class membership. Results: The optimal solution revealed four distinct latent profiles (entropy = 0.85):
Class 1 (adaptive): high self-regulation, low nomophobia, and low ICT use; presented the best psychological health. Class 4 (vulnerable): low self-regulation, high nomophobia, and high ICT use; reported the poorest health outcomes. Classes 2 and 3 displayed intermediate profiles, with Class 3 showing slightly better health than Class 4.
Differences in psychological health across classes were statistically significant (ANOVA, p < .001). Age and gender were significant predictors of class membership: younger females were more likely to belong to Class 1, whereas older males were more likely to be classified into Classes 3 and 4. Conclusions: LPA enabled the identification of distinct psychological profiles that vary in mental health outcomes and digital vulnerability. Self-regulation emerged as a central protective factor, suggesting the importance of tailored digital interventions to enhance regulatory capacities. These findings reinforce the value of person-centered approaches and highlight the need for scalable strategies to mitigate the mental health risks associated with problematic ICT use in Spanish-speaking populations.
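Latent Profile Analysis of this kind is usually run in dedicated software, but a common open-source approximation is a Gaussian mixture model over the standardized indicators, with the number of profiles chosen by BIC as in the model-selection step described above. The snippet below uses simulated data as a stand-in for the study's variables.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated standardized scores for self-regulation, nomophobia, internet use, social media use
rng = np.random.default_rng(0)
X = rng.normal(size=(453, 4))

# Fit 1-6 profile solutions and keep the one with the lowest BIC
bics = {k: GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
           .fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print(best_k, bics)
```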
Background: Meniscal injuries are prevalent knee pathologies. However, the public increasingly relies on online video platforms for health information, where content quality and reliability vary significantly, posing risks of misinformation, particularly in China with its extensive platform usage. Objective: This study aimed to evaluate the quality (GQS), reliability (mDISCERN), understandability (PEMAT-U), and actionability (PEMAT-A) of meniscal injury video content on major Chinese platforms (Bilibili and Douyin/TikTok). It also sought to identify key predictive factors for these dimensions and, innovatively, to understand user perspectives, engagement patterns, and feedback through sentiment analysis and topic modeling of user comments. Methods: In this cross-sectional study, 200 top-ranked meniscal injury-related videos (100 from Bilibili, 100 from TikTok) were collected using a specific keyword and assessed by medical experts using GQS, mDISCERN, and PEMAT-A/U. Statistical analyses, performed with SPSS 27.0, included descriptive statistics, Mann-Whitney U tests, Spearman correlations, and stepwise regression. Approximately 22,000 user comments were analyzed using a fine-tuned BERT model for sentiment classification and BERTopic for thematic structure mining. Results: TikTok videos exhibited higher engagement metrics but shorter durations (P < .001). For GQS scores, professional sources were significantly higher than non-professional sources (P < .001), though no significant platform difference was found (P = 0.455). Regarding mDISCERN scores, Bilibili was significantly superior to TikTok (P < .05), yet no significant difference was observed between professional and non-professional sources (P = 0.23). PEMAT-U scores were significantly higher on TikTok compared to Bilibili (P < .001), but actionability (PEMAT-A) was consistently low across all platforms and sources, with no significant differences (P > .05). Regression analysis indicated that content reliability was the strongest predictor of quality, while video duration and quality significantly predicted reliability. Comment sentiment was predominantly neutral (72.4%), followed by positive (18.9%), with negative being the lowest (8.7%). Topic modeling revealed "Functional and Rehabilitation Discussion," "Discussion on Disease," and "Discussion on Treatment" as key themes. Conclusions: Content quality and reliability vary on Chinese video platforms regarding meniscal injury. While professional sources provide higher quality content, their reliability is not statistically superior to non-professional sources in this context. A universal deficiency in video actionability across all platforms and sources highlights a critical "understandable but not actionable" gap. Content creators should prioritize information accuracy and actionability to better empower public health management in the digital age.
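The comment-mining pipeline described above can be sketched roughly as follows. The sentiment checkpoint name is a placeholder (the study's fine-tuned BERT model is not public), and BERTopic's multilingual setting stands in for whatever embedding configuration the authors used.

```python
from transformers import pipeline
from bertopic import BERTopic

def analyse_comments(comments):
    """Sentiment classification plus topic modelling for a large list of user
    comments (the study analysed roughly 22,000)."""
    # Hypothetical fine-tuned Chinese BERT checkpoint for 3-class sentiment
    sentiment = pipeline("text-classification",
                         model="my-org/bert-chinese-comment-sentiment")
    sentiments = sentiment(comments, truncation=True)

    # BERTopic for thematic structure mining; multilingual embeddings handle Chinese text
    topic_model = BERTopic(language="multilingual")
    topics, _ = topic_model.fit_transform(comments)
    return sentiments, topic_model.get_topic_info()
```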
Background: Chinese patent ethnomedicines (CPMs) are a form of traditional Chinese patent medicine that originate from the traditional medicines of ethnic minorities and are widely used in clinical practice. However, existing evidence for their application remains unclear. Therefore, to address this gap, this comprehensive scoping review will be performed to provide an overview of the available evidence from Chinese patent ethnomedicine preparations. Objective: This review aims to provide the evidence profile of oral CPMs. This study will elucidate the current state of the evidence with respect to these medicines and identify research gaps. The detailed steps for conducting this review are outlined in this protocol. This review will contribute to a better understanding of CPMs. Methods: This review will include clinical studies of CPMs irrespective of study design. The frameworks described by Arksey and O'Malley, Levac, and the Joanna Briggs Institute will be used to guide the current scoping review. This review will include six steps: (1) identify the research question; (2) collect information about Chinese patent ethnomedicines from national related drug catalogues; (3) search MEDLINE (via PubMed), Embase, Web of Science, the Cochrane Library and Chinese databases from inception to February 2025 to identify relevant publications; (4) screen the literature against the eligibility criteria; (5) extract data by using a predefined standardized data extraction form; and (6) summarize, discuss, analyse, and report the results. We will also present the results via data visualization techniques. Results: We will synthesize data on CPMs by conducting the scoping review, drawing evidence maps, identifying the clinical research characteristics and adverse event (AE) features, and highlighting the limitations and gaps in the literature. We expect to publish the results in 2026. Conclusions: The information obtained through this review could inform future research involving CPMs. Clinical Trial: Review registration: https://osf.io/e763b.
Background: Digital media memes have emerged as influential tools in health communication, particularly during the COVID-19 pandemic. While they offer opportunities for emotional engagement and community resilience, they also act as vectors for health misinformation, contributing to the global infodemic. Despite growing interest in their communicative power, the role of memes in shaping public perception and misinformation diffusion remains underexplored in infodemiology. Objective: This integrative review aims to analyze how memes influence emotional, behavioral, and ideological responses to health crises, and to examine their dual role as both contributors to and potential mitigators of infodemics. The paper also explores strategies for integrating memes into public health campaigns and infodemic management. Methods: Using an integrative narrative approach, this review synthesizes evidence from 14 peer-reviewed studies, including empirical research on social media behavior, misinformation dynamics, and digital health campaigns. The analysis is grounded in infodemiological and infoveillance frameworks as established by Eysenbach, incorporating insights from psychology, media studies, and public health. Results: Memes function as emotionally salient and visually potent carriers of health-related narratives. While they can simplify complex messages and foster adaptive humor during crises, they are also susceptible to distortion, particularly in echo chambers and conspiracy communities. Findings reveal that misinformation-laden memes often leverage humor and disgust to bypass critical thinking, and their viral potential is linked to emotional intensity. However, memes have also been successfully integrated into prebunking strategies, increasing engagement and reducing susceptibility to false claims when culturally tailored. The review identifies key mechanisms that enhance or hinder the infodemiological value of memes, including political orientation, digital literacy, and narrative framing. Conclusions: Memes are a double-edged sword in the context of infodemics. Their integration into infodemic surveillance and digital health campaigns requires a nuanced understanding of their emotional, cultural, and epistemic effects. Public health institutions should incorporate meme analysis into real-time infoveillance systems, apply evidence-based meme formats in prebunking efforts, and foster digital literacy that enables critical meme consumption. Future infodemiology research should further explore the long-term behavioral impacts of memetic misinformation and the scalability of meme-based interventions.
Background: The rise of generative artificial intelligence (gAI) has created both opportunities and challenges in higher education. Although the potential benefits of learning support are widely recognized, little is known about how incoming medical students in Japan perceive and intend to use such technology. Objective: This study investigated the status of gAI usage, learning behaviors, and perceptions of first-year medical students in Japan. Methods: An anonymous online survey was conducted among 118 first-year medical students at Chiba University in April 2025. The questionnaire assessed prior gAI use, willingness to learn, perceptions of gAI, and the intention to use it academically. Likert scales, correlation analyses, and content analyses of free-text responses were used. Results: Of the respondents, 84.7% had prior experience with gAI, primarily for language learning and information gathering. However, only 49.2% had prior learning experiences related to gAI, mostly through informal sources such as web browsing and peer interaction. Students showed a high willingness to learn about gAI (mean score: 4.3/5.0), which correlated with positive perceptions. Despite this interest, attitudes toward using gAI for academic assignments were neutral (mean 3.0/5.0). Content analysis of the open-ended responses revealed three types of attitudes: positive, cautious, and negative. Conclusions: Although most students had used gAI, their limited exposure to formal learning suggests that self-directed experience alone may not foster confidence or informed use. Neutral attitudes and mixed qualitative responses highlight the need for structured gAI literacy education that balances the benefits of gAI with ethical and critical considerations in medical education.
Background: Necrotizing enterocolitis (NEC) is the most common gastrointestinal emergency affecting preterm infants, with high mortality and morbidity. With suboptimal and incomplete methods of prevention of NEC, early diagnosis and treatment can potentially mitigate its impact. This study explores the application of machine learning techniques, specifically Random Forest and Extreme Gradient Boosting (XGBoost), to improve early and accurate diagnosis of NEC and focal intestinal perforation (FIP). Objective: To evaluate the effectiveness of sampling techniques in addressing class imbalance and to identify the optimal machine learning (ML) classifiers for predicting NEC and FIP in preterm infants. Methods: We developed ML models using 49 clinical variables from a retrospective cohort of 3,463 preterm infants, using clinical data from the first two weeks of life as input features. We applied various sampling strategies to address the inherent class imbalance and then combined these sampling strategies with different ML algorithms. Parsimonious models with selected key predictors were evaluated to maintain predictive performance comparable to the full-featured (complex) models. Results: The parsimonious generalized linear model (GLM) with SMOTE sampling achieved an area under the receiver operating characteristic curve (AUROC) of 0.79 for NEC prediction, closely approximating the complex model's AUROC of 0.76. For FIP prediction, parsimonious models of GLM with ADASYN sampling and XGBoost with TOMEK sampling achieved AUROC values exceeding 0.90, comparable to those of the corresponding complex models. For both NEC and FIP, the area under the precision-recall curve (AUPRC) surpassed the respective prevalence rates, indicating strong performance in identifying rare outcomes. Conclusions: We demonstrate that targeted sampling strategies can effectively mitigate class imbalance in neonatal datasets and that simplified models with fewer variables can offer comparable predictive power, enhancing the performance of ML-based prediction models for NEC and FIP.
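A minimal version of one of the sampling-plus-classifier combinations named above (SMOTE with a logistic-regression GLM) is sketched below; oversampling sits inside an imblearn pipeline so it is applied only to training folds, and synthetic data stand in for the cohort.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 49 clinical features, ~5% positive class (rare outcome)
X, y = make_classification(n_samples=3463, n_features=49, weights=[0.95, 0.05],
                           random_state=0)

model = Pipeline([
    ("smote", SMOTE(random_state=0)),            # oversample the minority class
    ("glm", LogisticRegression(max_iter=1000)),  # logistic-regression GLM
])
auroc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auroc.mean())
```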
Background: Workplace stress has emerged as a pressing public health issue in Nigeria, where approximately 75% of employees experience work-related stress, a rate significantly higher than the global average. This stress, exacerbated by systemic labor policy gaps, cultural stigma, and economic instability, contributes to burnout, reduced productivity, and economic losses. Despite emerging HRM interventions, mental health remains underprioritized in organizational strategies, particularly within sectors such as healthcare, banking, construction, and the informal economy. There is a critical need for evidence-based, culturally adapted HRM strategies that address these unique challenges in Nigeria’s workforce. Objective: This study seeks to examine the prevalence and sector-specific drivers of workplace stress in Nigeria, evaluate the effectiveness and limitations of current HRM interventions, identify key socio-cultural and structural barriers hindering mental health program implementation, and propose actionable, evidence-based strategies that are contextually tailored to Nigeria’s diverse workforce. Through a synthesis of localized research and global best practices, the study aims to provide a strategic roadmap for enhancing mental health resilience in Nigerian workplaces. Methods: A narrative review methodology was employed, guided by qualitative synthesis and thematic analysis frameworks. Literature was sourced from global and regional databases (PubMed, PsycINFO, AJOL, Scopus) spanning 2018–2024, including peer-reviewed articles, policy reports, and grey literature. Inclusion focused on empirical and policy studies relevant to Nigerian HRM practices. NVivo 12 was used for thematic coding, and a gap analysis framework was applied to identify unaddressed areas. A total of 42 studies met the inclusion criteria. Expert validation and triangulation with global data enhanced rigor. Results: Burnout rates in Nigeria are among the highest globally, with 35% in healthcare, 32% in retail, and 29% in banking. Women and younger workers face disproportionate stress burdens. HRM strategies such as Employee Assistance Programs (EAPs) and flexible work arrangements showed the highest effectiveness but had limited adoption due to cost, stigma, and infrastructure gaps. Digital mental health tools, though cost-effective, had low uptake (23%) due to digital illiteracy. Barriers included cultural stigma, weak labor policies, leadership apathy, and a lack of return-on-investment (ROI) measurement. Promising strategies identified include faith-based EAPs, peer networks, mobile clinics, and stigma-reduction campaigns, particularly when culturally embedded and supported by community leaders. Conclusions: Workplace stress in Nigeria is a systemic challenge rooted in socio-economic, cultural, and organizational structures. Although several HRM interventions show promise, their effectiveness is hindered by low adoption, poor contextual fit, and limited legal enforcement. Evidence suggests that when mental health strategies are localized and culturally endorsed, via faith leaders, digital tools, or flexible work, they yield improved employee retention, lower absenteeism, and better organizational resilience.
Background: Digital health platforms that integrate patient-reported outcome measures (PROMs) with wound image submissions offer new opportunities for remote wound surveillance. However, the alignment between patient-reported symptoms and physician clinical judgment remains underexplored, particularly in real-world settings. Objective: This study aimed to evaluate the diagnostic performance of PROM-reported wound infection in predicting physician-initiated callbacks and to explore the symptom features associated with patients' perception of infection. Methods: We conducted a retrospective observational study at a tertiary medical center in Taipei, Taiwan. Patients with acute or chronic wounds were enrolled in a chatbot-assisted digital monitoring program between June 30, 2022, and March 1, 2023. Using their mobile devices, patients submitted wound photographs and completed a structured symptom checklist, including indicators such as redness, darkness, and infection. A senior plastic surgeon independently reviewed each image to determine the need for clinical follow-up (callback), which served as the reference outcome. The presence of “infection” in the PROM checklist served as the primary predictor. Sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) were used to assess predictive accuracy. A secondary analysis examined associations between symptom features and infection reporting using logistic regression. Results: Among 2,297 wound image entries from 270 patients, PROM-reported infection showed high sensitivity (94.0%) and an AUC of 0.9575 (95% CI: 0.9502–0.9648) for predicting physician callbacks. In the acute wound subgroup, the AUC remained high (0.9335). Redness was the strongest correlate of infection reporting (OR = 31.6; 95% CI: 23.1–43.2), while skin darkness was negatively associated with perceived infection in acute wounds (OR = 0.415; 95% CI: 0.203–0.850), indicating potential misinterpretation. Conclusions: Patient-reported infection through a digital platform demonstrated high sensitivity in identifying wounds requiring medical attention. However, notable false-negative rates and symptom misinterpretation underscore the need for improved patient education and real-time decision support. These findings support the utility of PROM-based systems for remote triage and highlight the importance of integrating patient-clinician feedback loops to enhance wound care safety.
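The diagnostic metrics reported above can be reproduced from a 2x2 cross-tabulation of the PROM flag against the physician-callback reference; a generic sketch with toy data (not the study's) is:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# y_true: physician callback (reference standard); y_pred: PROM-reported infection flag
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])  # toy labels
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_pred)  # for a binary predictor this equals (sens + spec) / 2
print(sensitivity, specificity, auc)
```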
Background: Adolescent depression has negative health and economic consequences both in the short and long term. Interventions aimed at improving parenting skills to prevent or reduce depressive symptoms in adolescents show promise, but there has been limited investigation of the cost-effectiveness of online parenting interventions. Objective: To estimate the economic costs, health-related quality of life outcomes and cost-effectiveness of an online personalised parenting intervention to prevent affective disorders in high-risk adolescents compared to a standard educational package. Methods: A cost-utility analysis was conducted based on data from a randomised controlled trial. The base-case analysis took the form of an intention-to-treat analysis conducted from a UK public sector perspective and separately from a societal perspective. Costs (£, 2022–2023 prices) were collected prospectively over a 15-month follow-up period. A bivariate regression of costs and quality-adjusted life-years (QALYs), with multiple imputation of missing data, was conducted to estimate the incremental cost per QALY gained and the incremental net monetary benefit of the personalised parenting intervention in comparison to the standard educational package. Pre-specified sensitivity analyses and subgroup analyses respectively explored uncertainty and heterogeneity surrounding cost-effectiveness estimates. Results: Participants (n=512) were randomised to the personalised parenting intervention (n=256) or the standard educational package (n=256). Mean (standard error) public sector costs over 15 months were estimated at £2,106 (£442) in the personalised parenting arm versus £1,909 (£388) in the standard educational package arm (mean difference: £197, p=0.740). Mean (standard deviation) observed QALY estimates were 0.84 (0.32) versus 0.82 (0.33), respectively (mean difference: 0.02, p=0.740). The base-case (imputed) analysis generated an incremental cost of £639 (95% CI: £272 to £988) and incremental QALYs of 0.013 (95% CI: -0.021 to 0.042), indicating a 13%-27% probability of cost-effectiveness for the personalised parenting intervention at cost-effectiveness thresholds of £20,000 and £30,000 per QALY. A sensitivity analysis using observed data only (without imputation) generated an incremental cost of -£1,096 (95% CI: -£2,964 to £771) and incremental QALYs of 0.120 (95% CI: -0.053 to 0.293), but data were insufficient to estimate the probability of cost-effectiveness. The base-case cost-effectiveness results remained robust to other sensitivity analyses. Conclusions: This study found no evidence that an online parenting programme to prevent affective disorders in high-risk adolescents was cost-effective compared to a standard educational package. Clinical Trial: ISRCTN63358736
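For reference, the incremental net monetary benefit estimated above is conventionally defined as

\[ \mathrm{INMB}(\lambda) = \lambda \cdot \Delta\mathrm{QALY} - \Delta\mathrm{Cost}, \]

where \(\lambda\) is the cost-effectiveness threshold (£20,000 to £30,000 per QALY in this study); the intervention is considered cost-effective when the INMB is positive.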
Background: Clinical reasoning is a key skill of the medical profession. In many virtual patient environments, students enter diagnoses, and all students receive the same feedback with an explanation of why a certain diagnosis is considered correct. Results of meta-analyses highlight the benefits of feeding back information to students based on their individual answers. Such adaptive feedback is time- and resource-demanding. Objective: We propose computer-supported adaptive feedback as an interactive, resource-optimised and scalable alternative. Methods: In the current study we compared static expert feedback against computer-supported adaptive feedback in two learning modes, individual and collaborative. Overall, 105 students completed a pre- and post-test consisting of 10 multiple-choice items and 12 key-feature items. Between the two tests they diagnosed 8 virtual patients with either adaptive or static feedback, in either the collaborative or the individual learning mode. Results: Students who received computer-supported adaptive feedback outperformed students who received static feedback in the post-test, independent of the learning mode. Students who worked in the collaborative learning mode had higher diagnostic accuracy in the learning phase, but not in the post-test, independent of the feedback given. Conclusions: Considering the novelty of the system itself and of presenting adaptive feedback to students, the results are promising. Future development and implementation of artificial intelligence in the generation of answers may further support the learning of medical students. Until then, an NLP-based system such as the one presented in this study appears to be a viable solution for providing a large number of students with elaborated adaptive feedback. Clinical Trial: 17-250
Background: The growing challenges in European healthcare systems, such as demographic change and the shortage of healthcare professionals, require innovative concepts to improve care and rehabilitation of cancer patients. Digital health solutions such as point-of-care devices have the potential to enhance treatment pathways and outcomes by facilitating remote monitoring. However, the implementation of eHealth solutions faces major challenges, such as digital infrastructure gaps, various levels of digital health literacy or cultural resistance. This implementation report explores the initial phase of implementing digital solutions for adult cancer patients across the South Baltic Region. Objective: The main objective is to identify and address common problems, barriers, challenges, and opportunities for the implementation of eHealth solutions. Another goal is to support cancer patients through the integration of innovative digital tools, particularly the HemoScreen™ device for point-of-care blood testing and digitally aided early rehabilitation. Ultimately, the research aims to contribute to a structured, standardized implementation model that can optimize digital health services and improve cancer care in diverse settings. Methods: The AMBeR study involved a series of implementation workshops at seven pilot sites in Germany, Denmark, Poland, Lithuania, and Sweden, with participation from various professional groups in cancer care. A process mapping method combining service design and A3 methodology was used to assess current workflows and plan digital interventions. Stakeholder analyses, contextual assessments, and status quo evaluations were conducted using written notes and structured templates provided in advance at each pilot site to support the workshops. The Consolidated Framework for Implementation Research (CFIR) guided the analysis and ensured systematic identification of potential barriers and enabling factors for the implementation of digital health services. Results: The ten workshops revealed strong stakeholder engagement and cross-professional collaboration, indicating strong commitment to implementing eHealth-based interventions. Most pilot sites reported significant potential for improving cancer care and rehabilitation, highlighting long waiting times, accessibility issues, and the need for better digital solutions as key areas for improvement. Interoperability of digital systems and integration into existing national eHealth infrastructures were identified as critical factors, with noticeable differences among countries. Nations like Denmark and Sweden benefited from robust infrastructures, while Germany faced challenges due to fragmented digital integration. Training healthcare personnel and addressing regulatory and cultural issues emerged as essential for successful implementation. Conclusions: Addressing challenges such as infrastructural disparities, data protection concerns, and variations in digital health literacy is essential for widespread and successful implementation. Joint efforts and stronger political coordination, along with tailored strategies for different cultural contexts, are crucial for optimizing digital health solutions for cancer patients across the South Baltic Sea Region. These findings pave the way for developing a comprehensive electronic model that can serve as a guide for integrating innovative eHealth devices into routine oncology care. Clinical Trial: NCT06809101; NCT06768918
Background: Screen-based media use among children has been increasing, particularly in lower socioeconomic groups. As this behavior is linked to obesogenic habits, such as physical inactivity, poor dietary habits, and disrupted sleep patterns, it is crucial to examine the associations between screen-based media use and adiposity in primary school children, particularly those from socially vulnerable contexts. Objective: To examine the associations between screen-based media use and adiposity in primary school children from socially vulnerable contexts. Methods: This study, part of the BeE-school Project, included 735 children (mean age 7.7 ± 1.2 years) from 10 primary schools located in vulnerable contexts in northern Portugal. Researchers recorded weight, height, and waist circumference, and body mass index z-scores (BMIz) and waist-to-height ratio (WHtR) were then calculated. Screen-based media use was reported by parents using the ScreenQ tool, which includes four domains (screen access, frequency of use, media content, and caregiver-child co-viewing). Sociodemographic and anthropometric data of parents were reported via questionnaire. Generalized Linear Models (GLM) were applied. Results: A higher screen-based media use score was associated with higher BMIz and WHtR (b = 0.064, 95% CI 0.034 to 0.094; b = 0.002, 95% CI 0.001 to 0.003, respectively), even after adjusting for relevant variables. Similar associations were observed for the domains of screen access, frequency of use, and media content. Conclusions: Screen-based media use is linked to increased adiposity in vulnerable children. Reducing screen access, limiting usage frequency, and curating media content could improve health outcomes. Interventions for obesity prevention should consider these factors. Clinical Trial: This study is part of the BeE-school project, a cluster randomized controlled trial registered on ClinicalTrials.gov (identifier NCT05395364).
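To make the reported effect estimates concrete, the following is a minimal sketch, not the authors' code, of how an adjusted Gaussian GLM of BMIz on a ScreenQ-style score could be fit; the file name, covariates, and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical dataset: one row per child with BMIz, ScreenQ total score, and covariates
df = pd.read_csv("bee_school_children.csv")

# Gaussian GLM of BMIz on ScreenQ score, adjusted for assumed covariates (age, sex, parental education)
model = smf.glm("bmiz ~ screenq_total + age + C(sex) + C(parent_education)", data=df).fit()

print(model.params["screenq_total"])          # unstandardized coefficient b, as reported in the Results
print(model.conf_int().loc["screenq_total"])  # its 95% confidence interval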
Background: Transcatheter aortic valve implantation (TAVI) has become a potential treatment modality for symptomatic patients with severe aortic stenosis (AS) across all surgical risk profiles. However, peri-procedural stroke remains a persistent and serious complication with significant implications for patient outcomes and healthcare systems. Reported incidence ranges between 2-7%. As the benefit of cerebral protection devices and the optimal antithrombotic regime following TAVI remain unclear, understanding contemporary risks and timing of stroke is important in order to tailor peri/post-procedural stroke risk reduction strategies. Objective: To evaluate the incidence, timing, and predictors of stroke and transient ischaemic attack (TIA) within 30 days post TAVI in a contemporary real-world all-comers registry. Methods: Consecutive patients undergoing TAVI (n=980) between January 2020 and February 2024 were included in this retrospective study. A stroke diagnosis was made based on the Valve Academic Research Consortium-2 (VARC-2) criteria, defined as a focal or global neurological deficit >24 hours, or <24 hours if haemorrhage or infarct was found on neuroimaging. TIA was defined as a focal or global neurological deficit lasting <24 hours. Those with documented evidence of stroke or TIA were sub-divided into acute (<24 hours post procedure) and subacute (1-30 days post procedure). Patients from outside our catchment area were excluded (n=46) due to the lack of access to patient records. Two patients were excluded as no valve was deployed. Results: A total of 932 patients (41% female, mean age 81.6±6.9 years) were included in the study. TAVI was performed for severe AS in the context of degenerative calcific disease of native valves in 94% (n=873); 6% (n=57) of TAVIs were valve-in-valve procedures, and only one patient was treated for severe stenosis of a congenital (bicuspid) aortic valve. 84% (n=779) had no prior history of stroke and 26% had a history of diabetes mellitus. 60% (n=555) of patients were in sinus rhythm prior to TAVI, 35% (n=326) were in atrial fibrillation or flutter, and 5% (n=51) were in a paced rhythm. Self-expanding valves were implanted in 58% (n=542) of cases and balloon-expanding valves were used in the remainder. The majority of cases were performed transfemorally (96%). Pre-dilatation balloon aortic valvuloplasty was performed in 16% (n=150) of cases and the median procedure time was 76 mins [IQR 66.0, 89.0]. A vascular closure device successfully achieved haemostasis in 94% (n=877) of procedures. The thirty-day incidence of stroke/TIA was 3.2% (n=30), with 35% (n=11) occurring within 24 hours and the majority occurring within the first 48 hours (58%, n=18). The median time from TAVI to stroke/TIA was 1.0 day [IQR 0.0, 3.0]. Most (80%, n=24) were ischaemic strokes, and of these one had a haemorrhagic transformation. Diabetes was the only variable predictive of stroke at 30 days (HR 2.14, 95% CI 1.01-4.56; p=.049) using logistic regression. Conclusions: Most cerebrovascular events occurred early post TAVI. Effective stroke prevention strategies, including optimized antithrombotic regimens and the role of cerebral protection devices, warrant further evaluation. Clinical Trial: n/a
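As an illustration of the predictor analysis described in the Results, here is a minimal sketch, not the study's code, of a logistic regression for 30-day stroke/TIA; note that logistic regression yields odds ratios, and the file name, column names, and adjustment covariates are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical registry extract: one row per patient, stroke_tia_30d coded 0/1
df = pd.read_csv("tavi_registry.csv")

fit = smf.logit("stroke_tia_30d ~ diabetes + age + C(rhythm) + prior_stroke", data=df).fit()

effects = np.exp(fit.conf_int())          # exponentiate log-odds bounds to the ratio scale
effects["estimate"] = np.exp(fit.params)
print(effects.loc["diabetes"])            # effect of diabetes with its 95% CI
print(fit.pvalues["diabetes"])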
Background: Heart failure (HF) presents a significant global burden, with high rates of readmission and mortality despite advancements in inpatient care. Post-discharge continuity of care remains fragmented due to patients' limited self-management capacity, complex comorbidities, and inadequate clinical follow-up. Artificial intelligence (AI) has emerged as a promising technology to enhance continuity of care by enabling real-time monitoring, personalized decision support, and early risk prediction. Objective: This scoping review aims to (1) identify key AI application domains in the continuity of care for patients with heart failure, (2) evaluate its impact on patient-level and system-level outcomes, and (3) summarize barriers to real-world implementation. Methods: We conducted a scoping review following Arksey and O'Malley's methodological framework and PRISMA-ScR guidelines. Seven electronic databases (PubMed, EMBASE, CINAHL, Web of Science, MEDLINE, Cochrane Library, IEEE Xplore) were searched up to March 2025. Eligible studies included AI-based interventions supporting HF patients in nonhospital settings. Two reviewers independently screened and extracted data, followed by narrative synthesis and thematic analysis. Results: Twenty-eight studies were included. AI technologies were primarily applied in remote monitoring (wearable and implantable devices), individualized self-care tools (chatbots, smart applications), clinical decision support systems, and predictive models. These interventions led to improved medication adherence, reduced 30-day readmission rates (up to 38% reduction), and enhanced quality of life. However, real-world implementation was challenged by high false alarm rates (up to 35%), algorithm opacity, digital literacy gaps among older patients, and reimbursement limitations. Conclusions: AI-powered interventions hold potential to transform continuity of care for heart failure by enabling timely, data-driven, and personalized support. However, to bridge the gap between innovation and implementation, future efforts must focus on algorithm explainability, user-centered design, system interoperability, and robust governance for data privacy and accountability. Clinical Trial: The scoping protocol was registered at Open Science Framework.
Background: Serious games are increasingly recognized as effective tools for healthcare interventions, particularly for adolescents with behavioral and developmental needs. However, inconsistent design frameworks and limited integration of theoretical concepts challenge their scalability and impact. Understanding how these concepts are applied in serious game design is essential for enhancing their real-world impact. Objective: The objective of this systematic review is to examine the current state of the art in the use of serious gaming interventions in healthcare for adolescents with behavioral or developmental issues. The review will focus on elucidating the elements involved in how these games are designed and can contribute to learning. The review is conducted from the theoretical framework perspectives of boundary crossing, transfer and a model of reality. Methods: A total of five databases (PubMed, Scopus, ERIC, PsycINFO and EMBASE) were searched for relevant titles and abstracts. The databases were identified as relevant and cover a wide range of published research into health and social science. Results: A total of 34 relevant studies were included in the review, which covered a range of serious gaming artefacts with the objective of identifying learning or development opportunities for adolescents with behavioral or developmental issues. Conclusions: This review highlights the transformative potential of serious games in healthcare, particularly for individuals with developmental and behavioral needs, by fostering skill acquisition, collaboration, and real-world application. Despite their potential, the development of serious games requires a more structured integration of theoretical frameworks to ensure scalability, replicability, and sustained impact. Future research should prioritize standardized methodologies, longitudinal evaluations, and a focus on enhanced collaboration.
Lung cancer continues to pose a global health burden, with delayed diagnosis contributing significantly to mortality. This study aimed to identify the most predictive behavioural, physiological, and psychosocial factors associated with lung cancer in a young adult population using a multivariate logistic regression framework. A dataset of 276 respondents was analysed after removing duplicates from an original sample of 309. The dependent variable was self-reported lung cancer status, while independent variables included smoking behaviour, symptoms such as fatigue and coughing, and indicators of chronic disease and psychosocial stress. Univariate and bivariate analyses were conducted prior to model development. Nine predictors demonstrated statistical significance and were retained in the final model. The model exhibited strong predictive performance, achieving an AUC of 0.9625 and Tjur’s R² of 0.566, with no evidence of multicollinearity among predictors. Fatigue, chronic disease, coughing, and swallowing difficulty emerged as the most influential risk factors, while smoking had a comparatively smaller effect size, likely due to the young age profile of participants. Peer pressure and yellow fingers were also significant, offering novel contextual insights into behavioural risk adoption. The findings support the integration of multidimensional, low-cost, self-reported indicators into lung cancer screening protocols, especially in resource-limited settings. This study provides a data-driven foundation for developing early detection models and public health interventions tailored to younger populations. Future research should incorporate longitudinal and biomarker data to enhance causal inference and predictive accuracy.
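For readers who want to see how the reported metrics fit together, the following is a minimal sketch, not the authors' code, of a multivariate logistic regression with the three quantities mentioned (AUC, Tjur's R², and a multicollinearity check via variance inflation factors); the file name is hypothetical, and predictor names beyond those listed in the text are placeholders.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("lung_cancer_survey.csv")   # hypothetical file: 276 de-duplicated respondents

predictors = ["fatigue", "chronic_disease", "coughing", "swallowing_difficulty",
              "smoking", "peer_pressure", "yellow_fingers", "wheezing", "anxiety"]

fit = smf.logit("lung_cancer ~ " + " + ".join(predictors), data=df).fit()
p = fit.predict(df)

auc = roc_auc_score(df["lung_cancer"], p)                                      # discrimination (reported: 0.9625)
tjur_r2 = p[df["lung_cancer"] == 1].mean() - p[df["lung_cancer"] == 0].mean()  # Tjur's R² (reported: 0.566)

X = sm.add_constant(df[predictors])
vif = {c: variance_inflation_factor(X.values, i) for i, c in enumerate(X.columns) if c != "const"}
print(auc, tjur_r2, vif)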
Abstract
Acute promyelocytic leukemia (APL), a subtype of acute myeloid leukemia (AML), is characterized by the t(15;17)(q22;q21) translocation, resulting in the PML/RARα fusion protein. All-trans retinoic acid (ATRA) is an effective treatment for APL. Among the most severe side effects of ATRA is differentiation syndrome. While skin toxicity is common, scrotal lesions, including ulcerations, are rarely reported, and their pathogenesis remains unclear. We present a case of a 24-year-old male diagnosed with APL who developed painful scrotal ulcers on day 23 of ATRA therapy. These ulcers responded to the discontinuation of ATRA and treatment with topical corticosteroids. Discontinuing ATRA can potentially compromise the hematological response, leading most clinicians to continue ATRA in combination with steroid therapy. However, ATRA should be discontinued if steroid therapy fails. Awareness of this rare adverse effect is essential to ensure timely and appropriate therapeutic management.
Background: Teeth that have undergone endodontic treatment are more likely to fracture because of the substantial loss of tooth structure. Various post systems, such as prefabricated carbon fiber posts and customized glass fiber posts, have been used to restore endodontically treated teeth (ETT). However, the effectiveness of these systems in enhancing fracture resistance remains a subject of debate. Objective: To evaluate and compare the fracture resistance of endodontically treated teeth restored using 3 different post types: prefabricated carbon fiber post, custom-made glass fiber post, and SFRC-relined fiber post. Methods: A total of 30 extracted human teeth will undergo endodontic treatment and be segregated into 3 groups based on post type:
1: Pre-fabricated carbon fiber posts
2: Customized glass fiber posts
3: SFRC-relined fiber posts
The samples will be subjected to a universal testing machine to assess their fracture resistance. Data will undergo statistical analysis using ANOVA and a post-hoc test. Results: Mean fracture resistance is expected to be highest in the SFRC-relined fiber post group, followed by the customized glass fiber post group, and lowest in the prefabricated carbon fiber post group. Statistically significant differences are anticipated among groups (p < 0.05). The SFRC-relined fiber posts are also expected to demonstrate more favorable failure modes compared to the other groups. Conclusions: The study suggests that SFRC-relined fiber posts provide superior fracture resistance and more favorable failure modes in comparison with prefabricated carbon fiber and custom-made glass fiber posts. This finding highlights the potential clinical benefits of using SFRC-relined fiber posts. Clinical Trial: Since this investigation will be conducted entirely as an in vitro study, registration with the Clinical Trials Registry - India (CTRI) will not be applicable and therefore not required.
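A minimal sketch, assuming a simple long-format dataset, of the planned analysis (one-way ANOVA across the three post groups followed by a Tukey post-hoc comparison of fracture loads); the file and column names are hypothetical and this is not the authors' analysis script.

import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical data: one row per specimen with its group and fracture load in newtons
df = pd.read_csv("fracture_resistance.csv")

groups = [g["load_newton"].values for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)        # overall test across the 3 post types
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

tukey = pairwise_tukeyhsd(df["load_newton"], df["group"], alpha=0.05)  # pairwise post-hoc comparisons
print(tukey.summary())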
Background: Non-inferiority (NI) trial designs that investigate whether an experimental intervention is no worse than standard of care have been used increasingly in recent years. The robustness of the conclusions is in part dependent on the analysis population set used for the analysis. In the NI setting, the intention-to-treat (ITT) analysis has been thought to be anti-conservative compared to the per-protocol (PP) analysis. Objective: We aim to conduct a methodological review assessing the analysis population set used in NI trials. Methods: A comprehensive electronic search strategy will be used to identify studies indexed in Medline, Embase, Emcare, and Cochrane Central Register of Controlled Trials (CENTRAL) databases. Studies will be included if they are non-inferiority trials published in 2024. The primary outcome is the analysis population used in the primary analysis of the trial (ITT or PP). Secondary outcomes will be the NI margin, effect estimates, point estimates, and corresponding confidence intervals of the analysis. Analysis will be done using descriptive statistics. Results: A total of 1211 studies were captured using the comprehensive search strategy. We estimate around 500 trials will be eligible for extraction. Conclusions: This methodological survey of NI trials will describe the analysis population set used in primary analyses and assess factors which could be associated with each analysis.
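To make the ITT-versus-PP concern concrete, here is a minimal illustration, not taken from the protocol, of the non-inferiority decision rule the review is concerned with: with failure as the outcome, the experimental arm is declared non-inferior if the upper confidence bound of the risk difference (experimental minus control) lies below the pre-specified margin. The counts and margin below are invented; the same rule is applied to whichever analysis population (ITT or PP) is chosen.

from statsmodels.stats.proportion import confint_proportions_2indep

# invented example counts: failures / n in each arm
fail_exp, n_exp = 30, 300
fail_ctrl, n_ctrl = 27, 300
margin = 0.05  # pre-specified NI margin on the risk-difference scale

low, upp = confint_proportions_2indep(fail_exp, n_exp, fail_ctrl, n_ctrl, compare="diff")
print(f"risk difference 95% CI: ({low:.3f}, {upp:.3f})")
print("non-inferior" if upp < margin else "non-inferiority not demonstrated")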
Background: The anesthesia and critical care residency program in Morocco is a four-year, time-based training program whose effectiveness is evaluated in our study through the performance of residents and the factors affecting it, based on the core competencies established by the Moroccan Board of Anesthesiology and Critical Care (MBACC). Objective: To describe anesthesia and critical care residents' performance and the factors affecting it. Methods: We conducted a single-center prospective survey in January 2024, using a self-assessment questionnaire of technical skills related to residents' practice. For each skill, we addressed questions quantifying a given item's difficulty or success rate. An overall performance composite score was calculated based on the scores obtained for each skill assessed. A multivariate analysis was performed to determine the factors affecting this performance. Results: We included 66 residents. Their overall performance met MBACC requirements at the end of the curriculum (72.3 [68.5-75.7] out of a maximum score of 100), with a progression marked by a plateau between the second and third year. Multivariate analysis identified prior experience before residency, shift leadership, and the number of patients anesthetized per day as factors improving overall performance, while critical care-induced stress, shift-induced stress, and the number of shifts per week reduced performance. Conclusions: Residents' overall performance could be further optimized through an introduction to critical care, notably via simulation, to reduce stress during practice, and by enabling residents to acquire sufficient experience to occupy a chief position during shifts while limiting the number of weekly shifts. The formulation of recommendations requires a higher level of proof, which implies an external confirmatory study on a multi-center scale.
Background: First responders play crucial roles in protecting citizens and communities from various hazards. Due to the high-stress nature of their work, first responders experience significant mental health issues. Existing mental health interventions, despite their benefits, do not target cognitive processing of traumatic events such as memory and emotion. Objective: As a novel attempt using immersive virtual reality, the current work aims to examine the effects of semantically irrelevant virtual reality (SIVR) content to intervene in the retrieval of an adverse event memory and associated emotion. Methods: A total of 107 participants were recruited for the experiment and randomly assigned to one of three groups: Control Group, Comparison Group, and Intervention Group. In Stage-1, participants in all groups watched a short video of a house fire. In Stage-2, the Control Group stayed seated without doing anything, the Comparison Group read a text paragraph about the Egyptian Ocean as semantically irrelevant follow-up information, and the Intervention Group watched a 360° VR video of the Egyptian Ocean. The Positive and Negative Affect Schedule (PANAS) survey was administered after each of the two stages. In Stage-3, the memory accuracy of the house fire video was assessed using a forced recognition test of 15 pairs, each consisting of a true image and a fake image generated by AI software. Results: One-way ANOVA revealed no difference in memory accuracy among the three groups. However, repeated measures ANOVA found that the SIVR experience significantly boosted positive emotion among Intervention Group participants and that negative feelings were reduced in all groups. Conclusions: Our findings suggest that SIVR serves as a quick and affordable way to address psychological reactions after watching a traumatic event. Future research is required to establish the memory suppression effect of the SIVR content.
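Because the design combines a between-subjects factor (group) with a within-subjects factor (stage), a mixed repeated-measures ANOVA is the natural analysis; the following is a minimal sketch of one way this could be run, assuming long-format PANAS data and the pingouin package, and is not the authors' code.

import pandas as pd
import pingouin as pg

# hypothetical long-format data: one row per participant per stage
long_df = pd.read_csv("panas_long.csv")   # columns: participant, group, stage, positive_affect

aov = pg.mixed_anova(data=long_df, dv="positive_affect",
                     within="stage", subject="participant", between="group")
print(aov)  # a stage-specific boost in the Intervention Group would appear in the group x stage interaction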
Background: Empty Nose Syndrome (ENS) is a debilitating condition that can occur after partial or total turbinectomy, leading to impaired nasal airflow sensation, breathing difficulties, and sleep disturbances. While ENS is often diagnosed using the ENS6Q questionnaire, its precise causes remain unclear. Some patients with significant turbinate loss develop minor ENS symptoms, whereas others experience severe symptoms after minor mucosal cauterization. Understanding the structural and aerodynamic factors contributing to ENS is crucial for improving diagnosis and prevention. Objective: This study aims to identify correlations between the ENS6Q score and key anatomical and aerodynamic parameters obtained from computational fluid dynamics (CFD) simulations in ENS patients. Methods: We reconstructed patient-specific nasal cavity models from computed tomography (CT) scans and performed CFD simulations. The analysis focused on five key parameters: the remaining turbinate volume, total mucosal surface area, nasal resistance, average cross-sectional area, and airflow imbalance between the two nasal cavities. These parameters were then compared to ENS6Q scores. Results: Preliminary findings suggest that a lower remaining turbinate volume and reduced mucosal surface area are associated with higher ENS6Q scores. Additionally, significant airflow asymmetry between the two nasal cavities appears to correlate with more severe symptoms. Furthermore, our data indicate that individuals with larger nasal cavities and greater preoperative mucosal surface area tend to be more resilient to turbinectomy. For an equivalent amount of turbinate resection, patients with initially smaller nasal cavities, and thus less mucosal surface area, experience more severe ENS symptoms. Conclusions: By quantifying the anatomical and aerodynamic characteristics of ENS patients, this study provides new insights into the structural factors contributing to ENS severity. These findings may help refine diagnostic criteria and guide surgical approaches to minimize ENS risk.
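Once the five CFD-derived parameters are tabulated per patient, the correlation analysis described here is straightforward; below is a minimal sketch, not the authors' pipeline, using Spearman's rank correlation (an assumed but reasonable choice for an ordinal questionnaire score) with hypothetical column names.

import pandas as pd
from scipy.stats import spearmanr

# hypothetical table: one row per patient with ENS6Q score and the five CFD/anatomical parameters
df = pd.read_csv("ens_cfd_parameters.csv")

params = ["turbinate_volume", "mucosal_surface_area", "nasal_resistance",
          "avg_cross_sectional_area", "airflow_imbalance"]

for p in params:
    rho, pval = spearmanr(df["ens6q_score"], df[p])
    print(f"{p}: rho = {rho:.2f}, p = {pval:.3f}")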
Background: Long-term disease status and susceptibility to disease recurrence lead to an increasing disease burden in patients with non-Hodgkin's lymphoma (NHL). Although the adverse influence of frailty on physical symptoms has been repeatedly reported, little attention has been paid to NHL patients, and the very limited existing studies are mostly cross-sectional in nature. Our protocol provides detailed methods to explore the trajectory types and risk factors for frailty in NHL patients, offering a panorama of how frailty affects NHL patients over time. Objective: The research aims to explore frailty trajectories and their influencing factors. It could offer healthcare professionals dynamic insights into frailty progression and facilitate the early identification and intervention of high-risk populations through systematic screening of contributing factors, thereby preventing the onset of frailty. Methods: This longitudinal mixed-methods study will recruit 240 patients newly diagnosed with NHL from five large public hospitals in China. Quantitative data will be collected at three time points: before chemotherapy, during the third cycle of chemotherapy, and at the end of chemotherapy. We will use validated questionnaires (e.g., the Tilburg Frailty Indicator) to gather information on sociodemographic data, frailty, cognition, physical condition, health literacy, anxiety, and nutrition. Qualitative data will be collected via semi-structured interviews and observations at the end of chemotherapy. The growth mixture model and logistic regression analysis will be used to analyse quantitative data, and the diachronic analysis method and the directed content analysis method will be used to analyse qualitative data. Both types of data will be analyzed in parallel and separately. Finally, we will integrate the data sets to identify areas of confirmation, complementation, or discordance. Results: The research protocol and informed consent form were approved by the Medical Ethics Committee of the First Affiliated Hospital of Henan University of Science and Technology (2024-03-K171). Participant recruitment began in Sep 2024. As of April 2025, data collection for T0 (prechemotherapy) was successfully completed, with a total of 270 patients enrolled in the study. At T1 (the third cycle of chemotherapy), follow-up assessments have been conducted for 157 participants. To date, 8 patients have been lost to follow-up due to various reasons, including 4 deaths, 2 refusals to continue participation, and 2 transfers to other medical facilities. Additionally, data collection at T2 (end of chemotherapy) has been finalized for 78 patients. Data analysis is scheduled to begin in October 2025, with the results anticipated to be published in January 2026. Conclusions: As a pilot trial, the research could offer healthcare professionals dynamic insights into frailty progression and facilitate the early identification and intervention of high-risk populations through systematic screening of contributing factors, thereby preventing the onset of frailty. Clinical Trial: ChiCTR2500097921
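Growth mixture modelling of the quantitative data is usually done in dedicated software (for example the lcmm package in R, or Mplus). Purely as an illustration of the idea, the sketch below summarizes each patient's three frailty scores as an intercept and slope and clusters those summaries; this is a simplified stand-in, not the protocol's planned analysis, and the file and column names are hypothetical.

import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# hypothetical wide-format data: Tilburg Frailty Indicator scores at T0, T1, T2
wide = pd.read_csv("frailty_wide.csv")          # columns: id, tfi_t0, tfi_t1, tfi_t2

times = np.array([0.0, 1.0, 2.0])
scores = wide[["tfi_t0", "tfi_t1", "tfi_t2"]].values

# per-patient least-squares slope and intercept across the three time points
slopes, intercepts = np.polyfit(times, scores.T, deg=1)
features = np.column_stack([intercepts, slopes])

# cluster trajectory summaries into candidate classes (the number of classes is a modelling choice)
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
wide["trajectory_class"] = gmm.predict(features)
print(wide["trajectory_class"].value_counts())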
Background: Successful Research and MedTech collaborations depend on six key components: talent and workforce development, innovative solutions, robust research infrastructure, regulatory compliance, patient-centered care, and rigorous evaluation.
Institutional leaders frequently navigate multiple professional identities, simultaneously serving as educators, researchers, clinicians, and innovators, and in doing so create bridges between academic rigor and practical application that accelerate the translation of research into meaningful solutions. Institutions and organizations may also need to broaden their identities.
The contemporary landscape presents significant challenges as institutions balance the pursuit of academic excellence with the need for rapid responsiveness to technological and commercial innovation. Traditional research processes, while ensuring quality, often impede the pace of advancement necessary in today's rapidly evolving environment. This tension necessitates structural reforms across multiple dimensions of institutional operation.
To cultivate a thriving research and innovation ecosystem, several essential components must be established. First, institutions require agile research infrastructure with cutting-edge laboratories and collaboration spaces, specialized equipment, and certified research professionals specifically trained in device development and regulatory compliance. Robust clinical management platforms can expedite trials and streamline data extraction for publication and dissemination. Objective: The Orange County (OC) Impact Conference, held in November 2024, convened 180 key stakeholders from the life sciences, technology, medical device, and healthcare sectors. CHOC Research, in collaboration with University Lab Partners (ULP) and the University of California, Irvine, provided this platform for leaders, decision-makers, and experts to discuss the intersection of innovation in research, healthcare, biotechnology, and data science. Methods: We convened a multidisciplinary symposium (180 participants) to examine advancements in life sciences and medical device research development. The structured forum incorporated moderated panel discussions and a keynote speaker. Participants represented diverse stakeholder categories including research scientists, clinicians, investors and financiers, and executive research and healthcare leadership. The event design facilitated both structured knowledge exchange and strategic networking opportunities aimed at identifying implementation pathways to enhance clinical impact. Results: The 2024 OC Impact Conference Proceedings outline a strategy for healthcare innovation, demonstrating how targeted collaboration between patients, families, researchers, clinicians, engineers, data scientists, and industry is reshaping the healthcare innovation ecosystem. This integrated approach ensures every stakeholder's voice contributes to meaningful advancement, guiding resource allocation and partnership development across the life science and medical device sectors. Our findings demonstrate that success requires moving beyond traditional approaches to patient-driven research priorities, augmented design principles for medical device development, and direct engagement between innovators, research participants, industry and healthcare centers throughout the research development cycle. Conclusions: The insights gained through participation in the OC Impact Conference contribute to the ongoing discourse in these fields, emphasizing collaborative efforts to enhance pediatric and adult healthcare outcomes. Clinical Trial: N/A
Background: Nigeria faces severe economic losses ($14 billion annually) and high youth unemployment (33.3%) due to persistent skills gaps, exacerbated by sectoral disparities (e.g., 68% ICT shortages vs. 63% agricultural deficits) and systemic inequities in education and vocational access. Despite growing HRM interventions, empirical evidence on their efficacy remains limited, necessitating a comprehensive review to guide policy. Objective: This study analyzes Nigeria’s sector-specific skills gaps, evaluates the effectiveness of HRM interventions (apprenticeships, digital upskilling, PPPs), and proposes actionable frameworks to align workforce development with labor market demands. Methods: A narrative review of peer-reviewed literature (2015–2023), institutional reports (World Bank, PwC, NBS), and case studies (e.g., Andela’s model) was conducted. Data were synthesized to compare regional benchmarks (Kenya’s TVET, South Africa’s HRM reforms) and Nigeria’s performance (talent readiness score: 42/100). Results: Key findings include: (1) Vocational training (60% readiness) outperforms tertiary education (40%); (2) Apprenticeships and PPPs show high impact (30% job placement increase); (3) Urban-rural and gender disparities persist (women 30% less likely to access training). Private-sector models demonstrate scalability but require policy support. Conclusions: Nigeria’s skills crisis demands urgent, context-sensitive interventions. Blended strategies (e.g., industry-aligned curricula, gender-inclusive vocational programs) could unlock 5% annual GDP growth. Prioritize: (1) National skills councils to standardize certifications; (2) Tax incentives for employer-led training; (3) Digital infrastructure for rural upskilling. Closing Nigeria’s skills gaps would mitigate economic losses, reduce inequality, and enhance global competitiveness, transforming its youth bulge into a sustainable demographic dividend.
Background: Central venous catheterization (CVC) is a very common procedure performed across medical and surgical wards as well as intensive care units. It provides relatively extended vascular access for critically ill patients for the administration of intricate life-saving medications, blood products, and parenteral nutrition.
Major vascular catheterization carries a risk of catheter-related infections as well as venous thromboembolism. Therefore, it is crucial to follow standardized practices during insertion and management of CVCs in order to minimize infection risks and procedural complications. The aim of central line insertion guidelines is to address the primary concerns related to predisposition to central line-associated bloodstream infections (CLABSI). These guidelines are evidence based and drawn from pre-existing data associated with CVC insertion.
The most commonly used sites for central venous catheterization are the internal jugular and subclavian veins rather than the femoral veins. Catheterization of these vessels enables healthcare professionals to monitor hemodynamic parameters while ensuring lower risks of CLABSI and thromboembolism. The femoral vein is less preferred because it offers no particular advantage for invasive hemodynamic monitoring and carries a higher risk of local infection and thromboembolic phenomena.
A CVC can be inserted using landmark-guided or ultrasound-guided techniques. Following informed consent, the aseptic technique for CVC insertion includes performing appropriate hand hygiene and ensuring personal protective measures, establishing and maintaining a sterile field, preparing the site using chlorhexidine, and draping the patient in a sterile manner from head to toe. Additionally, the catheter is prepared by pre-flushing and clamping all unused lumens, and the patient is placed in the Trendelenburg position. Throughout the procedure, maintaining a firm grasp on the guide wire is essential; the wire is removed once the catheter is in place. This is followed by flushing and aspirating blood from all lumens, applying sterile caps, and confirming venous placement. The procedure ends with cleaning the catheter site with chlorhexidine and applying a sterile dressing.
Hence, formal training and knowledge of standardized practices of CVC insertion are essential for healthcare professionals in order to prevent CLABSI. Our audit assesses the current practices of doctors working at a tertiary care hospital to analyze their background knowledge of standard practices to prevent CLABSI during insertion of CVC. Objective: This study aimed to audit and re-audit residents' practices of central venous line insertion in the medical and nephrology units of a tertiary care hospital in Rawalpindi, Pakistan, and to assess residents' adherence to the checklist and practice guidelines for CVC insertion implemented by Johns Hopkins Hospital and the American Society of Anesthesiologists. Methods: This audit was conducted as a cross-sectional direct observational study and two-phase quality improvement project in the Medical and Nephrology Units of a tertiary care hospital in Rawalpindi from December 2023 to February 2024.
After taking informed consent from patients and residents, CVC insertion in 34 patients by 34 individual residents was observed. Observers were given a purpose-designed observational tool based on the Johns Hopkins Medicine checklist and ASA practice guidelines for central line insertion, for assessment of residents' practices.
The first part contained questions regarding residents' demographic details, such as age, gender, year of postgraduate training, and parent department, and data related to the procedure, such as date and time of the procedure, whether the need for CVC was discussed during rounds, site of CVC insertion, catheter type, and type of procedure (landmark-guided or ultrasound-guided CVC insertion). The second part included a direct observational checklist, based on the checklist provided for prevention of intravascular catheter-associated bloodstream infections, to audit residents' practices during CVC insertion, which included: adequate hand hygiene before insertion, adherence to aseptic techniques, use of sterile personal protective equipment and a sterile full-body drape of the patient, and choosing the best insertion site to minimize infections based on patient characteristics.
Parameters observed to be performed completely were scored "1" and items not performed were scored "0". The cumulative percentage of performed practices according to the checklist was considered satisfactory if it was 80% or more and unsatisfactory if it was less than 80%.
After the initial audit, participants were given pamphlets with a checklist incorporating the Johns Hopkins Medicine checklist and ASA practice guidelines for CVC insertion. A re-audit was performed one month after the initial audit, including the same participants. The results of the audit and re-audit were analyzed using SPSS version 25. Mean +/- SD was calculated for quantitative variables, and number (N) and percentage were calculated for qualitative variables. A Z-test was applied to the proportions of parameters and test scores to calculate Z-scores and P values (<0.05 was considered significant). Results: Among the 34 participants, 44% belonged to the Nephrology Department and 56% to the Department of Internal Medicine.
Of the residents, 32.3% were in their first year of training, 14.7% in their second, 14.7% in their third, 17.6% in their fourth, and 17.6% in their fifth/final year.
47% of the participants were male and 53% were female. Participants were aged between 27 and 34 years; the median age at the time of the audit was 29 years.
Landmark-guided CVC insertion was performed in the subclavian vein (73.5%) and the internal jugular vein (26.5%).
Post-audit, satisfactory practices improved from 73.5% to 94%. Conclusions: Our audit found that many residents adopted inadequate practices because of a lack of proper training and institutional guidelines for CVC insertion. Our re-audit demonstrated an improvement in residents' practices following the intervention with educational material. Our study underscores the importance of structured quality improvement initiatives in enhancing clinical practices and patient outcomes.
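As an illustration of the two-proportion z-test named in the Methods, here is a minimal sketch comparing the proportion of satisfactory practices before and after the educational intervention; the counts are reconstructed approximately from the reported percentages (73.5% and 94% of 34 observations) rather than taken from the dataset.

from statsmodels.stats.proportion import proportions_ztest

n = 34
pre_satisfactory = round(0.735 * n)    # about 25 of 34 insertions rated satisfactory at the initial audit
post_satisfactory = round(0.94 * n)    # about 32 of 34 at the re-audit

z, p = proportions_ztest([pre_satisfactory, post_satisfactory], [n, n])
print(f"z = {z:.2f}, p = {p:.4f}")     # p < 0.05 would indicate a significant improvement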
Background: Social media has profoundly transformed consumer behavior and marketing practices within the hospitality industry. Understanding how these changes influence hotel selection and booking decisions, the effectiveness of social media strategies, and shifts in reputation management practices is crucial for hotels aiming to enhance their digital presence and customer engagement. Objective: The study aims to analyze the influence of social media on consumer behavior, audience engagement, and reputation management in hotel selection and booking decisions as well as compare pre- and post-social media reputation management practices. Methods: Data was collected through surveys and interviews with hotel guests and marketing professionals. The analysis included descriptive statistics and comparative assessments of pre- and post-social media reputation management practices. The effectiveness of various social media strategies was evaluated based on respondent feedback. Results: The findings indicate that promotional offers, user reviews, and visual content significantly influence consumer behavior in hotel selection and booking decisions. Collaboration with influencers, user-generated content, live video content, and social media advertising are the most effective strategies for audience engagement and brand building, each with a 100% effectiveness rate. There is a notable shift in reputation management practices, with a decrease in promptly addressing issues and providing compensation, and an increase in seeking private resolutions through direct messages post-social media. Conclusions: Social media plays a critical role in shaping consumer behavior and brand perception in the hotel industry. Effective social media strategies, particularly those involving influencers and user-generated content, are essential for engaging audiences and building brand identity. The transition to social media has also led to changes in reputation management, emphasizing the importance of balancing transparency with discreet conflict resolution. Hotels should prioritize comprehensive social media strategies that include collaboration with influencers, regular updates, and engaging content. Encouraging positive user-generated content and implementing robust monitoring and response systems are essential. Training staff on social media engagement and conflict resolution can further improve reputation management. Ongoing adaptation to emerging social media trends is crucial for maintaining effectiveness. This study provides valuable insights into the impact of social media on consumer behavior and marketing in the hospitality industry. By identifying effective social media strategies and examining changes in reputation management, it offers practical guidance for hotels seeking to enhance their digital presence and customer engagement. The findings underscore the importance of leveraging social media to achieve greater business success and maintain a positive brand reputation.
Background: Noncommunicable diseases (NCDs) pose a significant burden in the Philippines, with cardiovascular and cerebrovascular diseases among the leading causes of mortality. The Department of Health implemented the Philippine Package of Essential Non-Communicable Disease Interventions (Phil PEN) to address this issue. However, healthcare professionals faced challenges in implementing the program due to the cumbersome nature of the multiple forms required for patient risk assessment. To address this, a mobile medical app, the PhilPEN Risk Stratification app, was developed for community health workers (CHWs) using the extreme prototyping framework. Objective: This study aimed to assess the usability of the PhilPEN Risk Stratification app using the (User Version) Mobile App Rating Scale (uMARS) and to determine the utility of uMARS in app development. The secondary objective was to achieve an acceptable (>3 rating) score for the app in uMARS, highlighting the significance of quality monitoring through validated metrics in improving the adoption and continuous iterative development of medical mobile apps. Methods: The study employed a qualitative research methodology, including key informant interviews, linguistic validation, and cognitive debriefing. The extreme prototyping framework was used for app development, involving iterative refinement through progressively functional prototypes. CHWs from a designated health center participated in the app development and evaluation process – providing feedback, using the app to collect data from patients, and rating it through uMARS. Results: The uMARS scores for the PhilPEN Risk Stratification app were above average, with an Objective Quality rating of 4.05 and a Personal Opinion/Subjective Quality rating of 3.25. The mobile app also garnered a 3.88-star rating. Under Objective Quality, the app scored well in Functionality (4.19), Aesthetics (4.08), and Information (4.41), indicating its accuracy, ease of use, and provision of high-quality information. The Engagement score (3.53) was lower due to the app's primary focus on healthcare rather than entertainment. Conclusions: The study demonstrated the effectiveness of the extreme prototyping framework in developing a medical mobile app and the utility of uMARS not only as a metric, but also as a guide for authoring high-quality mobile health apps. The uMARS metrics were beneficial in setting developer expectations, identifying strengths and weaknesses, and guiding the iterative improvement of the app. Further assessment with more CHWs and patients is recommended. Clinical Trial: N/A
Among the countless decisions healthcare providers make daily, many clinical scenarios do not have clear guidelines, despite a recent shift towards the practice of evidence-based medicine. Even in clinical scenarios where guidelines do exist, these guidelines do not universally recommend one treatment option over others. As a result, the limitations of existing guidelines presumably create inherent variability in provider decision-making, reflected in the distribution of provider behaviors within a clinical scenario, and such variability differs across clinical scenarios. We define this variability as a marker of provider uncertainty, where scenarios with a wide distribution of provider behaviors have more uncertainty than scenarios with a narrower provider behavior distribution. We propose four exploratory analyses of provider uncertainty: (1) field-wide overview; (2) subgroup analysis; (3) provider guideline adherence; and (4) pre-/post-intervention evaluation. We also propose that uncertainty analysis can be used to help guide interventions by focusing on clinical decisions with the highest provider uncertainty and therefore the greatest opportunity to improve care.
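The text does not specify how the spread of provider behaviors would be quantified; one possible operationalization, offered here purely as an assumption-laden sketch, is the normalized Shannon entropy of the distribution of provider choices in a scenario, which is near 0 when providers agree and near 1 when choices are maximally spread.

import numpy as np

def provider_uncertainty(choice_counts):
    """Normalized Shannon entropy of provider choices across the available treatment options."""
    counts = np.asarray(choice_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return entropy / np.log(len(counts))   # 0 = full agreement, 1 = maximal spread

print(provider_uncertainty([90, 5, 5]))    # low uncertainty: most providers choose the same option
print(provider_uncertainty([35, 33, 32]))  # high uncertainty: choices split nearly evenly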
This study investigates the behavioral dynamics of sociopaths, focusing on their reliance on glibness (superficial charm) as a primary manipulation tactic and aggressiveness as a secondary strategy when charm fails. Sociopathy, characterized by manipulative tendencies and a lack of empathy, often manifests in adaptive yet harmful behaviors aimed at maintaining control and dominance.
Using the Deenz Antisocial Personality Scale (DAPS-24) to collect data from 34 participants, this study examines the prevalence and interplay of these dual strategies. Findings reveal that sociopaths employ glibness to disarm and manipulate, transitioning to aggressiveness in response to resistance. The implications for understanding sociopathic manipulation are discussed, emphasizing the importance of early detection and intervention in both clinical and social contexts.
Background: The American Civil War has been commemorated with a great variety of monuments, memorials, and markers. These monuments were erected for a variety of reasons, beginning with memorialization of the fallen and later to honor aging veterans, commemoration of significant anniversaries associated with the conflict, memorialization of sites of conflict, and celebration of the actions of military leaders. Sources reveal that during both the Jim Crow and Civil Rights eras, many were erected as part of an organized propaganda campaign to terrorize African American communities and distort the past by promoting a 'Lost Cause' narrative. Through subsequent decades, to this day, complex and emotional narratives have surrounded interpretive legacies of the Civil War. Instruments of commemoration, through both physical and digital intervention approaches, can be provocative and instructive, as the country deals with a slavery legacy and the commemorated objects and spaces surrounding Confederate inheritances.
Today, all of these potential factors and outcomes, with international relevance, are surrounded by swirls of social and political contention and controversy, including the remembering/forgetting dichotomies of cultural heritage. The modern dilemma turns on the question: In today's new era of social justice, are these monuments primarily symbols of oppression, or can we see them, in select cases, alternatively as sites of conscience and reflection encompassing more inclusive conversations about commemoration? What we save or destroy and assign as the ultimate public value of these monuments rests with how we answer this question. Objective: I describe monuments as symbols in the "Lost Cause" narrative and their place in enduring Confederate legacies. I make the case, and offer documented examples, that remnants of the monuments, such as the "decorated" pedestals, if not the original towering statues themselves, should be left in place as sites of reflection that can be socially useful in public interpretation as disruptions of space, creating disturbances of vision that can be provocative and didactic. I argue that we should see at least some of them as sculptural works of art that invite interpretations of aesthetic and artistic value. I point out how, today, these internationally relevant factors and outcomes of retention vs. removal are engulfed in swirls of social and political contention and controversy within processes of remembering and forgetting and changing public dialogues. Methods: This article addresses several elements within the purview of the Journal: questions of contemporary society, diversity of opinion, recognition of complexity, subject matter of interest to non-specialists, international relevancy, and history. Drawing from the testimony of scholars and artists, I address the contemporary conceptual landscape of approaches to the presentation and evolving participatory narratives of Confederate monuments that range from absolute expungement and removal to more restrained responses such as in situ re-contextualization, removal to museums, and preservation-in-place. In a new era of social justice surrounding the aftermath of dramatic events such as the 2015 Charleston shooting, the 2017 Charlottesville riot, and the murder of George Floyd, should we see them as symbols of oppression, inviting expungement, or selectively as sites of conscience and reflection, inviting various forms of re-interpretation of tangible and intangible relationships?
Results: I argue that we should see at least some of them as sculptural works of art that invite interpretations of aesthetic and artistic value. I point out how, today, these internationally relevant factors and outcomes of retention vs. removal are engulfed in swirls of social and political contention and controversy within processes of remembering and forgetting and changing public dialogues. Conclusions: Today, all of these potential factors and outcomes, with international relevance, are surrounded by swirls of social and political contention and controversy, including the remembering/forgetting dichotomies of cultural heritage. The modern dilemma turns on the question: In today's new era of social justice, are these monuments primarily symbols of oppression, or can we see them, in select cases, alternatively as sites of conscience and reflection encompassing more inclusive conversations about commemoration? What we save or destroy and assign as the ultimate public value of these monuments rests with how we answer this question.