Accepted for/Published in: JMIR Formative Research
Date Submitted: Oct 24, 2024
Open Peer Review Period: Oct 25, 2024 - Dec 20, 2024
Date Accepted: Nov 29, 2024
Multimodal Pain Recognition in Postoperative Patients: A Machine Learning Approach
ABSTRACT
Background:
Acute pain management is critical in postoperative care, especially in vulnerable patient populations who may be unable to self-report pain levels effectively. Current methods of pain assessment often rely on subjective patient reports or behavioral pain observation tools, which can lead to inconsistencies in pain management. Multimodal pain assessment, integrating physiological and behavioral data, presents an opportunity to create more objective and accurate pain measurement systems. However, most prior work has focused on healthy subjects in controlled environments, with limited attention to real-world postoperative pain scenarios. This gap necessitates the development of robust, multimodal approaches capable of addressing the unique challenges of assessing pain in clinical settings, where factors such as motion artifacts, imbalanced label distributions, and sparse data further complicate pain monitoring.
Objective:
To develop and evaluate a multimodal, machine learning-based framework for the objective assessment of pain in postoperative patients using biosignals such as electrocardiogram (ECG), electromyogram (EMG), electrodermal activity (EDA), and respiration rate (RR) signals.
Methods:
The iHurt study enrolled 25 postoperative patients at the University of California, Irvine Medical Center. The study captured multimodal biosignals during light physical activities, with concurrent self-reported pain levels using the Numerical Rating Scale (NRS). Data preprocessing involved noise filtering, feature extraction, and combining handcrafted (HC) features with automatic features learned by convolutional and long short-term memory (LSTM) autoencoders. Machine learning classifiers, including Support Vector Machine (SVM), Random Forest (RF), AdaBoost, and K-Nearest Neighbors (KNN), were trained using weak supervision and minority oversampling to handle sparse and imbalanced pain labels. Pain levels were categorized into baseline (BL) and three levels of pain intensity (PL1-3).
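To make the imbalance-handling step concrete, the following is a minimal, self-contained sketch (not the authors' code) of the general pattern described above: oversample the minority pain classes in the training split, fit a classifier (a Random Forest here), and score with balanced accuracy. The feature matrix, label distribution, and hyperparameters are placeholders, not values from the study, and SMOTE is used as one common choice of minority oversampling.

```python
# Sketch only: minority oversampling + classifier + balanced-accuracy scoring.
# X stands in for a feature matrix (e.g., handcrafted + autoencoder features);
# y stands in for pain labels {BL, PL1, PL2, PL3} encoded as integers 0-3.
import numpy as np
from imblearn.over_sampling import SMOTE            # minority oversampling
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                                      # placeholder features
y = rng.choice([0, 1, 2, 3], size=500, p=[0.7, 0.15, 0.1, 0.05])    # imbalanced labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Oversample only the training split so the test set stays untouched.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_res, y_res)

print("Balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
```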
Results:
The multimodal pain recognition models achieved an average balanced accuracy of over 80% across the different pain levels. Respiration rate (RR) models consistently outperformed other single modalities, particularly for lower pain intensities, while facial muscle activity (EMG) was most effective for distinguishing higher pain intensities. Although single-modality models, especially RR, generally outperformed the multimodal approaches, our multimodal framework still surpassed previous work in overall accuracy. This suggests that while RR remains a strong modality on its own, combining multiple biosignals offers valuable insights and potential improvements for more complex pain recognition tasks in clinical settings.
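For reference, balanced accuracy is the unweighted mean of per-class recall, which keeps the chance level at 1/K for K classes even when labels are heavily imbalanced. The toy example below (not study data) illustrates the computation.

```python
# Balanced accuracy = mean of per-class recall (toy illustration, not study data).
import numpy as np

def balanced_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 3])
y_pred = np.array([0, 0, 0, 1, 1, 1, 2, 0])
print(balanced_accuracy(y_true, y_pred))  # (3/4 + 2/2 + 1/1 + 0/1) / 4 = 0.6875
```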
Conclusions:
This study presents a novel, multimodal machine learning framework for objective pain recognition in postoperative patients. The results highlight the potential of integrating multiple biosignal modalities for more accurate pain assessment, with particular value in real-world clinical settings. Future work should focus on developing personalized models to account for individual variability in pain responses, ultimately improving clinical pain management.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.