Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Mar 2, 2025
Date Accepted: Oct 24, 2025
Assessing the Impact of the Quality of Textual Data on Feature Representation and Machine Learning Models
ABSTRACT
Background:
Data collected in controlled settings typically yields high-quality datasets. In real-world applications, however, the quality of data collection is often compromised. It is well established that dataset quality significantly affects the performance of machine learning models. In healthcare, detailed information about individuals is often recorded in free-text progress notes. Given the critical nature of health applications, it is essential to evaluate the impact of textual data quality, as an incorrect prediction can have serious, potentially life-threatening consequences.
Objective:
This study aims to quantify the quality of textual datasets and systematically evaluate the impact of varying levels of errors on feature representation and machine learning models. The primary goal is to determine whether feature representations and machine learning models are tolerant to errors and to assess whether investing additional time and computational resources to improve data quality is justified.
Methods:
A rudimentary error rate metric was developed to evaluate textual dataset quality at the token level. The Mixtral large language model (LLM) was used to quantify and correct errors in low-quality datasets. The study analyzed two healthcare datasets: the high-quality MIMIC-III public hospital dataset (for mortality prediction) and a lower-quality private dataset from Australian aged care homes (ACH; for depression and fall risk prediction). Errors were systematically introduced into MIMIC-III at varying rates, while the quality of the ACH dataset was improved using the LLM. Feature representations and machine learning models were assessed using the area under the receiver operating characteristic curve (AUROC).
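The two Methods steps above, a token-level error rate and controlled error injection, can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the vocabulary lookup below substitutes for the LLM-based error detection, and the character-swap corruption, the sample note, and the vocabulary are invented for demonstration.

```python
import random

def token_error_rate(tokens, vocabulary):
    """Token-level error rate: fraction of tokens flagged as erroneous
    (here, simply tokens absent from a reference vocabulary)."""
    if not tokens:
        return 0.0
    errors = sum(1 for t in tokens if t.lower() not in vocabulary)
    return errors / len(tokens)

def introduce_errors(tokens, rate, seed=0):
    """Systematically corrupt a given fraction of tokens by swapping
    two adjacent characters, mimicking typographical errors."""
    rng = random.Random(seed)
    corrupted = list(tokens)
    n_errors = int(len(tokens) * rate)
    for i in rng.sample(range(len(tokens)), n_errors):
        t = corrupted[i]
        if len(t) > 1:
            j = rng.randrange(len(t) - 1)
            corrupted[i] = t[:j] + t[j + 1] + t[j] + t[j + 2:]
    return corrupted

# Toy example: a clean note has error rate 0; injecting errors raises it.
vocab = {"patient", "reports", "mild", "chest", "pain", "and", "dyspnea"}
note = "patient reports mild chest pain and dyspnea".split()
noisy = introduce_errors(note, rate=0.25, seed=1)
print(token_error_rate(note, vocab), token_error_rate(noisy, vocab))
```

In the study itself, detection and correction were performed by Mixtral rather than a fixed vocabulary; the sketch only illustrates how an error rate can be computed and how errors can be injected at a target rate.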
Results:
For the 35,774 and 6,336 patients sampled from the MIMIC-III and ACH datasets, respectively, Mixtral was used to introduce errors into MIMIC-III and to correct errors in ACH. Mixtral correctly detected errors in 63% of progress notes; in a further 17% of notes, a single token was misclassified as an error because it was medical terminology. LLMs demonstrated potential for improving progress note quality by addressing a variety of errors. Under varying error rates (5% to 20%, in 5% increments), feature representation performance was tolerant of lower error rates (<10%) but declined significantly at higher rates. This is consistent with the ACH dataset's 8% error rate, at which no major performance drop was observed. Across both datasets, TF-IDF features outperformed embedding features, and machine learning models varied in effectiveness, highlighting that the optimal feature representation and model depend on the specific task.
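The TF-IDF-plus-classifier evaluation reported above can be sketched as below. This is a minimal illustration using scikit-learn, not the paper's pipeline: the toy progress notes, labels, classifier choice, and train/test split are all invented for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy progress notes with binary outcome labels (invented for illustration).
notes = [
    "patient stable no acute distress",
    "severe sepsis transferred to icu",
    "routine follow up vitals normal",
    "acute respiratory failure intubated",
    "discharged home in good condition",
    "cardiac arrest resuscitation performed",
    "mild headache resolved with rest",
    "multi organ failure poor prognosis",
]
labels = [0, 1, 0, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    notes, labels, test_size=0.5, stratify=labels, random_state=0
)

# TF-IDF feature representation feeding a linear classifier.
vectorizer = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Evaluate with AUROC, as in the study's comparisons.
scores = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
auc = roc_auc_score(y_test, scores)
print(f"AUROC: {auc:.2f}")
```

Comparing representations then amounts to repeating this loop with the vectorizer swapped for an embedding model and with errors injected into the notes at each target rate.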
Conclusions:
The study revealed that models performed relatively well on datasets with lower error rates (<10%) but that their performance declined significantly as error rates increased (≥10%). It is therefore crucial to evaluate the quality of a dataset before using it for machine learning tasks. For datasets with higher error rates, corrective measures are essential to ensure the reliability and effectiveness of machine learning models.