
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Mar 2, 2025
Date Accepted: Oct 24, 2025

The final, peer-reviewed published version of this preprint can be found here:

Assessing the Impact of the Quality of Textual Data on Feature Representation and Machine Learning Models: Quantitative Study Using Large Language Models

Sarwar T, Yepes AJ, Cavedon L

J Med Internet Res 2025;27:e73325

DOI: 10.2196/73325

PMID: 41468574

PMCID: 12811037

Assessing the Impact of the Quality of Textual Data on Feature Representation and Machine Learning Models: A Quantitative Study Using Large Language Models

  • Tabinda Sarwar; 
  • Antonio Jimeno Yepes; 
  • Lawrence Cavedon

ABSTRACT

Background:

Data collected in controlled settings typically result in high-quality datasets; in real-world applications, however, the quality of data collection is often compromised. It is well established that dataset quality significantly affects the performance of machine learning models. In health care, detailed information about individuals is often recorded in free-text progress notes. Given the critical nature of health applications, it is essential to evaluate the impact of textual data quality, as an incorrect prediction can have serious, potentially life-threatening consequences.

Objective:

This study aims to quantify the quality of textual datasets and systematically evaluate the impact of varying levels of errors on feature representation and machine learning models. The primary goal is to determine whether feature representations and machine learning models are tolerant to errors and to assess whether investing additional time and computational resources to improve data quality is justified.

Methods:

A rudimentary error rate metric was developed to evaluate textual dataset quality at the token level. The Mixtral large language model (LLM) was used to quantify and correct errors in low-quality datasets. The study analyzed two healthcare datasets: the high-quality, publicly available MIMIC-III hospital dataset (for mortality prediction) and a lower-quality private dataset from Australian aged care homes (ACH; for depression and fall risk prediction). Errors were systematically introduced into MIMIC-III at varying rates, while the quality of the ACH dataset was improved using the LLM. Feature representations and machine learning models were assessed using the area under the receiver operating characteristic curve (AUROC).
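The abstract does not spell out the error rate formula; a minimal sketch, assuming the rudimentary token-level metric is simply the fraction of tokens flagged as erroneous (here approximated by absence from a reference vocabulary):

```python
def token_error_rate(text: str, vocabulary: set[str]) -> float:
    """Fraction of tokens not found in a reference vocabulary.

    A hedged approximation of the study's token-level metric: the exact
    definition is not given in the abstract, and real progress notes would
    need a medical vocabulary to avoid flagging valid terminology.
    """
    tokens = [t.lower().strip(".,;:!?()") for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    errors = sum(1 for t in tokens if t not in vocabulary)
    return errors / len(tokens)

vocab = {"patient", "reported", "pain", "in", "the", "left", "knee"}
note = "Patient reproted pain in the lefft knee"
print(round(token_error_rate(note, vocab), 2))  # 2 of 7 tokens flagged -> 0.29
```

In practice, the reference vocabulary would be a clinical lexicon rather than a toy set; the abstract notes that medical terminology is exactly where token-level error detection misfires.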

Results:

For the 35,774 and 6,336 patients sampled from the MIMIC-III and ACH datasets, respectively, we used Mixtral to introduce errors into MIMIC-III and to correct errors in ACH. Mixtral correctly detected errors in 63% of progress notes, with 17% containing a single token misclassified due to medical terminology. LLMs demonstrated potential for improving progress note quality by addressing various errors. Under varying error rates (5% to 20%, in 5% increments), feature representation performance was tolerant of lower error rates (<10%) but declined significantly at higher rates. This is consistent with the ACH dataset's 8% error rate, at which no major performance drop was observed. Across both datasets, TF-IDF features outperformed embedding features, and machine learning models varied in effectiveness, indicating that the optimal feature representation and model depend on the specific task.
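The systematic error introduction is not detailed in the abstract; one plausible sketch corrupts a target fraction of tokens with adjacent-character swaps, a common typo model (the study's actual perturbation method may differ):

```python
import random

def corrupt(text: str, rate: float, seed: int = 0) -> str:
    """Introduce typos into roughly `rate` of the tokens in `text`.

    Hypothetical error model (the paper's exact perturbations are not
    given in the abstract): swap two adjacent characters in each
    randomly selected token.
    """
    rng = random.Random(seed)
    tokens = text.split()
    n_errors = int(len(tokens) * rate)
    for i in rng.sample(range(len(tokens)), n_errors):
        t = tokens[i]
        if len(t) > 1:  # single-character tokens are left unchanged
            pos = rng.randrange(len(t) - 1)
            tokens[i] = t[:pos] + t[pos + 1] + t[pos] + t[pos + 2:]
    return " ".join(tokens)

note = "patient was alert and oriented with no acute distress noted today"
print(corrupt(note, rate=0.2))  # roughly 2 of 11 tokens corrupted
```

A fixed seed makes each error-rate condition reproducible, which matters when the same corrupted corpus must feed several feature representations and models.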

Conclusions:

The study revealed that models performed relatively well on datasets with lower error rates (<10%), but their performance declined significantly as error rates increased (≥10%). It is therefore crucial to evaluate the quality of a dataset before using it for machine learning tasks. For datasets with higher error rates, corrective measures are essential to ensure the reliability and effectiveness of machine learning models.
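As a reference for the AUROC metric used to report model performance throughout, a minimal rank-based implementation (assuming no tied scores; ties would require midranks):

```python
def auroc(labels: list[int], scores: list[float]) -> float:
    """Rank-based AUROC: the probability that a randomly chosen positive
    example receives a higher score than a randomly chosen negative one.
    Assumes binary labels (0/1) and no tied scores.
    """
    ranked = sorted(zip(scores, labels))  # ascending by score
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Sum of 1-based ranks of the positive examples
    rank_sum = sum(rank for rank, (_, y) in enumerate(ranked, start=1) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auroc([1, 0, 1, 0], [0.9, 0.2, 0.7, 0.4]))  # 1.0: every positive outranks every negative
```

An AUROC of 0.5 corresponds to chance-level ranking, which is the baseline against which the error-rate-induced performance drops are judged.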


 Citation

Please cite as:

Sarwar T, Yepes AJ, Cavedon L

Assessing the Impact of the Quality of Textual Data on Feature Representation and Machine Learning Models: Quantitative Study Using Large Language Models

J Med Internet Res 2025;27:e73325

DOI: 10.2196/73325

PMID: 41468574

PMCID: 12811037
