Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jun 28, 2023
Date Accepted: Apr 29, 2024
What patients like and dislike about their physicians: Developing and testing a natural-language-processing algorithm to classify social judgments in online physician reviews
ABSTRACT
Background:
Patients increasingly rely on online physician reviews to choose a physician and to share their experiences. However, the unstructured text of these reviews presents a challenge for researchers seeking to make inferences about patients’ judgments. Methods previously used to identify patient judgments within reviews, such as hand-coding and dictionary-based approaches, limit sample size and classification accuracy. Advanced natural language processing methods can overcome these limitations and enable further analysis of physician reviews on these popular platforms.
Objective:
We aimed to train, test, and validate an advanced natural language processing algorithm for classifying the presence and valence of two social judgments in online physician reviews: interpersonal manner and technical competence.
Methods:
We sampled 345,053 reviews of 167,150 physicians across the United States from Healthgrades.com, a commercial online physician rating and review website. We hand-coded 2,000 reviews and used them to fine-tune and test two classification models based on the Robustly Optimized BERT Pretraining Approach (RoBERTa) transformer. The two fine-tuned models coded each review for the presence and valence (positive or negative) of patients’ interpersonal manner and technical competence judgments of their physicians. We evaluated the performance of the two models against 200 hand-coded reviews and validated the models using the full sample of 345,053 RoBERTa-coded reviews.
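To illustrate the coding scheme described above, each fine-tuned model can be read as assigning one of three codes per review per dimension: the judgment is not mentioned, or it is present with positive or negative valence. A minimal sketch of mapping the two models' class scores to review codes (the label names, label order, and function names are illustrative assumptions, not the authors' code):

```python
# Hypothetical three-class label scheme: judgment absent, or present
# with positive/negative valence. The label order is an assumption.
LABELS = ("not mentioned", "positive", "negative")

def code_review(manner_logits, competence_logits):
    """Code one review by taking the argmax of each model's class scores."""
    def argmax_label(logits):
        return LABELS[max(range(len(logits)), key=logits.__getitem__)]
    return {
        "interpersonal_manner": argmax_label(manner_logits),
        "technical_competence": argmax_label(competence_logits),
    }

# Example: the manner model scores "positive" highest, the competence
# model scores "not mentioned" highest.
codes = code_review([0.1, 2.3, -1.0], [1.5, 0.2, 0.1])
# → {"interpersonal_manner": "positive", "technical_competence": "not mentioned"}
```

In practice the class scores would come from two separately fine-tuned sequence-classification heads; the sketch only shows how their outputs combine into a per-review code.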
Results:
The interpersonal manner model was 90% accurate with precision of 0.89, recall of 0.90, and weighted F1 score of 0.89. The technical competence model was 90% accurate with precision of 0.91, recall of 0.90, and weighted F1 score of 0.90. Positive-valence judgments were associated with higher review star ratings whereas negative-valence judgments were associated with lower star ratings. Analysis of the data by review rating and physician gender corresponded with findings in prior literature.
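The reported metrics follow standard definitions; a minimal stdlib sketch of how accuracy and weighted F1 are computed from hand-coded labels and model predictions (illustrative only, not the authors' evaluation code):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the hand-coded labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1 scores averaged, weighted by each class's support."""
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == label != t)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label != p)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += support[label] / total * f1
    return score
```

Weighted averaging is the natural choice here because the three codes (not mentioned, positive, negative) are unlikely to be balanced in a 200-review test set.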
Conclusions:
Our two classification models coded patients' interpersonal manner and technical competence judgments in online physician reviews with high precision, recall, and accuracy. These models were validated using review star ratings and results from previous research. RoBERTa can accurately classify unstructured, online review text at scale. Future work could explore the use of this algorithm with other textual data, such as social media posts and electronic health records.