Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Nov 19, 2024
Date Accepted: May 23, 2025
Evaluation of ChatGPT-4 as a Virtual Outpatient Assistant in Puerperal Mastitis Management: A Content Analysis of Turkish Responses
Short Title: An Observational Study on Mastitis
ABSTRACT
Background:
The integration of artificial intelligence (AI) into clinical workflows holds promise for enhancing outpatient decision-making and patient education. ChatGPT, a large language model developed by OpenAI, has gained attention for its potential to support both clinicians and patients. However, its performance in the outpatient setting of general surgery remains underexplored.
Objective:
This study aimed to evaluate whether ChatGPT-4 can function as a virtual outpatient assistant in the management of puerperal mastitis by assessing the accuracy, clarity, and clinical safety of its responses to frequently asked patient questions in Turkish.
Methods:
Fifteen questions about puerperal mastitis were sourced from public healthcare websites and online forums. These questions were categorized into general information (n=2), symptoms and diagnosis (n=6), treatment (n=2), and prognosis (n=5). Each question was entered into ChatGPT-4 on September 3, 2024, and a single Turkish-language response was obtained. The responses were evaluated by a panel of three board-certified general surgeons and two general surgery residents using five criteria: sufficient length, patient-understandable language, accuracy, adherence to current guidelines, and patient safety. Quantitative metrics included the DISCERN score, the Flesch-Kincaid readability score, and inter-rater reliability assessed with the intraclass correlation coefficient (ICC).
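For readers unfamiliar with the readability metric named above, the Flesch-Kincaid grade level is computed as 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The sketch below is illustrative only: it uses a simplistic vowel-group syllable heuristic (tuned for English, not Turkish) rather than the validated tool the study would have used.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels as syllables (minimum 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Split into sentences on terminal punctuation and into word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A score around 13 or higher corresponds to university-level text, consistent with the graduate-level readability reported in the Results.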
Results:
ChatGPT’s responses were rated as “excellent” overall, with higher scores for treatment- and prognosis-related questions. DISCERN scores differed significantly between question categories (p=0.014), with treatment questions receiving the highest ratings. Flesch-Kincaid scores indicated readability at a university graduate level. While strong correlations were observed between adherence to the literature and patient safety for certain questions, evaluator consistency varied, with significant differences in accuracy ratings (p<0.001).
Conclusions:
ChatGPT demonstrated adequate capability in providing information on puerperal mastitis, particularly for treatment and prognosis. However, evaluator variability and the subjective nature of assessments highlight the need for further optimization of AI tools. Future research should emphasize iterative questioning and dynamic updates to AI knowledge bases to enhance reliability and accessibility.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.