Currently submitted to: JMIR Formative Research

Date Submitted: Jan 27, 2025
Open Peer Review Period: Feb 3, 2025 - Mar 31, 2025
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Comparing AI Chatbots and Traditional Medical Sources for Hysterectomy Patient Education: A Study on Professionalism, Readability, and Patient Education Quality

  • Guanghua Zhou

ABSTRACT

Background:

This study compared the professionalism, readability, and patient education quality of AI-generated responses (ChatGPT and Gemini) with the American Society of Anesthesiologists (ASA) website for eight frequently asked hysterectomy questions.

Objective:

To compare professionalism, readability, and patient education quality between AI chatbots (ChatGPT and Gemini) and the American Society of Anesthesiologists (ASA) website in answering eight common hysterectomy questions, and to evaluate whether AI-generated content can serve as a reliable source of patient education for hysterectomy.

Methods:

Blinded experts evaluated professionalism, while six readability indices and the Patient Education Materials Assessment Tool (PEMAT) were used to assess content quality. Statistical comparisons were performed with p < 0.05 considered significant.

Results:

ChatGPT and Gemini demonstrated significantly higher professionalism scores than the ASA website (p < 0.05), but their readability was lower (p < 0.05). There were no significant differences in professionalism or readability between ChatGPT and Gemini (p > 0.05). Although AI-generated responses aligned with clinical guidelines, limited readability remains a concern.

Conclusions:

AI-driven content provides professional and accurate patient education on hysterectomy. However, further refinements are needed to improve accessibility without compromising quality.

Clinical Trial: No patient personal information, clinical data, or health records were involved; therefore, ethical committee approval was not required.


Citation

Please cite as:

Zhou G

Comparing AI Chatbots and Traditional Medical Sources for Hysterectomy Patient Education: A Study on Professionalism, Readability, and Patient Education Quality

JMIR Preprints. 27/01/2025:71842

DOI: 10.2196/preprints.71842

URL: https://preprints.jmir.org/preprint/71842


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.