Accepted for/Published in: Interactive Journal of Medical Research
Date Submitted: Mar 1, 2023
Date Accepted: Jul 27, 2023
Evaluation of the Appropriateness and Comprehensiveness of Perioperative Patient Education in Thoracic Surgery Using ChatGPT in Different Language Contexts: A Pilot Study
ABSTRACT
Background:
The release of a dialogue-based artificial intelligence language model called ChatGPT (https://openai.com/blog/chatgpt/) has garnered global attention. An exploratory study published in The Journal of the American Medical Association (JAMA) demonstrated the potential of interactive AI to assist clinical workflows by augmenting patient education and patient-clinician communication.
Objective:
This study aimed to evaluate the appropriateness and comprehensiveness of perioperative patient education in thoracic surgery using ChatGPT in different language contexts (English and Chinese).
Methods:
This pilot study was conducted in February 2023. A total of 37 questions focused on perioperative patient education in the context of thoracic surgery were formulated. For each question, two inquiries were made to ChatGPT, one in English and the other in Chinese, and all responses were documented. The two sets of responses were evaluated separately by experienced thoracic surgical clinicians on two aspects: appropriateness and comprehensiveness. Responses were labeled Y (yes) if deemed appropriate or comprehensive relative to a hypothetical draft response to a patient's question on the electronic information platform, and N (no) if not. The unpaired χ2 test or the Fisher exact test was used to assess differences in distributions between the categorical variables studied.
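The comparison described above reduces to testing a 2×2 contingency table of qualified vs. unqualified responses per language. As a minimal sketch (not the authors' actual analysis code; the function name is illustrative), a two-sided Fisher exact test can be computed from the hypergeometric distribution, using the counts reported in the Results (34 qualified, 3 unqualified in each language context):

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def hyper_p(x):
        # Hypergeometric probability of observing x in the top-left cell,
        # with all row and column margins held fixed.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper_p(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(row1, col1)       # largest feasible top-left cell
    # Sum the probabilities of all tables at least as extreme (no more
    # probable) than the observed one; the small factor guards against
    # floating-point ties.
    return sum(hyper_p(x) for x in range(lo, hi + 1)
               if hyper_p(x) <= p_obs * (1 + 1e-9))

# 34 qualified / 3 unqualified in both English and Chinese contexts.
p_value = fisher_exact_two_sided([[34, 3], [34, 3]])
print(round(p_value, 4))
```

With identical counts in both groups, the observed table is the most probable one, so every table is counted and the p-value is 1.0, consistent with the reported absence of a statistically significant difference.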
Results:
A total of 35 reviewers participated in this study. Twenty-four of these reviewers assessed the English responses, and all reviewers assessed the Chinese responses. Thirty-four responses (91.9%) were qualified in both the English and Chinese contexts, while the remaining 3 responses (8.1%) were unqualified in both contexts. There was no statistically significant difference (91.9% vs 91.9%) in the qualification rate between the two sets.
Conclusions:
In summary, while ChatGPT can be an effective resource for delivering general medical knowledge, its use for tailored medical guidance or diagnostic purposes should be approached with caution. It is always recommended to consult with a licensed healthcare professional for accurate and up-to-date medical information and advice.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.