Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Nov 9, 2023
Open Peer Review Period: Nov 9, 2023 - Jan 4, 2024
Date Accepted: Mar 10, 2024
Quality, Accuracy and Reproducibility of Publicly-Available ChatGPT-4-Generated Documentation For Generation of Medical Notes: A Proof-of-Concept Study
ABSTRACT
Background:
Medical documentation plays a crucial role in clinical practice, facilitating accurate patient management and communication among healthcare professionals. However, inaccuracies in medical notes can lead to miscommunication and diagnostic errors. Additionally, the demands of documentation contribute to physician burnout. While intermediaries like medical scribes and speech recognition software have been used to ease this burden, they have limitations in terms of accuracy and addressing provider-specific metrics. The integration of ambient AI-powered solutions offers a promising way to improve documentation while fitting seamlessly into existing workflows.
Objective:
This study aims to assess the accuracy and quality of SOAP (Subjective, Objective, Assessment, and Plan) notes generated by ChatGPT-4, an AI model, using established transcripts of History and Physicals (H&Ps). We seek to identify potential errors and evaluate the model's performance across different categories.
Methods:
We conducted simulated patient-provider encounters representing various ambulatory specialties and transcribed the audio files. Key reportable elements were identified, and ChatGPT-4 was used to generate SOAP notes based on these transcripts. Three versions of each note were created, and errors were categorized as omissions, incorrect information, or additions. We compared the accuracy of data elements across versions, transcript length, and data categories. Additionally, we assessed note quality using the Physician Documentation Quality Instrument (PDQI) scoring system.
Results:
While ChatGPT-4 consistently generated SOAP-style notes, there were, on average, 23.6 errors per clinical case. Errors of omission were the most common (86%), followed by addition errors (10.5%) and inclusion of incorrect facts (3.2%). There was significant variance between replicates of the same case, with only 52.9% of data elements reported correctly across all 3 replicates. The accuracy of data elements varied across cases, with the highest accuracy observed in the objective section. Consequently, note quality, as assessed by the PDQI, demonstrated both intra- and inter-case variance. Finally, the accuracy of ChatGPT-4 was inversely correlated with both transcript length (P=.003) and the number of scorable data elements (P=.003).
Conclusions:
Our study reveals substantial variability in errors, accuracy, and note quality generated by ChatGPT-4. Errors were not limited to specific sections, and the inconsistency in error types across replicates complicates predictability. Transcript length and data complexity inversely correlated with note accuracy, raising concerns about the model's effectiveness in handling complex medical cases. The quality and reliability of AI-generated clinical notes produced by ChatGPT-4 do not meet the standards required for clinical use. While AI holds promise in healthcare, caution should be exercised before widespread adoption. Further research is needed to address the issues of accuracy, variability, and potential errors. ChatGPT-4, while a valuable tool in various applications, should not be considered a safe alternative to human-generated clinical documentation at this time.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.