Accepted for/Published in: JMIR Formative Research
Date Submitted: May 26, 2024
Open Peer Review Period: May 27, 2024 - Jul 22, 2024
Date Accepted: Aug 21, 2024
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Ensuring Accuracy and Equity: A Cross-Language Evaluation of Vaccination Information from ChatGPT and CDC
ABSTRACT
Background:
In the digital age, Large Language Models (LLMs) like ChatGPT have emerged as important sources of healthcare information. Their interactive capabilities offer promise for enhancing health access, particularly for groups facing traditional barriers such as insurance and language constraints. Despite their growing public health use, with millions of medical queries processed weekly, the quality of LLM-provided information remains inconsistent. Prior studies have predominantly assessed ChatGPT’s English responses, overlooking the needs of non-English speakers in the U.S. This study addresses this gap by evaluating the quality and linguistic parity of vaccination information from ChatGPT and the CDC, emphasizing health equity.
Objective:
This research aims to assess the quality and language equity of vaccination information provided by ChatGPT and the CDC in English and Spanish. It highlights the critical need for cross-language evaluation to ensure equitable health information access for all linguistic groups.
Methods:
We conducted a comparative analysis of ChatGPT’s and the CDC’s responses to frequently asked vaccination questions in both languages. The evaluation encompassed quantitative and qualitative assessments of accuracy, readability, and understandability. Accuracy was gauged by the perceived level of misinformation, readability by the Flesch-Kincaid reading ease score and grade level, and understandability by items from the NIH’s Patient Education Materials Assessment Tool (PEMAT).
Results:
The study found that both ChatGPT and the CDC provided mostly accurate and understandable responses. However, readability scores often exceeded the American Medical Association’s recommended levels, particularly in English. CDC responses outperformed ChatGPT’s in readability across both languages. Notably, some of ChatGPT’s Spanish responses appeared to be direct translations from English, leading to unnatural phrasing. The findings underscore both the potential and the challenges of using ChatGPT to broaden healthcare access.
Conclusions:
ChatGPT holds potential as a health information resource, but requires improvements in readability and linguistic equity to be truly effective for diverse populations. Crucially, the default user experience with ChatGPT, typically encountered by those without advanced language and prompting skills, can significantly shape health perceptions. This is vital from a public health standpoint, as the majority of users will interact with LLMs in their most accessible form. Ensuring that default responses are accurate, understandable, and equitable is imperative for fostering informed health decisions across diverse communities.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.