Accepted for/Published in: JMIR Formative Research
Date Submitted: Jul 1, 2025
Open Peer Review Period: Jul 9, 2025 - Sep 3, 2025
Date Accepted: Nov 24, 2025
Empowering Informal Caregivers of Persons with Early-Stage Dementia by Large Language Models: Mixed Methods Evaluation
ABSTRACT
Background:
Acquiring relevant knowledge and support is essential for informal caregivers of persons with early-stage dementia, including awareness of, access to, and use of comprehensive resources for both persons with dementia and caregiver support. With appropriate strategies and early-stage support, informal caregivers can play a vital role in enhancing the well-being of persons with dementia and potentially slowing disease progression. While large language models (LLMs) can provide easy access to caregiving knowledge, the risks, perceived challenges, and ways to improve LLM-generated responses in practice remain underexplored.
Objective:
In this study, we aim to (1) examine the risks and perceived challenges of using a baseline ChatGPT-4o, an internet-accessible artificial intelligence (AI) model, for dementia caregiving support and (2) understand how an enhanced version of ChatGPT-4o, equipped with up-to-date dementia caregiving knowledge, can mitigate these risks and challenges.
Methods:
We compiled 32 representative questions from informal caregivers seeking guidance on early-stage dementia. We developed two ChatGPT-4o conditions: C1, the publicly available baseline model, and C2, an experimental version enhanced through prompt engineering and grounded in a conceptual framework—drawn from health science and gerontology literature—to empower caregivers of individuals with early-stage dementia. Using these conditions, we generated 64 responses (32 pairs) to the questions. Twelve experts evaluated them with validated tools assessing accuracy, reasoning, clarity, usefulness, trust, satisfaction, safety, harm, and relevance. A Mann–Whitney U test compared the conditions. After the survey, we conducted interviews to explore experts’ perceived differences, remaining challenges, and design opportunities. Interviews were transcribed and analyzed using descriptive thematic analysis.
Results:
Responses in C2 showed significant improvements in three criteria—actionability, relevance, and perceived satisfaction—compared to C1. However, no significant differences were found in the remaining six: response accuracy, the model’s ability to understand the question, intelligibility, trustworthiness, response safety, and perceived harm. Qualitative analysis of the interviews revealed two key insights: differences between baseline and experimental responses, and possible reasons for these differences. Experts commented on wordiness, detail, empathy, satisfaction, accuracy, relevance, and bias. Both models were considered somewhat verbose, but the experimental model’s responses were viewed as more detailed, relevant, and actionable. Accuracy appeared similar across models, yet participants reported greater satisfaction with the experimental model’s outputs.
Conclusions:
Results indicate that both conditions generated responses perceived as reasonable and intelligible. However, the experimental model offered more relevant, practical guidance on caregiving needs, providing specific information aligned with the 32 testing questions and actionable recommendations. This led to higher perceived satisfaction compared to the baseline model.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.