Speech-Based Cognitive Screening: A Systematic Evaluation of LLM Adaptation Strategies
ABSTRACT
Background:
Over half of U.S. adults with Alzheimer’s disease and related dementias (ADRD) remain undiagnosed. Speech-based screening algorithms offer a scalable approach, but the relative value of large language model (LLM) adaptation strategies is unclear.
Objective:
To compare LLM adaptation strategies for ADRD detection from the DementiaBank speech corpus using both text-only and multimodal models.
Methods:
We analyzed audio-recorded speech from 237 participants and report performance on a held-out test set (n=71). Nine text-only LLMs (3B–405B parameters; open-weight and commercial) and three multimodal audio–text models were evaluated. Adaptations included: (i) in-context learning (ICL) with four demonstration-selection policies (most-similar, least-similar, class-centroid/prototype, random); (ii) reasoning-augmented prompting (self- or teacher-generated rationales, self-consistency, and Tree-of-Thought prompting with domain-expert roles); (iii) parameter-efficient fine-tuning (token-level supervision vs. an added classification head); and (iv) multimodal audio–text integration. The primary outcome was F1 for the cognitively impaired (CI) class; AUC-ROC is reported where available.
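To make the best-performing demonstration policy concrete, the sketch below illustrates one plausible implementation of class-centroid (prototype) selection: embed each training transcript, average the embeddings within each diagnostic class, and use the transcripts nearest each class centroid as ICL demonstrations. The encoder, function name, and k value are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of class-centroid ("prototype") demonstration
# selection for ICL. Encoder choice, k, and data layout are assumptions,
# not the study's code.
import numpy as np
from sentence_transformers import SentenceTransformer

def select_prototype_demos(transcripts, labels, k_per_class=2):
    """Return indices of the k transcripts per class nearest that
    class's embedding centroid; these serve as ICL demonstrations."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
    emb = encoder.encode(transcripts, normalize_embeddings=True)
    chosen = []
    for label in sorted(set(labels)):
        idx = np.array([i for i, y in enumerate(labels) if y == label])
        centroid = emb[idx].mean(axis=0)
        centroid /= np.linalg.norm(centroid)  # re-normalize the mean vector
        sims = emb[idx] @ centroid  # cosine similarity on unit vectors
        chosen.extend(idx[np.argsort(sims)[::-1][:k_per_class]].tolist())
    return chosen
```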
Results:
Class-centroid (prototype) demonstrations achieved the highest ICL performance across model sizes (F1 up to 0.81). Reasoning augmentation primarily benefited smaller models: teacher-generated rationales raised LLaMA 8B's F1 from 0.72 to 0.76, and expert-role Tree-of-Thought improved its zero-shot F1 from 0.65 to 0.71. Token-level fine-tuning produced the highest scores overall (LLaMA 3B: F1=0.83, AUC=0.91; LLaMA 70B: F1=0.83, AUC=0.86; GPT-4o: F1=0.80, AUC=0.87). An added classification head markedly improved MedAlpaca 7B (F1 from 0.06 to 0.82), indicating that the benefit of this approach is model-dependent. Among multimodal models, fine-tuned Phi-4 Multimodal reached F1=0.80 for the CI class and 0.75 for the cognitively normal class but did not exceed the top text-only systems.
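The contrast between the two fine-tuning variants can be sketched briefly. In the token-level variant, the LLM is trained (for example, with LoRA adapters) to generate a label token under the ordinary causal-LM loss; in the classification-head variant, a randomly initialized linear layer over the pooled hidden state is trained with cross-entropy. The model ID and LoRA hyperparameters below are placeholders, not the study's configuration.

```python
# Minimal sketch contrasting the two parameter-efficient fine-tuning
# variants evaluated in the paper. Model ID, LoRA settings, and label
# handling are assumptions, not the study's configuration.
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification)
from peft import LoraConfig, TaskType, get_peft_model

BASE = "meta-llama/Llama-3.2-3B"  # placeholder model ID

# (a) Token-level fine-tuning: the model learns to emit a label token
# (e.g., " CI" vs. " CN") as ordinary next-token prediction.
lm = AutoModelForCausalLM.from_pretrained(BASE)
lm = get_peft_model(lm, LoraConfig(task_type=TaskType.CAUSAL_LM,
                                   r=16, lora_alpha=32))

# (b) Added classification head: a randomly initialized linear layer
# over the final hidden state, trained with cross-entropy on 2 classes.
clf = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)
clf = get_peft_model(clf, LoraConfig(task_type=TaskType.SEQ_CLS,
                                     r=16, lora_alpha=32))
```

The MedAlpaca 7B result is consistent with this distinction: when a model's generative head handles label-token supervision poorly, the discriminative head can recover classification performance.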
Conclusions:
Detection accuracy is influenced by demonstration selection, reasoning design, and tuning method. Token-level fine-tuning is generally most effective, while a classification head benefits models that perform poorly under token-based supervision. Properly adapted open-weight models can match or exceed commercial LLMs, supporting their use in scalable speech-based ADRD screening. Current multimodal models may require improved audio–text alignment and/or larger training corpora.