Maintenance Notice

Due to necessary scheduled maintenance, the JMIR Publications website will be unavailable from Wednesday, July 01, 2020 at 8:00 PM to 10:00 PM EST. We apologize in advance for any inconvenience this may cause you.

Who will be affected?

Currently submitted to: Journal of Medical Internet Research

Date Submitted: Jan 23, 2026
Open Peer Review Period: May 12, 2026 - Jul 12, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Large language models display human-like social desirability biases in health screening questionnaires: A Comparative Study of Four Models

  • Andrew O'Malley

ABSTRACT

Background:

Large language models are increasingly used in health research and clinical education. However, they may exhibit social desirability bias (SDB), a tendency to present overly favourable responses when evaluation is inferred.

Objective:

The objective of this study was to quantify the presence and extent of social desirability bias in large language models across mental health and lifestyle screening instruments by evaluating the impact of contextual awareness on model outputs.

Methods:

Four OpenAI models (GPT-4o, GPT-4o-mini, o1, o1-mini) were tested on four instruments sensitive to SDB: GAD-7, PHQ-9, AUDIT, and FANTASTIC. Each model completed 100 "naïve" trials (single items presented independently) and 100 "informed" trials (full questionnaires, revealing evaluative context). Responses were compared to human normative baselines using z-tests. Differences between naïve and informed responses were assessed using t-tests and reported in units of human standard deviations (SDs).

Results:

All models deviated significantly from human norms on at least two instruments (p<0.001). Informed prompts consistently produced more socially desirable outputs. Mean PHQ-9 and GAD-7 scores dropped by up to 11.9 points (1.3 SD); AUDIT and FANTASTIC scores shifted toward "healthier" profiles. Smaller models showed greater SDB on mental health tools, whereas larger reasoning models showed greater SDB on lifestyle questionnaires.

Conclusions:

Large language models systematically under-report stigmatised symptoms and behaviours when they detect evaluation. While this mimics human patient behaviour and may potentially aid high-fidelity simulation, it may introduce significant systematic error when models are used for objective data processing or synthetic cohort generation.


 Citation

Please cite as:

O'Malley A

Large language models display human-like social desirability biases in health screening questionnaires: A Comparative Study of Four Models

JMIR Preprints. 23/01/2026:92057

URL: https://preprints.jmir.org/preprint/92057

Download PDF


Request queued. Please wait while the file is being generated. It may take some time.

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on it's website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a cc-by license on publication, at this stage authors and publisher expressively prohibit redistribution of this draft paper other than for review purposes.