Accepted for/Published in: JMIR Human Factors
Date Submitted: Feb 19, 2025
Date Accepted: Apr 7, 2025
Date Submitted to PubMed: Apr 7, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Users' Intent to Use DeepSeek for Healthcare Purposes and Their Trust in the Large Language Model: Multinational Survey Study
ABSTRACT
Background:
Large language models (LLMs) increasingly serve as interactive healthcare resources, yet user acceptance remains underexplored.
Objective:
This study examines how ease of use, perceived usefulness, trust, and risk perception interact to shape intentions to adopt DeepSeek, an emerging LLM-based platform, for healthcare purposes.
Methods:
We adapted survey items from validated technology acceptance scales to assess DeepSeek’s functionality, focusing on constructs such as trust, intent to use for health, ease of use, perceived usefulness, and risk perception. The final 12-item questionnaire (using a four-point forced-choice Likert scale) was pilot-tested (n=20) for clarity and consistency. It was then distributed online to users in India, the United Kingdom (UK), and the United States (US) who had used DeepSeek within the past two weeks. Data analysis involved descriptive frequency assessments and partial least squares structural equation modeling (PLS-SEM) to evaluate the measurement and structural models, assessing direct and indirect effects as well as potential quadratic relationships.
Results:
A total of 556 complete responses were collected, almost evenly split across India (n=184), the United Kingdom (n=185), and the United States (n=187). When asked whether they were comfortable with their healthcare provider using AI tools, 59% (n=330) accepted AI use provided their doctor verified its output, and 31% (n=175) were enthusiastic about its use without conditions. Regarding large language model (LLM) use over the previous six months, 25% (n=140) used them once a month, 44% (n=243) every week, 20% (n=113) almost daily, and 11% (n=60) multiple times daily. For DeepSeek specifically, 33% (n=183) used it monthly, 28% (n=156) weekly, 25% (n=137) more than once per week, and 14% (n=80) almost every day. Its primary applications included academic and educational purposes (55%, n=308), use as a search engine (51%, n=282), and health-related queries (48%, n=265). When asked about their intent to adopt DeepSeek over other LLMs such as ChatGPT, 52% (n=290) were likely to switch and 29% (n=161) were very likely to do so. The analysis revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect effect on usage intention through trust, while perceived usefulness contributes both to trust development and directly to adoption. By contrast, risk perception negatively affects usage intent, underscoring the importance of robust data governance and transparency. Significant nonlinear paths were observed for ease of use and risk, indicating threshold or plateau effects. The measurement model demonstrated strong reliability and validity, supported by high composite reliability, average variance extracted, and discriminant validity measures.
Conclusions:
These findings extend technology acceptance and health informatics research by illuminating the multifaceted nature of user adoption in sensitive domains. Stakeholders should invest in trust-building strategies, user-centric design, and risk mitigation measures to encourage sustained and safe uptake of LLMs in healthcare. Future work can employ longitudinal designs or examine culture-specific variables to clarify how user perceptions evolve over time and across different regulatory environments. Such insights are critical for harnessing AI to enhance health outcomes.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.