Accepted for/Published in: JMIR Formative Research

Date Submitted: Dec 16, 2024
Open Peer Review Period: Jan 23, 2025 - Mar 20, 2025
Date Accepted: Jun 17, 2025

The final, peer-reviewed published version of this preprint can be found here:

Köhler SM, Holtz S, Neff MC, Schaaf J, von Wagner M, Müller BS, Schütze D

Evaluating the Prototype of a Clinical Decision Support System in Primary Care: Qualitative Study

JMIR Form Res 2025;9:e69875

DOI: 10.2196/69875

PMID: 40835411

PMCID: 12367354

Evaluating the Prototype of a Clinical Decision Support System in Primary Care: A Qualitative Study

  • Susanne M. Köhler; 
  • Svea Holtz; 
  • Michaela C. Neff; 
  • Jannik Schaaf; 
  • Michael von Wagner; 
  • Beate S. Müller; 
  • Dania Schütze

ABSTRACT

Background:

General practitioners are confronted with a wide variety of diseases and, at times, diagnostic uncertainty. Clinical decision support systems could be valuable for improving diagnosis, especially for unusual manifestations of diseases and for rare diseases. However, existing tools are not adapted to the requirements and workflows of the primary care setting. In the project SATURN (Smart physician portal for patients with unclear disease), the prototype of a clinical decision support system based on artificial intelligence is being developed together with users specifically for primary care in Germany. It focuses on three medical fields, and a user-centered design approach is applied for prototype development and evaluation.

Objective:

This study evaluates the usability of a high-fidelity prototype and explores aspects of user experience, such as subjective impressions, satisfaction, and areas for improvement.

Methods:

Five general practitioners participated in the evaluation of the prototype, which consisted of (1) a remote think-aloud test, (2) a post-session interview, and (3) a survey using the System Usability Scale. All three parts were carried out consecutively in individual remote sessions. During the think-aloud tests, which were video- and audio-recorded, the participants verbalized their thoughts and actions while solving several tasks based on a primary care case vignette. Notable observations were logged, transcribed with quotes, and analyzed for usability problems and positive findings. All observations and interview responses were deductively assigned to the following categories: (1) content, (2) comprehensibility, (3) user-friendliness, (4) layout, (5) feedback, and (6) navigation. Usability problems were described in detail, and solutions for improvement were proposed. Median and total scores were calculated for all questionnaire items.

Results:

The evaluation detected both strengths and areas for improvement. Key issues identified were content-related limitations, such as the inability to enter unlisted symptoms, medications, and examination findings in the dropdown menus. Participants also found that the terminology for laboratory values did not match their day-to-day vocabulary, as common abbreviations were not recognized. Participants additionally suggested content improvements, including adding symptom duration, weighting symptoms, and incorporating hereditary factors. Another key issue was a lack of user-friendliness regarding the time required to enter medication plans and laboratory values. This aspect was criticized as cumbersome, with participants expressing a need for faster data entry, potentially through direct imports from practice management systems or laboratory files. Despite these challenges, participants praised other aspects of user-friendliness (use of stored diagnoses and symptoms) and navigation (top navigation bar), and particularly liked the clear and well-structured layout. Overall, the SATURN prototype was deemed useful and promising for future clinical use, despite the need for further refinements, particularly in data entry, which is a key obstacle to its use.

Conclusions:

The combined usability evaluation methods proved to be location-independent, easy to use, and well suited to detecting usability problems in detail. They provided important findings on usability issues and improvements that will be implemented in a second high-fidelity prototype, which will also be tested by users. Technically demanding user requirements, such as direct data transfer from the practice management system and entry options that require complex data models, were beyond the scope of this project. However, they should be considered in future CDSS development projects.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.