
Currently submitted to: JMIRx Med

Date Submitted: Sep 2, 2025
Open Peer Review Period: Sep 3, 2025 - Oct 24, 2025
(closed for review but you can still tweet)

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint (What is a preprint?). Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, is likely to undergo changes before final publication if accepted, or may have been rejected/withdrawn (a note "no longer under consideration" will appear above).

Peer review me: Readers with interest and expertise are encouraged to sign up as a peer reviewer if the paper is within an open peer-review period (in that case, a "Peer Review Me" button to sign up as a reviewer is displayed above). All preprints currently open for review are listed here. Outside of the formal open peer-review period, we encourage you to tweet about the preprint.

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings, but the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.

Submit: To post your own preprint, simply submit to any JMIR journal and choose the appropriate settings to expose your submitted version as a preprint.

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Interactive Evaluation of an Adaptive-Questioning Symptom Checker Using Standardized Clinical Vignettes

  • Prathima Madda; 
  • Jagadeesh Kondru

ABSTRACT

Background:

Digital symptom checkers are widely used to guide care-seeking, yet evidence for triage safety in realistic, multi-turn use is limited. Prior evaluations often simulate single-pass inputs and rarely assess whether tools elicit the key clinical features needed for safe recommendations. Rigorous, interactive evaluations that quantify triage performance alongside the quality and burden of history-taking are needed.

Objective:

To evaluate the triage performance and history-taking quality of an adaptive-questioning symptom checker (CareRoute) using an interactive protocol on standardized clinical vignettes (Semigran et al., BMJ 2015; 45 cases).

Methods:

Each session began with only the presenting complaint; CareRoute asked follow-up questions adaptively, and the evaluator answered concisely per the vignette. At the end of questioning, CareRoute issued a triage recommendation. We compared CareRoute’s issued triage with the reference triage and computed history-taking quality from normalized features derived from each vignette’s Condensed Format. History-taking quality comprised (i) elicitation coverage—the percentage of a vignette’s normalized features obtained through questioning, and (ii) elicitation fraction—the proportion of surfaced normalized features (elicited or volunteered) that were obtained through questioning. Primary outcomes were triage concordance and history-taking quality; the secondary outcome was user burden (time spent answering questions). We did not evaluate possible diagnoses, though CareRoute issues them.
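The two history-taking metrics can be sketched as set ratios over a vignette's normalized features. This is a minimal illustration under stated assumptions (the function name and the representation of features as sets are hypothetical, not the authors' implementation):

```python
def history_taking_quality(all_features, elicited, volunteered):
    """Compute elicitation coverage and elicitation fraction.

    all_features: set of a vignette's normalized features (Condensed Format)
    elicited: subset obtained through the checker's questioning
    volunteered: subset stated up front (e.g., in the presenting complaint)
    """
    # Coverage: share of ALL normalized features obtained by questioning.
    coverage = len(elicited) / len(all_features)
    # Fraction: among features that surfaced at all (elicited or
    # volunteered), the share the checker obtained by questioning.
    surfaced = elicited | volunteered
    fraction = len(elicited) / len(surfaced)
    return coverage, fraction

# Toy example: 10 features, 7 elicited by questioning, 1 volunteered.
features = set(range(10))
cov, frac = history_taking_quality(features, set(range(7)), {9})
print(cov, frac)  # 0.7 0.875
```

Under this reading, coverage is penalized by features that never surface, while fraction isolates how much of the surfaced history the checker actively elicited.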

Results:

Exact 3-tier triage concordance was 88.9% (40/45; 95% CI 76.5–95.2%). Elicitation coverage had a median of 67% (IQR 60–71%), and elicitation fraction had a median of 70% (IQR 62–75%). CareRoute asked a median of 19 questions overall (IQR 16–20), with urgency-conditioned questioning: Emergency Care median 10 questions (IQR 4–14), Doctor Visit median 19 questions (IQR 18–20), Self Care median 19 questions (IQR 17–20).
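The reported interval for 40/45 (76.5–95.2%) is consistent with a Wilson score interval for a binomial proportion; the abstract does not state the method, so this reconstruction is an assumption. A minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion at ~95% (z=1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

lo, hi = wilson_ci(40, 45)
print(f"{lo * 100:.1f}%-{hi * 100:.1f}%")  # 76.5%-95.2%
```

Unlike the normal-approximation (Wald) interval, the Wilson interval behaves well for proportions near 1 and small n, which fits a 40/45 result.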

Conclusions:

In an interactive, vignette-constrained evaluation starting from only the presenting complaint, CareRoute achieved high 3-tier triage concordance (88.9%) with no under-triage on Emergency-reference vignettes, while eliciting most normalized features (median elicitation coverage 67%; median elicitation fraction 70%) with acceptable user burden via urgency-conditioned questioning.


 Citation

Please cite as:

Madda P, Kondru J

Interactive Evaluation of an Adaptive-Questioning Symptom Checker Using Standardized Clinical Vignettes

JMIR Preprints. 02/09/2025:83429

DOI: 10.2196/preprints.83429

URL: https://preprints.jmir.org/preprint/83429


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.