Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Feb 18, 2020
Date Accepted: Apr 15, 2020
Date Submitted to PubMed: May 22, 2020

The final, peer-reviewed published version of this preprint can be found here:

Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review

Abd-Alrazaq A, Safi Z, Alajlani M, Warren J, Househ M, Denecke K

J Med Internet Res 2020;22(6):e18301

DOI: 10.2196/18301

PMID: 32442157

PMCID: 7305563

Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review

  • Alaa Abd-Alrazaq; 
  • Zeineb Safi; 
  • Mohannad Alajlani; 
  • Jim Warren; 
  • Mowafa Househ; 
  • Kerstin Denecke

Background:

Dialog agents (chatbots) have a long history of application in health care, where they have been used for tasks such as supporting patient self-management and providing counseling. Their use is expected to grow with increasing demands on health systems and improving artificial intelligence (AI) capability. Approaches to the evaluation of health care chatbots, however, appear to be diverse and haphazard, resulting in a potential barrier to the advancement of the field.

Objective:

This study aims to identify the technical (nonclinical) metrics used by previous studies to evaluate health care chatbots.

Methods:

Studies were identified by searching 7 bibliographic databases (eg, MEDLINE and PsycINFO) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. The studies were independently selected by two reviewers who then extracted data from the included studies. Extracted data were synthesized narratively by grouping the identified metrics into categories based on the aspect of chatbots that the metrics evaluated.

Results:

Of the 1498 citations retrieved, 65 studies were included in this review. Chatbots were evaluated using 27 technical metrics, which were related to chatbots as a whole (eg, usability, classifier performance, speed), response generation (eg, comprehensibility, realism, repetitiveness), response understanding (eg, chatbot understanding as assessed by users, word error rate, concept error rate), and esthetics (eg, appearance of the virtual agent, background color, and content).
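To illustrate one of the objective metrics named above, here is a minimal sketch of word error rate (WER) computed as word-level edit distance divided by reference length. The function name and example utterances are illustrative only and are not drawn from the reviewed studies:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substituted word in a four-word reference -> WER 0.25
print(word_error_rate("i have a headache", "i have a headach"))
```

Concept error rate follows the same scheme but is computed over extracted concepts rather than surface word tokens.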

Conclusions:

The technical metrics of health chatbot studies were diverse, with survey designs and global usability metrics dominating. The lack of standardization and paucity of objective measures make it difficult to compare the performance of health chatbots and could inhibit advancement of the field. We suggest that researchers more frequently include metrics computed from conversation logs. In addition, we recommend the development of a framework of technical metrics with recommendations for specific circumstances for their inclusion in chatbot studies.


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.