
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Mar 16, 2020
Date Accepted: Oct 8, 2020

The final, peer-reviewed published version of this preprint can be found here:

Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development

Ammar N, Shaban-Nejad A

JMIR Med Inform 2020;8(11):e18752

DOI: 10.2196/18752

PMID: 33146623

PMCID: 7673979

An Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences (ACEs)

  • Nariman Ammar; 
  • Arash Shaban-Nejad

ABSTRACT

Background:

The study of adverse childhood experiences (ACEs) and their consequences has emerged over the past 20 years. However, while the conclusions from those studies are available, the underlying data generally are not, which makes building a training set and developing machine learning models from it a complex problem. Classic machine learning (ML) and artificial intelligence (AI) techniques cannot provide a full scientific understanding of the inner workings of their underlying models. This raises credibility issues due to a lack of transparency and generalizability. Explainable AI (XAI) is an emerging approach for promoting credibility, accountability, and trust in mission-critical areas such as medicine by combining ML approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) a decision has been made. One solution, therefore, is to consider how machine learning could benefit from knowledge graphs that combine "common sense" knowledge with semantic reasoning and causality models.

Objective:

In this paper, we leverage XAI to present a proof-of-concept prototype of a knowledge-driven, evidence-based recommendation system to improve mental health surveillance.

Methods:

We used concepts from an ontology that we developed to build and train a question-answering (QA) agent using the Google Dialogflow engine. In addition to the QA agent, the initial prototype includes knowledge graph generation and recommendation components that leverage a third-party graph technology.
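To make the semantics-driven recommendation idea concrete, the following is a minimal, self-contained sketch of how ontology triples can drive explainable recommendations. All concept and relation names here are hypothetical placeholders; the paper's actual ontology, graph technology, and Dialogflow agent are not reproduced.

```python
from collections import defaultdict

# Toy triple store standing in for an ACEs ontology fragment
# (subject, predicate, object); names are illustrative only.
TRIPLES = [
    ("food_insecurity", "is_a", "adverse_childhood_experience"),
    ("housing_instability", "is_a", "adverse_childhood_experience"),
    ("food_insecurity", "addressed_by", "food_bank_referral"),
    ("housing_instability", "addressed_by", "housing_assistance_program"),
]

def build_index(triples):
    """Index triples by (subject, predicate) for fast lookup."""
    index = defaultdict(list)
    for s, p, o in triples:
        index[(s, p)].append(o)
    return index

def recommend(index, observed_aces):
    """Return (recommendation, explanation) pairs for observed ACEs.

    The explanation is the reasoning path through the graph, which is
    the core idea behind an explainable, semantics-driven recommender:
    every suggestion carries the triples that justify it."""
    results = []
    for ace in observed_aces:
        for rec in index.get((ace, "addressed_by"), []):
            explanation = f"{ace} is_a ACE and is addressed_by {rec}"
            results.append((rec, explanation))
    return results

index = build_index(TRIPLES)
for rec, why in recommend(index, ["food_insecurity"]):
    print(rec, "|", why)
```

In a full system the triple store would be backed by the ontology and a graph database, and the QA agent would map a user's natural-language question to the `observed_aces` input; the dictionary index above merely illustrates the traversal-plus-explanation pattern.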

Results:

To showcase the framework's functionality, we present a prototype design and demonstrate its main features through four use case scenarios motivated by an initiative currently implemented at a children's hospital in Memphis, TN. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a Personal Health Library (PHL), and conducting a clinical trial to assess both the usability and usefulness of the implementation.

Conclusions:

Semantics-driven XAI can enhance health care practitioners' ability to provide explanations for the decisions they make.


