
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jun 27, 2023
Date Accepted: Apr 3, 2024

The final, peer-reviewed published version of this preprint can be found here:

Does an App a Day Keep the Doctor Away? AI Symptom Checker Applications, Entrenched Bias, and Professional Responsibility

Zawati MH, Lang M

J Med Internet Res 2024;26:e50344

DOI: 10.2196/50344

PMID: 38838309

PMCID: 11187504

Does an App a Day Keep the Doctor Away? AI Symptom Checker Applications, Entrenched Bias, and Professional Responsibility

  • Ma'n H. Zawati; 
  • Michael Lang

ABSTRACT

The growing prominence of artificial intelligence (AI) in mobile health has given rise to a distinct subset of applications that provide users with diagnostic information based on their inputted health status and symptoms: AI-powered symptom checker apps (AISympCheck). While these applications may increase access to health care, they raise consequential ethical and legal questions. Two notable concerns are that they may further entrench preexisting biases in the health care system and that their ambiguous relationship to health professionals may generate confusion about obligations and liability. First, bias entrenchment often originates in the data used to train AI systems, causing the AI to replicate existing inequalities through a "garbage in, garbage out" phenomenon. Users of these applications are also unlikely to be demographically representative of the larger population, leading to distorted results. Second, professional accountability poses a substantial challenge given the vast diversity of AISympCheck applications and the lack of regulation surrounding their reliability. It is unclear whether these applications should be subject to safety reviews, who is responsible for app-mediated misdiagnosis, and whether physicians ought to recommend them. With the rapidly increasing number of applications, little guidance remains available for health professionals. Professional bodies and advocacy organizations have a particularly important role to play in addressing these ethical and legal gaps: technical safeguards implemented within these applications could mitigate bias, AI systems could be trained primarily on neutral data, and applications could be subjected to a system of regulation that allows users to make informed decisions. Entrenched bias and professional responsibility, while operating in different ways, are ultimately exacerbated by the unregulated nature of mobile health.
This paper examines these concerns through two examples of AISympCheck applications: Babylon and Ada.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.