
Accepted for/Published in: JMIR AI

Date Submitted: Jan 26, 2025
Date Accepted: Oct 30, 2025

The final, peer-reviewed published version of this preprint can be found here:

McAlister KL, Gonzales L, Huberty J

Rethinking AI Workflows: Guidelines for Scientific Evaluation in Digital Health Companies

JMIR AI 2025;4:e71798

DOI: 10.2196/71798

PMID: 41343771

PMCID: 12677877

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Rethinking AI Workflows: Guidelines for Scientific Evaluation in Digital Health Companies

  • Kelsey Lynn McAlister; 
  • Lee Gonzales; 
  • Jennifer Huberty

ABSTRACT

Artificial intelligence (AI) is revolutionizing digital health, driving innovation in care delivery and operational efficiency. Despite its potential, many AI systems fail to meet real-world expectations because evaluation practices focus narrowly on short-term metrics such as efficiency and technical accuracy. Ignoring factors such as usability, trust, transparency, and adaptability hinders AI adoption, scalability, and long-term impact in health care. This paper emphasizes the importance of embedding scientific evaluation as a core operational layer throughout the AI lifecycle. We outline practical guidelines for digital health companies to improve AI integration and evaluation, informed by more than 35 years of experience in science, the digital health industry, and AI development. We describe a multi-step approach, including stakeholder analysis, real-time monitoring, and iterative improvement, that digital health companies can adopt to ensure robust AI integration. Key recommendations include assessing stakeholder needs, designing AI systems that can check their own work, conducting testing to address usability issues and biases, and ensuring continuous improvement to keep systems user-centered and adaptable. By integrating these guidelines, digital health companies can improve AI reliability, scalability, and trustworthiness, driving better health care delivery and stakeholder alignment.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.