Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jun 20, 2023
Open Peer Review Period: Jun 20, 2023 - Aug 15, 2023
Date Accepted: May 23, 2024
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Human-AI Teaming in Intensive Care: A Socio-Technical Systems View and International Delphi Study Among Data Scientists
ABSTRACT
Background:
Artificial intelligence (AI) and machine learning hold immense potential for enhancing clinical and administrative healthcare tasks. However, slow adoption and implementation challenges highlight the need to consider how humans can effectively collaborate with AI within the broader socio-technical system.
Objective:
We aim to explore the optimal utilization of human and AI capabilities by determining suitable levels of human-AI teaming for safely and meaningfully augmenting or automating tasks. We focus on intensive care units (ICUs) as an example and provide recommendations for policymakers and healthcare practitioners regarding AI deployment in healthcare settings.
Methods:
We conducted a systematic task analysis in six ICUs in Europe and carried out an international Delphi survey involving 19 health data scientists from academia and industry (response rate = 95%; 21% female; mean age = 38.6 years; mean experience = 12.63 years). Consensus was reached on the appropriate level of human-AI teaming for each task (Level 1 = no performance benefits from AI; Level 2 = AI augments human performance; Level 3 = human augments AI performance; Level 4 = AI performs without human input). Experts also considered ethical and social implications, as well as the distribution of control and accountability.
Results:
Levels 2 and 3 human-AI teaming were the preferred choices for four of the six core ICU tasks. However, this recommendation depends on AI systems providing transparency, predictability, and user control. If these conditions are not met, reverting to Level 1 or shifting accountability away from users is advised. Additionally, when AI demonstrates near-perfect reliability, Level 4 automation can enhance safety and efficiency, especially when the conditions for human-AI teaming are not met. Importantly, AI experts agree that certain tasks should not be augmented or automated because of ethical and social concerns related to the physician/nurse-patient relationship and the future roles of healthcare professionals.
Conclusions:
By considering the socio-technical system and determining appropriate levels of human-AI teaming, our study showcases the potential for improving the safety and effectiveness of AI utilization in ICUs and broader healthcare settings. Regulatory measures should prioritize transparency, predictability, and user control when users bear accountability. Ethical and social implications must be carefully evaluated to ensure effective collaboration between humans and AI, particularly in light of recent advancements in generative AI and large language models.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.