Artificial intelligence (AI) is beginning to make our roads, construction sites, and manufacturing plants safer and more efficient.1 Why not in our hospitals? Stanford University has formed a Partnership in AI-Assisted Care (PAC) to do just that. Their mission: Build AI-enabled smart hospitals that are safer and take better care of patients.2
The harsh truth is that hospitals are dangerous places. The Institute of Medicine’s landmark book To Err is Human reported an appalling 98,000 deaths per year resulting from preventable medical errors – “the equivalent of a jumbo jet per day”.3 Released in 1999, the book served as a catalyst for the US healthcare system, inspiring a new era of surgical checklists, treatment protocols, and electronic medication order entry. Yet despite two decades of work to improve hospital safety, we have fallen far short of patient safety goals.
Is healthcare reaching the limits of human-designed and human-implemented safety interventions? Can human intelligence alone solve this problem?
Stanford researchers and business leaders argue that over-burdened healthcare workers and clinicians are set up to fail. Human error is pervasive in the complexity of patient care, and even the best protocols are vulnerable to behavioral lapses and cognitive shortcuts.4 Stanford leaders have keyed into this human shortcoming and are using a type of AI known as computer vision as a form of “behavioral assistance”.2
The organization started with the simplest (and perhaps most costly) of hospital errors – poor hand hygiene. Hand washing has been shown to prevent infections, save lives, and spare the healthcare system a considerable amount of money.4 In 2014, the annual cost of hospital-acquired infections was estimated to carry a 10 billion-dollar price tag.5
Currently, organizations deploy “secret shoppers” into hospital wards to collect hand hygiene data and help reinforce hand washing adherence. Stanford has begun automating this process through the strategic placement of smart cameras that remotely collect hand hygiene data. Computer vision AI can interpret visual data – combining pattern-space analytics, de-identified depth images, and sanitizer dispenser data – to detect missed hand hygiene events with greater than 95% accuracy.6 In the near term, researchers and hospital leaders hope to use this technology as a behavioral nudge – alerting non-compliant clinicians in real time to wash their hands before entering a patient room.6
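The fusion step can be illustrated with a minimal sketch. This is not the Stanford system’s actual pipeline – the event names, timestamps, and 60-second compliance window below are all illustrative assumptions; the idea is simply that camera-detected room entries are cross-checked against dispenser activations, and entries with no recent dispense are flagged as missed hygiene events.

```python
from datetime import datetime, timedelta

# Hypothetical event streams: timestamps when a camera detects a clinician
# entering a patient room, and when a sanitizer dispenser is activated.
entries = [datetime(2018, 11, 1, 9, 0, 30), datetime(2018, 11, 1, 9, 15, 0)]
dispenses = [datetime(2018, 11, 1, 9, 0, 20)]

WINDOW = timedelta(seconds=60)  # assumed compliance window before entry

def missed_hygiene_events(entries, dispenses, window=WINDOW):
    """Flag room entries with no dispenser activation in the preceding window."""
    missed = []
    for t in entries:
        if not any(t - window <= d <= t for d in dispenses):
            missed.append(t)
    return missed

# First entry follows a dispense by 10 seconds (compliant); the second has
# no dispense in the prior minute, so only it is flagged.
print(missed_hygiene_events(entries, dispenses))
```

In a real deployment the flagged events would drive the real-time nudge described above, rather than a report after the fact.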
In the next decade, the Stanford PAC hopes to expand the use of this technology by putting computer vision inside the patient room. Direct observation using smart cameras can identify patient behaviors and physiologic changes that predict safety events. For instance, in the intensive care unit (ICU), computer vision has already shown potential in automating patient monitoring tasks currently performed by expensive and highly trained nurses and doctors. The Stanford team hopes to use 3D and infrared vision computing to monitor patients’ mobility level, bed-turn frequency, and urinary incontinence to predict and prevent emergencies such as patient falls and devastating pressure ulcers.2
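One of those monitoring tasks – tracking bed-turn frequency to prevent pressure ulcers – reduces to a simple interval check once the vision system supplies repositioning timestamps. The sketch below is an assumption-laden illustration: the two-hour turn interval is a commonly cited protocol, not a figure from the Stanford team.

```python
from datetime import datetime, timedelta

# Hypothetical bed-turn alerting: the vision system emits a timestamp each
# time it detects the patient being repositioned; alert when the interval
# since the last turn exceeds the protocol threshold.
TURN_INTERVAL = timedelta(hours=2)  # assumed pressure-ulcer protocol

def needs_turn_alert(last_turn, now, interval=TURN_INTERVAL):
    """True if the patient has not been repositioned within the interval."""
    return now - last_turn > interval

now = datetime(2018, 11, 1, 12, 0)
print(needs_turn_alert(datetime(2018, 11, 1, 9, 30), now))   # 2.5 h elapsed
print(needs_turn_alert(datetime(2018, 11, 1, 10, 30), now))  # 1.5 h elapsed
```

The hard part, of course, is the detection itself; once turns are detected reliably, the alerting logic is trivial to automate.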
Despite the obvious benefits of AI, managers are likely to face significant adoption barriers from stakeholders at every level: patients, clinicians, even board members. Patients are likely to view cameras in the exam room as an unwanted guest – a significant threat to personal privacy. Patient education will be critical to communicating the safety value of vision computing and assurances of privacy safeguards (including de-identification of images). Clinicians may view AI-based automation as a professional threat with the potential to usurp their functions in patient care. Managers should position AI as a tool to enhance clinician effectiveness, enabling them to spend more time treating patients and less time collecting data and documenting. Moreover, clinician buy-in will be critical in providing the feedback needed to refine the accuracy of computer vision algorithms. Finally, hospital board members and investors will likely question the cost-benefit calculus of deploying expensive AI technology across the enterprise. Beyond the safety benefits, AI has real potential to reduce overall costs by automating expensive monitoring tasks and improving operational efficiency. For example, managers could use computer vision to map clinician workflows and enable time-driven activity-based costing (TDABC) to improve human resource utilization.2,7
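The TDABC idea is straightforward arithmetic: vision-derived timestamps supply how long each activity takes, and a capacity cost rate converts minutes into dollars. The figures below are illustrative assumptions only (the salary, capacity, and activity durations are invented for the sketch), but they show how workflow data would feed Kaplan’s costing method.

```python
# Hypothetical TDABC sketch. A capacity cost rate is the fully loaded cost
# of a resource divided by its practical capacity; activity cost is then
# observed minutes times that rate.
annual_cost = 120_000.0          # assumed fully loaded annual cost of a nurse ($)
annual_capacity_min = 100_000.0  # assumed practical capacity (minutes/year)
rate = annual_cost / annual_capacity_min  # $1.20 per minute

# Durations (minutes) a vision system might observe for one encounter.
activity_minutes = {"vitals check": 6, "medication pass": 12, "documentation": 25}

activity_costs = {a: round(m * rate, 2) for a, m in activity_minutes.items()}
print(activity_costs)
```

Even in this toy version, the managerial signal is visible: if documentation consumes the most costly minutes, it is the first candidate for automation.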
As this technology advances, clinicians and hospitals may become reliant on computer vision AI to detect safety concerns. How do we protect the system against human complacency when “the computer is watching”? What can we learn from the airline industry to keep clinicians engaged when their smart hospital is on autopilot?
1J. Wilson, A. Alter, and S. Sachdev. “Business processes are learning to hack themselves”. Harvard Business Review Digital Articles (June 27, 2016)
2Stanford University, “Stanford Partnership in AI Assisted Care,” https://aicare.stanford.edu/index.php, (accessed November 2018).
3Institute of Medicine. To Err is Human – Building a Safer Health System. (November 1999).
4S. Yeung, et al. “Bedside Computer Vision — Moving Artificial Intelligence from Driver Assistance to Patient Safety”. N Engl J Med 2018; 378:1271-1273.
5E. Zimlichman, et al. “Health care-associated infections: a meta-analysis of costs and financial impact on the US health care system”. JAMA Intern Med; 2013; 173(22):2039-46.
6A. Haque, M. Guo, A. Alahi, et al. “Towards vision-based smart hospitals: a system for tracking and monitoring hand hygiene compliance”. Proc Mach Learn Res 2017; 68:75-87.
7R.S. Kaplan. “Improving Value with TDABC”. hfm (Healthcare Financial Management) 68, no. 6 (June 2014): 76-83.