Stanford Health’s Vision of a “Smart Hospital”

Creating smart hospitals using artificial intelligence

Artificial intelligence (AI) is beginning to make our roads, construction sites, and manufacturing plants safer and more efficient.1 Why not in our hospitals? Stanford University has formed the Partnership in AI-Assisted Care (PAC) to do just that. Its mission: build AI-enabled smart hospitals that are safer and take better care of patients.2

The harsh truth is that hospitals are dangerous places. The Institute of Medicine’s landmark book To Err Is Human reported an appalling 98,000 deaths per year resulting from preventable medical errors – “the equivalent of a jumbo jet per day”.3 Released in 1999, the book served as a catalyst for the US healthcare system, inspiring a new era of surgical checklists, treatment protocols, and electronic medication order entry. Yet despite two decades of work to improve hospital safety, we have fallen far short of patient safety goals.

Is healthcare reaching the limits of human designed and implemented safety interventions? Can human intelligence alone solve this problem?

Stanford researchers and business leaders argue that over-burdened healthcare workers and clinicians are set up to fail. Human error is pervasive in the complexity of patient care, and even the best protocols are vulnerable to behavioral lapses and cognitive shortcuts. Stanford leaders have keyed into this human shortcoming and are using a type of AI known as computer vision as a form of “behavioral assistance”.2

The organization started with one of the simplest (and perhaps most costly) of hospital errors – poor hand hygiene. Hand washing has been shown to prevent infections, save lives, and save the healthcare system a considerable amount of money.4 As of 2014, the annual cost of hospital-acquired infections was estimated to carry a 10-billion-dollar price tag.5

Currently, organizations deploy “secret shoppers” into hospital wards to collect hand hygiene data and help reinforce hand washing adherence. Stanford has begun automating this process through strategic placement of smart cameras that remotely collect hand hygiene data. Computer vision AI interprets this visual data – combining pattern-space analytics, de-identified depth images, and sanitizer dispenser data – to detect missed hand hygiene events with greater than 95% accuracy.6 In the near term, researchers and hospital leaders hope to use this technology as a behavioral nudge, alerting non-compliant clinicians in real time to wash their hands before entering a patient room.6
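As a rough illustration only (not Stanford’s actual pipeline, which relies on learned models over depth video), the bookkeeping behind such a system amounts to reconciling de-identified camera tracks with dispenser logs and flagging room entries that were not preceded by a sanitizer dispense. The data structures, field names, and 60-second window below are hypothetical:

```python
# Illustrative sketch: join hypothetical depth-camera track events with
# sanitizer-dispenser logs and flag room entries with no recent dispense.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RoomEntry:
    person_id: str          # de-identified track ID from the depth camera
    room: str
    entered_at: datetime

@dataclass
class DispenseEvent:
    person_id: str          # track ID closest to the dispenser when it fired
    dispensed_at: datetime

def missed_hygiene_events(entries, dispenses, window=timedelta(seconds=60)):
    """Return room entries with no dispense by the same person in the preceding window."""
    missed = []
    for entry in entries:
        washed_recently = any(
            d.person_id == entry.person_id
            and timedelta(0) <= entry.entered_at - d.dispensed_at <= window
            for d in dispenses
        )
        if not washed_recently:
            missed.append(entry)  # candidate for a real-time hand-hygiene reminder
    return missed
```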

In the next decade, the Stanford PAC hopes to expand the use of this technology by putting computer vision inside the patient room. Direct observation using smart cameras can identify patient behaviors and physiologic changes that predict safety events. For instance, in the intensive care unit (ICU), computer vision has already shown potential in automating patient monitoring tasks currently performed by expensive and highly trained nurses and doctors. The Stanford team hopes to use 3D and infrared vision computing to monitor patients’ mobility level, bed-turn frequency, and urinary incontinence to predict and prevent emergencies such as patient falls and devastating pressure ulcers.2
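To make the idea concrete, here is a minimal, hypothetical sketch of how vision-derived signals (hours since the last bed turn, unassisted bed-exit attempts, a mobility score) might be turned into bedside alerts. The thresholds and field names are invented for illustration; they are not clinical guidance or Stanford’s method:

```python
# Illustrative sketch: map hypothetical vision-derived signals to simple alerts.
from dataclasses import dataclass

@dataclass
class PatientObservation:
    patient_id: str
    hours_since_last_turn: float   # estimated from bed-turn detection
    unassisted_bed_exits: int      # bed-exit attempts with no staff detected in the room
    mobility_score: float          # 0.0 (immobile) to 1.0 (fully mobile)

def monitoring_alerts(obs: PatientObservation) -> list[str]:
    """Thresholds below are placeholders, not clinical recommendations."""
    alerts = []
    if obs.hours_since_last_turn > 2 and obs.mobility_score < 0.3:
        alerts.append("pressure-ulcer risk: prompt a repositioning check")
    if obs.unassisted_bed_exits > 0 and obs.mobility_score < 0.5:
        alerts.append("fall risk: notify the nurse assigned to this room")
    return alerts
```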

Despite the obvious benefits of AI, managers are likely to face significant adoption barriers from stakeholders at every level: patients, clinicians, even board members. Patients are likely to view cameras in the exam room as an unwanted guest – a significant threat to personal privacy. Patient education will be critical to communicating the safety value of vision computing and to assuring patients of privacy safeguards (including de-identification of images). Clinicians may view AI-based automation as a professional threat with the potential to usurp their functions in patient care. Managers should position AI as a tool to enhance clinician effectiveness, enabling them to spend more time treating patients and less time collecting data and documenting. Moreover, clinician buy-in will be critical in providing the feedback needed to refine the accuracy of computer vision algorithms. Finally, hospital board members and investors will likely question the cost-benefit calculus of deploying expensive AI technology across the enterprise. Beyond the safety benefits, AI has real potential to reduce overall costs by automating expensive monitoring tasks and improving operational efficiency. For example, managers could use computer vision to map clinician workflows and enable time-driven activity-based costing (TDABC) to improve human resource utilization.2,7
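As a sketch of the TDABC arithmetic only: the cost of each observed activity is the time a clinician spends on it multiplied by that clinician’s per-minute capacity cost rate. The roles, rates, and durations below are made-up placeholders for what a vision-based workflow log might feed into the calculation:

```python
# Illustrative TDABC arithmetic with made-up numbers:
# activity cost = minutes observed x the clinician's per-minute capacity cost rate.
capacity_cost_per_minute = {"nurse": 1.20, "physician": 4.50}  # $/minute (assumed)

# Hypothetical activity durations that a vision system might log, in minutes.
observed_activities = [
    {"activity": "medication administration", "role": "nurse", "minutes": 12},
    {"activity": "bedside assessment", "role": "physician", "minutes": 8},
    {"activity": "documentation", "role": "nurse", "minutes": 20},
]

for a in observed_activities:
    cost = a["minutes"] * capacity_cost_per_minute[a["role"]]
    print(f'{a["activity"]}: ${cost:.2f}')
# medication administration: $14.40
# bedside assessment: $36.00
# documentation: $24.00
```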

As this technology advances, clinicians and hospitals may become reliant on computer vision AI to detect safety concerns. How do we protect the system against human complacency when “the computer is watching”? What can we learn from the airline industry to keep clinicians engaged when their smart hospital is on autopilot?

(732 words)

1 J. Wilson, A. Alter, and S. Sachdev, “Business processes are learning to hack themselves,” Harvard Business Review Digital Articles, June 27, 2016.

2 Stanford University, “Stanford Partnership in AI Assisted Care,” https://aicare.stanford.edu/index.php, accessed November 2018.

3 Institute of Medicine, To Err Is Human: Building a Safer Health System, November 1999.

4 S. Yeung et al., “Bedside Computer Vision – Moving Artificial Intelligence from Driver Assistance to Patient Safety,” N Engl J Med 2018; 378:1271–1273.

5 E. Zimlichman et al., “Health care-associated infections: a meta-analysis of costs and financial impact on the US health care system,” JAMA Intern Med 2013; 173(22):2039–2046.

6 A. Haque, M. Guo, A. Alahi, et al., “Towards vision-based smart hospitals: a system for tracking and monitoring hand hygiene compliance,” Proc Mach Learn Res 2017; 68:75–87 (https://arxiv.org).

7 R. S. Kaplan, “Improving Value with TDABC,” hfm (Healthcare Financial Management) 68, no. 6 (June 2014): 76–83.


Student comments on Stanford Health’s Vision of a “Smart Hospital”

  1. Thanks for sharing, Alec. This is super interesting, and the 98,000 deaths per year from preventable medical errors is a shocking number. If AI can reduce that number, that would be fantastic. I think, as with most businesses, once positive results begin coming in on a larger scale, momentum will build and the process will accelerate.

    Also, one question I’m curious about: Doctors are notorious for working crazy shifts and hours (at least the younger ones I know). I’ve always wondered if that leads to a higher error rate. Do you know of any hospitals that have experimented with shorter shifts? Were the results different?

    Thanks again for sharing!

  2. Nice piece, Alec – very interesting topic. I think it will be very interesting to see whether this technology gets used to improve patient outcomes or reduce costs, or whether the two are inextricably linked, and what the economics involved look like. In the hand washing example, the technology requires investment, while the incentivized action has no marginal cost and ultimately reducing infections addresses the $10B price tag. But who bears the cost of the infection – the hospital itself or insurers? As that is borne out, it will be interesting to see what technology gets implemented and by whom.

  3. Thanks, Alec. This is really interesting! As a patient, I am encouraged by all the potential safety benefits that computer vision offers. I can see how the privacy issues present a significant short term concern, but have to imagine that in the medium term people will get used to the technology and will positively weigh the safety benefits against the privacy costs. I do wonder though, much as with autonomous vehicles and the app (Natural Cycles) that I wrote about, what the potential reaction to a failure in judgement on the part of the algorithm would be. This raises a number of questions about the future that I’m sure Stanford is thinking about. For example, as it evolves to monitor more than just hand washing, but also more physiological factors, what will be the threshold of error that we require? And, as we discussed in the IBM case, what will be the threshold for decisions made autonomously versus decisions that need to be human aided?

  4. This is a great read, Alec. I imagine that in the future there will be an additional list of metrics, recorded by these AI cameras, that each physician and nurse can evaluate. We already see cameras used in patient care in Neurology when trying to record a patient’s seizures. Just as we look at BMPs or ECGs, so too can I see us analyzing AI camera metrics such as the amount of sleep a patient got, whether a patient rotated enough times to prevent a pressure ulcer, or even a recording of an episode of delirium that a healthcare provider did not witness. These findings could then help influence clinical decision making, and one can even imagine the system making recommendations based upon them.
