The Limits of Autonomy

Don't hold your breath for autonomous vehicles. They're much farther out than we expect

The race for self-driving cars began more than a decade ago in the California desert, when 11 teams competed in the DARPA Urban Challenge [1]. Since then, investors have poured billions into autonomous vehicles: as of September 2018, $3.5B had been invested in autonomous tech startups [2], and between 2009 and 2015 Google/Waymo alone spent $1.1B on autonomous technology [3]. Much of that money has gone toward deep learning, a branch of machine learning within artificial intelligence, applied across the product development lifecycle. Despite this large-scale investment, Waymo and others face real obstacles before deep learning is good enough for cars to drive themselves on any road, in any conditions, without a human present, otherwise known as Level 5 autonomy [4].

At Waymo, it all starts with the data. The car relies on a combination of maps and sensors to gather data, while algorithms run in the background and help the car maneuver (change lanes, speed up, etc.) [5]. Waymo also constantly experiments with different machine learning models to improve performance. In 2012, the team (then still Google’s self-driving car project) relied on neural networks, whereby a computer “learns to perform some task by analyzing training samples” that must be labeled by humans [6]. Surprised by the poor performance, engineers discovered that humans were incorrectly labeling the training data. Instead of trying to solve the fundamental problem that humans make mistakes, they reintroduced traditional machine learning techniques like decision trees to “get the best of both worlds” [7].
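
The “best of both worlds” idea is easier to see in miniature. The sketch below is only an illustration, not Waymo’s pipeline: the synthetic dataset, the model sizes, and the use of a scikit-learn voting ensemble are all assumptions, chosen to show how a small neural network and a decision tree can be combined so that each compensates for the other’s weaknesses.

```python
# Minimal sketch: blend a neural network with a classic decision tree
# via soft voting (averaged class probabilities). Illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for human-labeled perception data (real labels are noisy,
# which is exactly the failure mode described above).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("neural_net", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                     random_state=0)),
        ("decision_tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities from both models
)
ensemble.fit(X_train, y_train)
print(f"Held-out accuracy: {ensemble.score(X_test, y_test):.3f}")
```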

Deep learning models are only as good as the data they are given, and Waymo needs orders of magnitude more data to keep improving. By driving upwards of 8 million virtual miles per day, it can focus on interactions that rarely occur in the real world (like a human dressed as an animal), identify where algorithms need improvement, and iterate accordingly. “The cycle that would take us weeks in the early days of the program is now in the order of minutes,” according to Dmitri Dolgov, a software lead at Waymo [8].
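
To make the simulation logic concrete, here is a toy sketch, not Waymo’s system: the scenario names and frequencies below are invented for illustration. The point is simply that virtual miles can be allocated inversely to real-world frequency, so rare, high-risk events get far more coverage than they would ever accumulate on public roads.

```python
# Toy sketch: oversample rare driving scenarios in simulation.
import random
from collections import Counter

# Hypothetical scenario catalog with made-up real-world frequencies.
real_world_frequency = {
    "routine_highway_cruise": 0.92,
    "unprotected_left_turn": 0.05,
    "cyclist_swerving_into_lane": 0.02,
    "pedestrian_in_animal_costume": 0.01,
}

# Weight each scenario by the inverse of how often it appears in real driving.
weights = {name: 1.0 / freq for name, freq in real_world_frequency.items()}

scenarios = list(weights)
simulated_runs = random.choices(
    scenarios, weights=[weights[s] for s in scenarios], k=10_000
)
print(Counter(simulated_runs))  # rare scenarios now dominate the simulated workload
```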

Despite these advances, Waymo (and others) face a series of challenges over the short, medium, and long term. In the short term, autonomous vehicles need to handle inclement weather, e.g., snow. Snow can block or scatter the car’s sensor returns, confusing the models and bringing the car to a standstill. In response, Waymo opened a test facility in Michigan in 2016 [9] and recently demonstrated that machine learning can “filter out snow” and “see just what’s on the road” [10].
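
“Filtering out snow” can be framed as outlier removal on noisy sensor returns. The snippet below is a hand-rolled illustration of that idea, not Waymo’s perception stack: the synthetic 2D “point cloud,” the neighbor count, and the percentile cutoff are all assumptions. Points with few nearby neighbors are treated as snow-like noise and dropped.

```python
# Minimal sketch: drop sparse, snow-like points from a synthetic point cloud
# using a simple neighbor-density (statistical outlier removal) heuristic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
road = rng.normal(loc=0.0, scale=0.5, size=(500, 2))   # dense "real" returns
snow = rng.uniform(low=-5.0, high=5.0, size=(100, 2))  # scattered noise
points = np.vstack([road, snow])

nbrs = NearestNeighbors(n_neighbors=8).fit(points)
dists, _ = nbrs.kneighbors(points)
mean_dist = dists[:, 1:].mean(axis=1)  # skip column 0 (distance to self)
keep = mean_dist < np.percentile(mean_dist, 80)  # keep densely clustered returns
print(f"Kept {keep.sum()} of {len(points)} points")
```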

Snow is indicative of a medium-term problem that Waymo struggles with: generalization. To reach Level 5 autonomy, cars must be able to identify a near-endless variety of scenes and react accordingly. Recent research indicates that conventional deep learning is notably bad at this: small changes to the same image can lead to vastly different predictions [11]. If some accidents are inherently unpredictable, and autonomous vehicles cannot account for the universe of possible scenarios, then Waymo may not eliminate as many fatalities as it would have the industry believe [12]. Waymo continues to attack this problem with brute force, hoping that tens of thousands of simulations can account for most, if not all, of these scenarios [13].
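
The research cited in [11] traces much of this brittleness to the downsampling (striding) steps inside modern convolutional networks, which are not shift-invariant. The numpy toy below is my own illustration of that core mechanism, not the paper’s experiments: after stride-2 subsampling, an image and its one-pixel-shifted copy yield different features even though their content is identical.

```python
# Minimal sketch: stride-2 downsampling is not shift-invariant, so a
# one-pixel translation of the same image changes the downstream features.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))
shifted = np.roll(image, shift=1, axis=1)  # same content, moved one pixel

def strided_pool(x, stride=2):
    """Crude stand-in for a CNN's downsampling layer: keep every `stride`-th pixel."""
    return x[::stride, ::stride]

diff = np.abs(strided_pool(image) - strided_pool(shifted)).mean()
print(f"Mean feature difference after a 1-pixel shift: {diff:.3f}")
# A nonzero difference: identical content, different features — the kind of
# brittleness that makes generalizing to an open world so hard.
```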

In addition to simulating a wider variety of scenarios, Waymo may need to fundamentally augment its current approach to deep learning. First, it needs to model how human drivers, pedestrians, and cyclists will respond to the presence of self-driving cars. Current simulations assume that humans will behave as they always have, potentially introducing a fatal flaw into the models. If real-world inputs differ from the data Waymo trained on, autonomous vehicles may not be as safe as we expect.

Second, the cars need to be able to interact with others on the road. Drivers often use non-verbal cues to help other drivers and pedestrians stay safe. Waymo needs to increase its focus on “human-in-the-loop” interaction and ensure that its deep learning systems can both understand the needs of another party and signal back accordingly. Others in the industry recognize this difficulty as well. The CEO of Argo.ai, a Waymo competitor, noted that “We must build algorithms that enable our autonomous vehicles to respond to a deeper understanding of the likely behavior of other road users” [14].

While these are solvable problems, a number of open questions remain. How does deep learning account for ethics? If a self-driving car must choose between an accident involving five bystanders and one involving a single passenger, how should it respond [15]? What is the role of government? Should it set and enforce new safety standards? Waymo and others have promised large-scale deployments of autonomous vehicles over the next few years, but they must resolve serious technical, legal, and ethical challenges before they can get there.

(790 words)

 

Bibliography

[1] Voelcker, John. “Autonomous Vehicles Complete DARPA Urban Challenge.” IEEE Spectrum: Technology, Engineering, and Science News. November 01, 2007. Accessed November 12, 2018. https://spectrum.ieee.org/transportation/advanced-cars/autonomous-vehicles-complete-darpa-urban-challenge.

[2] “Taking The Wheel: Autonomous Vehicle Tech Grabs Majority Of Auto Tech Deals, Dollars.” CB Insights Research. September 17, 2018. Accessed November 12, 2018. https://www.cbinsights.com/research/auto-tech-startup-investment-trends/.

[3] Harris, Mark. “Google Has Spent Over $1.1 Billion on Self-Driving Tech.” IEEE Spectrum: Technology, Engineering, and Science News. September 15, 2017. Accessed November 12, 2018. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-has-spent-over-11-billion-on-selfdriving-tech.

[4] Barabás, I., et al. IOP Conference Series: Materials Science and Engineering 252 (2017): 012096. Accessed November 12, 2018. http://iopscience.iop.org/article/10.1088/1757-899X/252/1/012096/pdf.

[5] Harris, Mark. “Google Has Spent Over $1.1 Billion on Self-Driving Tech.” IEEE Spectrum: Technology, Engineering, and Science News. September 15, 2017. Accessed November 12, 2018. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-has-spent-over-11-billion-on-selfdriving-tech.

[6] Hardesty, Larry. “Explained: Neural Networks.” MIT News. April 14, 2017. Accessed November 12, 2018. http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414.

[7] Hawkins, Andrew J. “Inside the Lab Where Waymo Is Building the Brains for Its Driverless Cars.” The Verge. May 09, 2018. Accessed November 12, 2018. https://www.theverge.com/2018/5/9/17307156/google-waymo-driverless-cars-deep-learning-neural-net-interview.

[8] Madrigal, Alexis C. “Inside Waymo’s Secret World for Training Self-Driving Cars.” The Atlantic. August 23, 2017. Accessed November 12, 2018. https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/.

[9] Krafcik, John. “Michigan Is Waymo’s Winter Wonderland.” Medium. October 26, 2017. Accessed November 12, 2018. https://medium.com/waymo/michigan-is-waymos-winter-wonderland-9b3cffbb9bab.

[10] Waymo Team. “Google I/O Recap: Turning Self-driving Cars from Science Fiction into Reality with the Help of AI.” Medium. May 08, 2018. Accessed November 12, 2018. https://medium.com/waymo/google-i-o-recap-turning-self-driving-cars-from-science-fiction-into-reality-with-the-help-of-ai-89dded40c63.

[11] Azulay, Aharon, and Yair Weiss. “Why Do Deep Convolutional Networks Generalize So Poorly to Small Image Transformations?” arXiv, May 2018. Accessed November 12, 2018. https://arxiv.org/pdf/1805.12177.pdf.

[12] Brandom, Russell. “Self-driving Cars Are Headed toward an AI Roadblock.” The Verge. July 03, 2018. Accessed November 12, 2018. https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber.

[13] Madrigal, Alexis C. “Inside Waymo’s Secret World for Training Self-Driving Cars.” The Atlantic. August 23, 2017. Accessed November 12, 2018. https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/.

[14] Argo. “A Decade after DARPA: Our View on the State of the Art in Self-Driving Cars.” Medium. October 16, 2017. Accessed November 12, 2018. https://medium.com/self-driven/a-decade-after-darpa-our-view-on-the-state-of-the-art-in-self-driving-cars-3e8698e6afe8.

[15] Markoff, John. “Should Your Driverless Car Hit a Pedestrian to Save Your Life?” The New York Times. December 21, 2017. Accessed November 12, 2018.


Student comments on The Limits of Autonomy

  1. Well written. I had not previously thought about or heard of the challenge that the self-driving cars currently on the road are not “practicing how they will play,” i.e., in a future with more self-driving cars on the road. Hopefully we will see more advanced communication systems between driverless vehicles to improve coordination.

    Regarding the 5 bystanders vs. 1 passenger question, I hope we train cars to make utilitarian decisions – always minimizing total harm. Unfortunately, this may lose out to emotional arguments. If it were my child in the car, I would want the car to save the child and, given the choice, would probably program the car to do so. Do we give owners this option?

    Given the injuries and deaths that happen on motorways, I certainly hope government will act in the best interest of society and not erect barriers to this innovation. The fragmented, state-by-state system of regulations for autonomous cars is a bit of a concern, and there is room for the federal government to step in and provide nationwide guidelines. Otherwise there could be a scenario where a human driver has to take over when crossing state lines because a system hasn’t passed a certain state’s regulations. I would also like to see a cap on liability claims for injuries or deaths while this tech is being developed. It won’t take too many major lawsuits for some companies to give up, which could delay or prevent this promising technology from becoming a reality.

  2. I am very excited to see how Waymo dominates the future (or not?) after reading this essay. With so many autonomous-car startups emerging, I have always been interested in how Waymo differentiates itself from its competitors, and this essay sheds some light on how it has been developing its models to achieve Level 5 autonomy. With improved sensor systems and enhanced V2X (vehicle-to-everything communication, which Team Mongeese above me alluded to) picking up speed, Waymo is likely to get closer to its vision, particularly given the large amount of data it has been collecting.

    The ethics question you raise (the “Trolley Problem”) has invited opinions from experts in all fields and is a real problem we struggle with even today. This paper has an interesting take on the problem, which has been discussed in academia for a few decades now – Thomson, Judith Jarvis. “The Trolley Problem.” Yale Law Journal 94 (1984): 1395. I believe such ethical conundrums should be solved by Waymo in conjunction with governments and people across the world, once the technology is mature and ready to be accepted by the masses.

    The role of government is very interesting and widely discussed as well. Most autonomous driving companies believe that they will have to liaise with each other and the government to establish a set of protocols and regulations soon, and that the onus is on them to drive regulation, as governments lack the mechanisms to test and regulate this technology today. The process has already begun, with lobbyists from Waymo, Uber and other companies coming together to persuade local governments to partner with them in on-road trials.
