Artificial intelligence is the driving force behind the next wave of computing innovation. It’s powering the Big Tech race to build the best smart assistant: Apple’s Siri, Amazon’s Alexa, IBM’s Watson, Google’s Assistant, Microsoft’s Cortana, and Facebook’s M, to name a few. It’s pivotal to the United States’ defense strategy; the Pentagon has pledged $18 billion over the next three years to fund the development of autonomous weapons. And it’s spurring competition in the automobile industry, as AI will (literally) drive autonomous vehicles.
AI has huge potential benefits for society. But AI must be trained by humans, and that training carries immense risk. Take Microsoft’s experiment with Tay, a chatbot that spoke like a millennial and learned from its interactions with humans on Twitter (“The more you talk the smarter Tay gets”). Within 24 hours, humans had manipulated Tay into becoming a racist, homophobic, offensive chatbot.
Exhibit 1: How Tay’s Tweets evolved through interaction with humans
More urgently, the challenge of teaching AI sound judgment extends to autonomous vehicles, specifically to the ethical decisions we’ll have to program into their algorithms. Given its leadership in the self-driving car space, Google will play a major role in shaping how AIs drive. Google’s business model is to design a car that can transport people safely at the push of a button. The value creation is undeniable: over 1.2 million people die worldwide each year in vehicular accidents, and in the US 94% of crashes are caused by human error. However, the operating model through which Google executes on this presents difficult moral dilemmas.
Google will have to take a stance on how the car should make decisions involving the loss of human life. The big question is: how should we program the cars to behave when faced with an unavoidable accident? If someone has to die, how do we train the autonomous vehicle to decide whom to kill? As the MIT Technology Review puts it: “Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? … Who would buy a car programmed to sacrifice the owner?”
Consider the “Trolley Problem,” a thought exercise in ethics, in Exhibit 2. Do you favor intervention or nonintervention? Now consider MIT’s Moral Machine project, a website that presents moral dilemmas involving driverless cars (example in Exhibit 3) and forces you to pick your perceived lesser of two evils. Browsing the scenarios on http://moralmachine.mit.edu/, I personally find that there’s no clear answer. It’s hugely uncomfortable to decide which parties should be killed.
Exhibit 2: The Trolley Problem: A runaway trolley is speeding down a track toward five people. If you pull a lever, you can divert the trolley to another track, where it will kill only one person. Do you pull the lever?
Exhibit 3: MIT Moral Machine: Do nothing and kill the pedestrians who are crossing against the signal (one grandmother and four children)? Or swerve and kill the car’s passengers (four adult kidnappers and the child they’re holding hostage)?
In a study of how people view these dilemmas, a group of computer science and psychology researchers uncovered a consistent paradox: most people want to live in a world of utilitarian autonomous vehicles, vehicles that minimize casualties even if that means sacrificing their passengers for the greater good. Yet the same respondents would prefer to ride in an autonomous vehicle that protects their own lives as passengers at all costs. There is clearly no objective algorithm for deciding who should die. So it’s on us humans, the programmers at Google and other automakers, to design the system ourselves.
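To make the paradox concrete, here is a minimal toy sketch (entirely hypothetical; the function names, data structure, and scenario are my own illustration, not any automaker’s actual logic) showing that the same unavoidable dilemma produces opposite actions depending purely on which policy the programmers choose:

```python
# Hypothetical illustration: two candidate crash policies for one dilemma.
# Nothing here reflects a real vehicle's decision system.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str            # e.g. "stay_course" or "swerve"
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths

def utilitarian(outcomes):
    """Minimize total loss of life, even at the passengers' expense."""
    return min(outcomes, key=lambda o: o.total_deaths)

def self_protective(outcomes):
    """Protect the vehicle's own passengers first, then minimize deaths."""
    return min(outcomes, key=lambda o: (o.passenger_deaths, o.total_deaths))

# One passenger aboard; five pedestrians ahead; swerving kills the passenger.
dilemma = [
    Outcome("stay_course", passenger_deaths=0, pedestrian_deaths=5),
    Outcome("swerve", passenger_deaths=1, pedestrian_deaths=0),
]

print(utilitarian(dilemma).action)      # swerve: 1 death instead of 5
print(self_protective(dilemma).action)  # stay_course: passenger survives
```

The arithmetic is trivial; the hard part is that a human had to rank the keys in those `min` calls, which is exactly the value judgment the study’s respondents could not agree on.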
In addition to algorithmic design, we’ll have to redefine vehicular law. Today, adjudication relies on the “reasonable person” standard of driver negligence. But when an AI sits behind the steering wheel, does the “reasonable person” standard still apply? Are the programmers who designed the accident algorithm now liable?
There’s no question about the value of Google’s business model. The real question is how Google will operationalize it. Last month, Microsoft, Google, Amazon, IBM, and Facebook announced the Partnership on Artificial Intelligence to Benefit People and Society to support research and standard-setting. But these companies are all competing in the same race to lead in AI. Can we trust them to carefully think through the ethical dilemmas rather than accelerate to win the race? For autonomous vehicles to save lives, Silicon Valley will have to wrangle with the ethics of how cars take lives.
(Word Count: 792)
- http://www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/
- http://science.sciencemag.org/content/352/6293/1573