Brett M. Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University, and Evan Selinger is a professor of philosophy at Rochester Institute of Technology. They are co-authors of Re-Engineering Humanity (Cambridge University Press: forthcoming in April 2018).

Self-driving cars are here. More are on their way. Major automakers and Silicon Valley giants are clamoring to develop and release fully autonomous cars to chauffeur us safely and efficiently. Some models won’t even include a steering wheel. Along with many challenges, technical and otherwise, there is one fundamental political question that is too easily brushed aside: Who decides how transportation algorithms will make decisions about life, death and everything in between?
The recent fatality involving a self-driving Uber vehicle won’t be the last incident in which a human life is lost. Indeed, no matter how many lives self-driving cars save, accidents still will happen.

Imagine you’re in a self-driving car going down a road when, suddenly, the large propane tanks hauled by the truck in front of you fall out and fly in your direction. A split-second decision needs to be made, and you can't think through the outcomes and tradeoffs of every possible response. Fortunately, the smart system driving your car can run through countless scenarios at lightning-fast speed. How, then, should it determine moral priority?

Consider the following possibilities:
1. Your car should stay in its lane and absorb the damage, thereby making it likely that you’ll die.
2. Your car should save your life by swerving into the left lane and hitting the car there, sending its passengers to their deaths—passengers known, according to their big data profiles, to have several small children.
3. Your car should save your life by swerving into the right lane and hitting the car there, sending the lone passenger to her death—a passenger known, according to her big data profile, to be a scientist coming close to finding a cure for cancer.
4. Your car should save the lives worth the most, measured by the amount of money paid into a new form of life insurance. Assume that each person in a vehicle could purchase insurance against these rare but inevitable accidents, and that smart cars would then prioritize based on ability and willingness to pay.
5. Your car should save your life and embrace a neutrality principle in deciding among the means for doing so, perhaps by flipping a simulated coin and swerving to the right if it comes up heads and to the left if it comes up tails.
6. Your car shouldn’t prioritize your life and should embrace a neutrality principle by randomly choosing among the three options.
7. Your car should execute whatever option most closely matches your personal value system and the moral choices you would have made if you were capable of doing so. Assume that when you first purchased your car, you took a self-driving-car morality test consisting of a battery of scenarios like this one and that the results “programmed” your vehicle.
There’s no value-free way to determine what the autonomous car should do. The choice presented by options 1–7 shouldn’t be seen as a computational problem that can be “solved” by big data, sophisticated algorithms, machine learning, or any form of artificial intelligence. These tools can help evaluate and execute options, but ultimately, someone—some human beings—must choose and have their values baked into the software.

Who should get decision-making power? Should it be politicians? The market? Insurance companies? Automotive executives? Technologists? Should consumers be allowed to customize the moral dashboard of their cars so that their vehicles execute moral decisions in line with their own preferences?

Don’t be fooled when people talk about AI as if it alleviates the need for human beings to make these moral decisions, as if AI necessarily will take care of everything for us. Sure, AI can be designed to make emergent, non-transparent, and even inexplicable decisions. But because the shift from human drivers to passive passengers in self-driving cars moves decision-making from drivers to designers and programmers, governance remains essential. It’s only a question of which form of governance gets adopted.
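To make concrete how human values end up baked into software, consider a minimal, purely hypothetical sketch. The `Policy` names, the `Maneuver` structure, and the selection logic below are our own illustrative inventions, not any manufacturer's actual system; the point is that every branch simply executes a value judgment some human made in advance.

```python
import random
from dataclasses import dataclass
from enum import Enum


class Policy(Enum):
    """Each policy encodes a prior human value judgment."""
    PROTECT_OCCUPANT = "protect_occupant"  # always save the car's own passenger
    RANDOM_NEUTRAL = "random_neutral"      # coin-flip among swerve options
    FULLY_RANDOM = "fully_random"          # treat all outcomes as equal


@dataclass
class Maneuver:
    name: str
    occupant_survives: bool


def choose_maneuver(policy: Policy, maneuvers: list, rng=random) -> Maneuver:
    """Pick a maneuver according to the configured moral policy.

    Whatever branch runs here, no machine 'decided' anything:
    a person chose the policy, and the code carries it out.
    """
    if policy is Policy.PROTECT_OCCUPANT:
        survivable = [m for m in maneuvers if m.occupant_survives]
        return rng.choice(survivable or maneuvers)
    if policy is Policy.RANDOM_NEUTRAL:
        swerves = [m for m in maneuvers if m.name != "stay_in_lane"]
        return rng.choice(swerves or maneuvers)
    return rng.choice(maneuvers)  # FULLY_RANDOM


# The three options from the scenario above.
maneuvers = [
    Maneuver("stay_in_lane", occupant_survives=False),
    Maneuver("swerve_left", occupant_survives=True),
    Maneuver("swerve_right", occupant_survives=True),
]
```

Even a "customizable moral dashboard" is just a user-facing way of selecting which of these hard-coded value judgments the vehicle will execute.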
The scenario we’ve described is based on an old philosophical thought experiment called the trolley problem. In the original experiment, a person faces a decision about pulling a lever to divert a trolley from one track to another, saving five lives but taking one. MIT developed a modern interactive version called the Moral Machine.

It’s not surprising that the trolley problem comes up in virtually every discussion of autonomous vehicles. To date, the debate has focused primarily on death-dealing accidents and raised important questions about who gets to decide who lives and dies. Some insist that the question of who decides must be resolved before autonomous cars are given free rein on the roads. Others argue that such decisions concern edge cases and should be deferred to the future so that innovation won't be stalled. And some deny that trolley-problem scenarios are even relevant once super-smart braking systems are built into each car.

The critical social policy questions need to be addressed proactively while systems are being designed, built, and tested. Otherwise, values become entrenched as they’re embedded in the technology. That may be the aim of denialists pining for perfectly safe systems (unless they’re truly deluded by techno-utopian dreams). The edge-case argument is more reasonable if you focus exclusively on the trolley-problem dilemma. But the trolley problem captures one small albeit important piece of the puzzle. To see why, we need to consider scenarios that don’t involve life-or-death decisions.
Let’s focus on accidents. Self-driving cars will reduce the number of accidents, but again, do not be fooled by the siren call of perfection. There still will be accidents that cause:
- considerable bodily loss, such as the loss of limbs, but not death;
- considerable bodily damage that disables the injured person for 24 months;
- considerable mental damage that limits the injured person's ability to ride in an automobile and forces the person to use less efficient modes of transportation;
- considerable damage to the person's vehicle; or
- damage and delays.