Self-Driving Cars Will Get Ethics Lessons in Virtual Reality

Immersive digital environments can test our response to life-or-death scenarios.

As we enter the era of self-driving vehicles, one vital but slightly morbid area of research concerns the ethical codes these vehicles should follow when making the kind of life-or-death choices every driver dreads: whether to steer off the road at speed rather than hit a child, for example, or to deliberately crash into one vehicle to avoid causing a pile-up at a junction.

Instinctively, most of us will have strong and complex feelings about this. One of the best-known studies, conducted at MIT in 2015, found that people wanted self-driving cars to make utilitarian decisions that would minimize the death toll even if that meant sacrificing the car's occupants, so long as they weren't personally occupying the car. The difficulty is incorporating these preferences into algorithms that will be both acceptable to vehicle consumers and compatible with the highest levels of public safety.

Now, a team of German researchers have put test subjects into virtual reality and had them make split-second decisions between crashing into adults, children, animals, and inanimate objects to see if understanding the way we make tough decisions in a simulated environment will help build a model of human behavior in the real world.

In the study, published last week in Frontiers in Behavioral Neuroscience, volunteers put on an Oculus Rift headset that simulated driving a vehicle down a suburban street. In front of the car, a pair of obstacles—sometimes people or animals, sometimes inanimate objects—would appear in two lanes, and the subject had to steer the vehicle into one of them, sacrificing it to preserve the other. (Human characters were mostly static to avoid suggesting they would move out of the way.) After multiple test runs with more than a hundred subjects, the researchers put together a hierarchy of how things were valued in relation to one another. The results were mostly unsurprising: Drivers would save a human rather than an animal, and run over an adult rather than a child.
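
To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of how such a hierarchy could be read off pairwise choices: count how often each obstacle class was spared across the trials in which it appeared, and rank classes by that rate. The class names and trial data below are invented for illustration.

```python
from collections import defaultdict

# Each tuple records one simulated trial: (spared_class, sacrificed_class).
# These values are placeholders, not data from the study.
trials = [
    ("child", "adult"),
    ("adult", "dog"),
    ("child", "dog"),
    ("dog", "trash_can"),
    ("adult", "trash_can"),
    ("child", "trash_can"),
]

wins = defaultdict(int)          # times a class was spared
appearances = defaultdict(int)   # times a class appeared in a trial

for spared, sacrificed in trials:
    wins[spared] += 1
    appearances[spared] += 1
    appearances[sacrificed] += 1

# Rank classes by the fraction of encounters in which they were spared.
hierarchy = sorted(
    appearances,
    key=lambda c: wins[c] / appearances[c],
    reverse=True,
)
print(hierarchy)  # e.g. ['child', 'adult', 'dog', 'trash_can']
```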

An illustration of the trolley problem, in which the subject must decide whether to pull a lever, causing fewer deaths but taking responsibility for one. Image: McGeddon/Wikimedia Commons

But according to the study's authors, the specific results are less important than assessing the general suitability of VR for testing ethical scenarios. Classical studies of ethics usually rely on descriptions of abstract situations—the "trolley problem" being one of the most iconic (and now memeworthy)—but the authors' claim is that immersive digital environments can give more realistic results by presenting situations in a more visceral way.

"I think virtual reality is a breakthrough for empirical ethics, because without this there really is no way to reproduce in a controlled setting an experiment which really touches upon matters of life and death," said Leon Sütfeld, PhD candidate in cognitive science at Osnabrück University and lead author of the study, in a call with Motherboard. "Studies show that there are vast differences between abstract situations and behavior in more realistic scenarios, so I think that VR will be a very useful, broadly used tool in the future."

Sütfeld says the findings suggest that a simple "value-of-life" model, in which different classes of people or objects are ranked higher or lower in a potential crash situation, is a good approximation of the way humans make decisions, while also being easy to convey to the public.
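
As a rough illustration of what such a model could look like in code, here is a hedged sketch: each obstacle class gets a single scalar value, and in a forced two-lane choice the vehicle steers into the lane containing the lower-valued obstacle. The numbers and class names are placeholders, not figures from the study.

```python
# Illustrative scalar values per obstacle class (invented for this sketch).
VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.4,
    "trash_can": 0.0,
}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    """Return the lane to crash into: the one with the lower-valued obstacle."""
    if VALUE_OF_LIFE[left_obstacle] <= VALUE_OF_LIFE[right_obstacle]:
        return "left"
    return "right"

print(choose_lane("adult", "child"))  # -> 'left' (spare the child)
```

Part of the appeal of such a model, as the article notes, is precisely this transparency: the ranking can be stated in a sentence, even if a single scalar per class glosses over much of the nuance in real human judgments.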

The question of whether we should choose easily explainable solutions over a minimization-of-death solution when designing automation is open to debate, as is the overall validity of using virtual reality to approximate real life, especially when other factors like the human instinct for self-preservation have been left out.

But when algorithms do start to make decisions in life-or-death situations, designing for transparency and accountability is an important value in itself.
