In the event that a machine screws up somebody's life, experts believe the new law also opens up the possibility of demanding answers. Although a "right to explanation" for algorithmic decisions is not explicit in the law, some academics believe it would still create one for people who suffer because of something a computer did.

This proposed "right," although noble, would be impossible to enforce. It illustrates a paradox about where we're at with the most powerful form of AI around: deep learning.

We'll get into this in more detail later, but in broad strokes, deep learning systems are built from "layers" of digital neurons that each run their own computations on input data and adjust their connections as they learn. Basically, they "teach" themselves what to pay attention to in a massive stream of information.
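To make that picture a little less abstract, here is a toy sketch of the idea in Python. Everything in it is made up for illustration (the two tiny layers, the tanh neurons, the random data, the learning rate); real deep learning systems have millions of neurons and train on far more than one example, but the loop is the same in spirit: pass data through the layers, measure the error, and nudge the weights.

```python
# A minimal sketch of a layered network "teaching" itself (all values invented).
import numpy as np

rng = np.random.default_rng(0)

# Each "layer" is a weight matrix; each "neuron" is one row of it.
W1 = rng.normal(size=(4, 3))   # layer 1: 4 neurons, each reading 3 inputs
W2 = rng.normal(size=(1, 4))   # layer 2: 1 neuron reading layer 1's outputs

def forward(x):
    h = np.tanh(W1 @ x)        # every neuron runs its own computation...
    return W2 @ h, h           # ...and hands the result to the next layer

x, target = rng.normal(size=3), 1.0
for _ in range(100):           # learning: nudge the weights to shrink the error
    y, h = forward(x)
    err = y - target
    W2 -= 0.1 * err * h                                  # adjust layer 2
    W1 -= 0.1 * (W2.T * err) * (1 - h**2)[:, None] * x   # backpropagate to layer 1

print(forward(x)[0].item())    # after training, this lands near the target of 1.0
```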
"As soon as you have a complicated enough machine, it becomes almost impossible to completely explain what it does"
Consider the Turing machine, a concept of an ideal computer introduced by Alan Turing in the 1930s that is still used today to think through some of the thornier issues surrounding the ethics of intelligent machines. As Turing himself put it, "a man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine."

In essence, then, extremely powerful deep learning computers are still only computers, but they're also savants of a sort: data goes in, and outputs come out. We understand the math and the high-level concepts of what makes them tick, but they have an internal logic that has outstripped our ability to comprehend it.
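Turing's "paper, pencil, and rubber" image translates almost directly into code. The sketch below is a hypothetical toy machine, with states, symbols, and a rule table invented here purely for illustration: it walks along a tape flipping bits, obeying its rules with the "strict discipline" Turing described.

```python
# A toy Turing machine (states, symbols, and rules are made up for illustration).
def run(tape, rules, state="scan", halt="done"):
    tape, head = list(tape), 0
    while state != halt:
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = rules[(state, symbol)]  # look up the rule, obey it
        if head == len(tape):
            tape.append("_")                         # extend the "paper" as needed
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# Rules: walk right, inverting each bit; halt at the first blank ("_") cell.
rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "done"),
}
print(run("10110", rules))  # -> 01001
```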
"You don't understand, in fine detail, the person in front of you, but you trust them"
"I can look at the code of the individual neuron, but I don't know what that symphony is going to sound like—what the music will sound like," Clune continued. "I think our future will involve trusting machine learning systems that work very well, but for reasons that we don't fully understand, or even partially understand.None of this is to suggest that researchers aren't trying to understand neural networks. Clune, for his part, has developed visualization tools that show what each neuron in every layer of the network "sees" when given an input. He and his colleagues have also written an algorithm that generates images specifically designed to maximally activate individual neurons in an effort to determine what they are "looking" for in a stream of input data.Google engineers took exactly this kind of backwards approach to understanding what neural networks are actually doing when they built Deep Dream, a program that generated trippy images purely from a neural network's learned assumptions about the world. "One of the challenges of neural networks is understanding what exactly goes on at each layer," a Google blog explaining the approach stated.The end goal isn't so much to understand the mysterious brain of a super-intelligent being, as it is about making these programs work better at a very base level. Deep Dream itself revealed that computers can have some pretty messed up (not to mention incorrect) ideas about what everyday objects look like.
"I don't know what that symphony is going to sound like—what the music will sound like"