'A Robot Killed a Man': A New Doc Looks at the Terrifying Future of Automation

HBO's 'The Truth About Killer Robots' examines the legal, economic, psychological, and moral implications of our inevitable, AI-run dystopia.
A robot face is taken off a shelf. Photo courtesy of HBO

In 2015, a worker at a Volkswagen factory in Germany was grabbed and crushed to death by a stationary robot. In 2016, an Ohio man died when the self-driving Tesla he was in crashed into a tractor-trailer while he watched Harry Potter. That same year, the Dallas Police Department used a robot to ambush an active shooter and blow him up. In 2018, a pedestrian in Arizona was killed by an automated Uber. As artificial intelligence and robots become more commonplace in our rapidly progressing technological world, the question of who's to blame when a robot kills a human has entered the national dialogue.

With legal, economic, psychological, and moral implications being raised, filmmaker Maxim Pozdorovkin (Pussy Riot: A Punk Prayer) explores these issues in his new documentary, The Truth About Killer Robots, out this week on HBO. VICE talked with Pozdorovkin to find out why he made a film about killer robots, and how close we are to a Westworld- or Matrix-like existence. Here's what he had to say.

VICE: Why did you make a film about killer robots, how long did it take, and when did you first get the idea?
Maxim Pozdorovkin: I had wanted to make a film about automation, and the sort of transformative effect of the robot economy, for a long time. I was interested in kind of using science fiction tropes. When there was this unfortunate accident at a Volkswagen plant, where a worker was killed by a manipulator arm, I went there, and most of the workers were forbidden from talking about the accident. They were all very glad to talk about the way that the robots have transformed their work environment and their lives. I had this idea of looking at several cases where automation was quite literally the cause of death, as a way of considering automation as a certain kind of metaphorical death, a kind of dehumanizing mechanization that our society has been subject to for a long time. To think about AI not as something that's distant and in the future, but as something that's part of a historical trajectory that starts way back in the 1920s [and continues with the] automation of car manufacturing in the '70s. Those were the main ideas from the outset. The film took about three and a half years to make.

How important was Isaac Asimov’s First Law of Robotics to the central theme of the film?
I thought about the way science fiction writers work. They look at the world around them and guess [what] will be ubiquitous and create a world out of these predictions. I didn't want to make a film about science or about technology with talking heads explaining the technology. I wanted to make a film that's not about what robots do for us, but what they do to us. How they transform us. The science fiction approach was to film in the world's four biggest economies and sort of deduce and create this world from what we see all around us. These trends are marginal now, but will clearly be dominant in the future. That was kind of the operating principle of the movie. It was sort of the idea behind how science fiction is created and written. That's why we think of the film as a certain kind of science nonfiction.

How early or late in the process did you come up with the idea for the robot narrator, and when you first came up with that idea, did you have any idea of the impact it would have on the film?
The Kodomoroid android was a robot designed specifically to read the news, as a kind of gambit by Hiroshi Ishiguro, to show that there are all these jobs that will clearly be handed over to robots. I wanted to engage with the technology, engage with the fact that the arts, music, photography, all these industries, are also being hollowed out by automation and so I wanted that to be part of a process. It was very important that the film grapples with the thing that it's talking about, rather than elevating our own kind of uniqueness as humans, as a certain kind of counterbalance to it.

I was inspired by a Peter Watkins film called The War Game, which is narrated in this kind of future conditional tense. [The idea] was there from the very beginning. I wanted to give the technology a voice, because all the news pieces, all the books, all the articles that I read [were] almost all exclusively in the voices of the engineers, the CEOs, the programmers, and the people who are the direct beneficiaries of the technology. The way that automation [and] mechanization is transforming society and threatening society is already kind of self-evident. My idea was to grapple with that more as a way of thinking about AI.

What is the world coming to if we have to consider a robot's guilt, or lack thereof, and are robots even capable of taking responsibility?
I think that question points to something very important that gets mentioned in the film. Technological advancement will always outstrip the pace at which laws change. We're always going to be playing catch-up, and we're always going to be behind. A lot of these incidents where automated systems or robots cause the death of a human are problematic and fascinating precisely because they show us the ways in which our current legal systems, economic systems, moral systems, and imaginative systems fail in trying to understand this.

Using a robot in a military situation diffuses culpability to such an extent that even our idea of blame is very difficult to assign to any one place. That creates a certain kind of tolerance, that no one can really be made responsible, and that's deeply problematic and at odds with the way that accountability works, or the way that the justice system works. Those are the kinds of nuances that we wanted to explore in the film.

A robot hand. Photo courtesy of HBO

How close are we to a world of Westworld or Terminator-like killer robots?
In the film we consider this incident where a bomb robot was used to kill an active shooter in Dallas. That story is told by the sniper who, had the robot not been available, would have probably killed the active shooter. It's an awful incident, but [the shooter] had already killed several cops. As police officers are put in these extreme situations and as the technology gets smarter, they will not hesitate to use it. [But] there's something uncanny about technology having this power. And when we posit the danger in this familiar science fiction trope, we're essentially fixating on this idea that there will be a RoboCop going around killing people. And because that's the only thing we're focusing on, we miss the point, and we miss all these other ways in which automation is having negative effects on us.

Why do you think people put so much trust in the technology?
The great sort of delusion of our time is this technological optimism, where we've lost so much faith in the ability of government to change anything, or to really be instrumental in bringing about change, that the only thing that we can think of is that, of course, technology's going to solve it. There's this kind of deep-seated belief amongst a lot of people, and really smart people, too, that technology will bring an end to global warming and we'll be able to figure out terrorism and structural inequality. That's the dominant narrative, and I think that we're incredibly accommodating to this technology.

We give over our data, [and] do all these things; it's a massive wave that we're embracing, largely as a result of that kind of technological optimism. It's created by the writing and the reporting on the subject, which tends to be framed in terms of what robots can do for you, how they can help you, rather than what the integration of these kinds of entities does to us.

How do you think the death of a pedestrian in Arizona by an automated Uber is affecting the company's goal to do away with drivers?
They'll hold off on testing for some time, and there'll be some negative association, but I think that the economic momentum and incentives are too strong. The rise of ride-share services like Uber and Lyft [has] fundamentally destroyed the traditional taxi industry. In the film we have this moment when testing resumes, and a wave of suicides by taxi drivers follows. Even since we made the film, there have been several suicides by taxi drivers that specifically cited the disruptive effect of Uber and Lyft on their livelihood as the cause.

With robots and computers replacing humans and, in a way, taking over everything, how close do you think we are to a kind of Matrix-like reality?
I think that's definitely in the near future. I think we're approaching a time where large sectors of the population will not be needed to work in producing goods. And most of the production will be done automatically. And that will bring with it all sorts of existential and economic crises. I think to offset a lot of that reality, there will be all sorts of virtual and real distractions and opioids, etc., that will kind of try to fill in the void. What wins, in the end, is hard for me to say.

Follow Seth Ferranti on Twitter.