Through Flaws in the Machine, Robots May Develop "Souls": An Interview with John Gray

We spoke to John Gray a few days before his latest book, <i>The Soul of the Marionette</i>, was set to be released in the United States.

May 16, 2015, 4:44am

Photo via Flickr user Tom Simpson

It wasn't until after I interviewed John Gray, major British philosopher, public intellectual, and the author, most recently, of The Soul of the Marionette, that I realized he was—in the words of a British friend—"a total hero." Gray, who recently retired from a storied professorship at the London School of Economics, was not only blazingly smart, with a cracking wit; he also came across as down-to-earth, considerate, and rather even-keeled. Considering that his book makes a fairly damning case against the techno-utopian logic of Silicon Valley and cuts straight into our "self-flattering" ideas of freedom, Gray's moderate tone was a surprise. Our conversation ranged from ancient Greek warfare to cryogenically frozen tech tycoons, from the state of the humanities to the works of Philip K. Dick, from robotic souls to the UK's astonishing general election results earlier this month. Much like Gray's book, our 75-minute chat flew by and left me electrified.

The Soul of the Marionette offers a mini-education over the span of 20 short chapters, which romp through major and minor works of philosophy, art, history, and science fiction. The book can be disorienting—each of the chapters can be read on its own, Gray notes—but it's never dull. Gray likens the style of this book to Pascal's Pensées. ("Though, of course, I'm no Pascal!" Gray laughed, perhaps underselling himself.)

Gray's ideal reader, in his words, is "a person who is curious, who thinks that there might be something wrong with our modern world, the world in which we expect human progress from science and technology." If that sounds like you, check out The Soul of the Marionette when it's released on May 19th in the US from Farrar, Straus and Giroux.

VICE: The Soul of the Marionette addresses the fundamental question of whether or not human beings have freedom. You seem to say that we don't.
John Gray: I guess a different way of posing the question that the book asks is, "What kind of freedom do we think we want, and do we really want it?" The book is not really addressed to traditional philosophical issues of free will and metaphysics. We all think we want to be free. We all feel frustrated and thwarted and powerless when we think we're not free. But what is it that we want from freedom? Do we really want what we think we want?

Your book also discusses how torture and "hyper-modern techniques of control" are being used today, in the name of human rights and freedom. Do you see this situation improving, or worsening, over the coming decades?
All of these technologies, they're ambiguous. What they humanly mean, their human values, is always ambiguous. I'm old enough to remember when photocopiers and video machines were thought of as bound to bring down tyrannies, back in the 70s and 80s. People said things like, "Well, if massacres can be videoed, no country would dare to commit a massacre!" It happens every day now. It happened with Tiananmen Square. They possibly even use that footage to show other people, in other parts of China, what might happen to them if they rebel.

Something I wrote shortly after 9/11 was Al Qaeda and What It Means to Be Modern. I said that if technology develops the way that even then it seemed it would, over ten years ago, then one of the consequences of these new technologies, which are liberating in many ways—I use them myself all the time, for research, to buy books, to plan holidays—one of the consequences will be the extinction of privacy. What's happening now, it's not only snooping and the NSA. Anything that we do nowadays leaves an electronic trace. Practically anything—any phone call, any purchase—is recorded, and that can be, in one way or another, accessed. So as I say in [The Soul of the Marionette], rather than our situation being that of Andy Warhol's world where "Everyone can now look forward to 15 minutes of fame," the situation that we find ourselves in now is that no one can ever expect 15 minutes of anonymity.

If you were anonymous for 15 minutes, you'd probably become the most hunted person in the world.

Isn't this what so many people are protesting against, when they get angry about the NSA?
Another reason why I'm skeptical that anything will change or improve in this respect is that living in the virtual world created by new technology—living online—is, for many people, now taken for granted. It's part of the way they live, and they don't really want to give it up! There are so many ways in which the virtual world created by the media around us entertains and distracts us that no one really wants to give it up. The question is not how can you get out of the bubble, the question is, Do we all want to get out of the bubble? Do we really want to be out of the bubble, do we really want to give up this kind of world?

The idea that human beings want [freedom] more than anything else is just a self-flattering myth. All human beings want many things more than they want freedom. So when people say, "I don't want to be spied on, I don't want to lose privacy, I don't want to do this or that," I can't take them all that seriously. What might happen is there may be a small elite that can encrypt parts of their lives. Privacy may become a luxury good, just like living a long time.

It's sort of terrifying to think of a world where longevity and privacy are luxuries.
Oh, ultra-luxuries. I read somewhere that the very worst thing that can happen to you now is to be famous and not rich. And the reason is that if you're famous and rich, you can surround yourself with lawyers and bodyguards and cocoons of protection. You can control to some extent the nature of your fame, of your celebrity. You can reconcile your celebrity with privacy, to a degree. But if by some terrible mischance you stumble into being world famous while being poor, then you have no way of defending yourself. Your whole life becomes public with no means of control.

On the other hand, the whole industry of social media depends on people not wanting to have privacy. If they report [on social media], "I am having a cup of coffee. I have just been reading X," they're opting to share their experience. They're choosing that, you could say. The trouble is that that's not the only way these media work. If someone reveals a thought on Twitter, and that thought is either misunderstood, or understood and considered to be offensive, then all hell will fall on them. So the idea that these media can be controlled by the people who use them doesn't seem to be true, does it? And yet countless people do use them! And this has created some of the wealthiest companies in the world.

John Gray. Photo via Flickr user Robert Burdock

I hear your critique of opting into social media, but for my generation the choice to use Twitter and Facebook is less of a choice than an act of necessity—to remain relevant, to remain competitive in the job market.
What you say is absolutely right. Just as people spend enormous amounts of money on cosmetics and surgery to remain in their current professions, they would say, "What choice do I have? If I don't do it, then I'd be out of a job." So there's that. But on the other hand, is it true that full participation is really necessary? How does that carry over into selfies on Instagram and posting things about personal relationships?

Social media has utterly transformed our lives. I don't think these changes will reverse or stop. I think technology is moving really very quickly. Jobs that require a lot of human judgment, like medical diagnoses, a lot of those will be done by machines. In the future, machines that diagnose more efficiently and successfully than human beings will be very common. You've got a kind of step change, then. You say to the majority of people, "You need more job qualifications," but you know from your own generation that going back to school is not always a wise investment! If you look ahead ten, 20 years, there will be whole vocations and careers that will be depleted and reduced by technology.

But isn't the argument that these technologies will create new jobs, as well?
Of course, if you talk to people who believe in technology and science and human progress, they will say that in the past what has always happened is that new jobs and new activities are created by these technologies, and you don't end up with everybody unemployed. They say new things will come into being. They just think that the mechanisms will kick in, as they did in the past.

But is this current revolution the same, or is it different? Is it going to be more profound? There's this contradiction between what these technology prophets say. They say, "Nothing like this has ever happened before!" and they say "Well, if you look at the past, things always work themselves out." If it's really something that has never happened before, then it might leave the majority of the human species behind, which I suggest as a kind of danger in the book. And if, on the other hand, it isn't all that unique, why do they expect it to change everything?

You write that "in the longer run, the only rational course of action will be to reconstruct the humans that remain so that they more closely resemble machines." Do you see evidence around us now, with Apple Watches and other technologies attached to human bodies?
They're trying! Some people want to remodel themselves! When I wrote that sentence, it was partly ironic: If you assume [Silicon Valley's] way of thinking, it'd be a hell of a lot easier if humans were machines. In the book I suggest that we are machines, but fortunately we're flawed machines, faulty machines. People ask the question, "Can the Blade Runner world really come about if we invent it through artificial intelligence or new types of bioengineering? If we invented machines that didn't have conscious awareness, could awareness somehow develop over time, so that they'd become like us?" I don't see why not. If you're religious and you think that God puts souls into people and not in machines, of course not. But if you're not religious—as I quote the Italian poet Giacomo Leopardi in the book—we're beasts and we're machines, and we think!

So if you think of human beings as being brought about by a semi-random process, you can imagine a situation in which we human beings invent robots and artificial intelligence, and in the beginning they may not have conscious awareness. But they might go beyond that. Through flaws in the machine, they might begin to develop what we have, what in traditional parlance we call a "soul." That means the process will never be controlled. That's good, from my point of view. In the future there might well be entirely new species of humanoids who are like us, who may live longer or be stronger or think more quickly. But because they will be flawed—flaws will creep in—they will develop what we have: the sensation of choice and the consciousness of ourselves.

These Silicon Valley types, they're not going to be able to achieve what they want. If new technologies come up, it's not going to be the case that robots will take over and we'll have a world with no freedom or consciousness. These robots will develop flaws, and these flaws will be the saving grace, will bring self-awareness and the imperfections which I value, which I cherish, which other people want to get rid of. They won't be gotten rid of. They'll repeat themselves—either in their own lives, or in the lives of these new creatures that they're trying to create.

Another argument of your book is that science can never explain itself, and we will have to rely on philosophy and religion and humanistic endeavors to actually explain what science discovers. What will happen if the current trend at universities continues, of prioritizing those disciplines with concrete economic gains above those of humanistic inquiry?
People feel, for good reason, that when they're at university, if they spend their time on humanistic disciplines, they'll have no obvious, clear economic payoff. They feel they'll "pay for it" later. And that feeling was well-founded, was it not?

Certainly, in terms of starting salaries.
Certainly. We are moving to a situation in which the pressure to acquire definite skills in numeracy and scientific knowledge will override and exclude any interest in the humanities. But if the position that we were talking about earlier begins to develop, in which scientific and technological change occur [exponentially rapidly], then change in vocations and activities will be very profound and quick. It may very easily be that someone equipped with the humanities will be able to survive, and even to thrive, better over a lifetime than those who have a narrower range of skills.

Because one of the ideas that is redundant now is the traditional idea of a career as being a lifetime form of job commitment that is renewed through various phases. There was once an idea that there were fixed structures and fixed activities. Now the nature of change is so profound that there are huge swathes of vocations and professions that are vanishing and melting away.

But the human world isn't changing. We remain human beings with similar passions and hopes and impulses to those that are in Shakespeare, and the Bible, and ancient Chinese and African poetry. Some of the younger people that I have met, who have also had an education that has included history and ancient languages, they can jump more nimbly from one type of activity to another. They can see when certain types of industry are in decline. It may be that the skills of nimbleness, of judgment and resilience, of being able to see what changes will happen: that could actually help them.

By the time [other students] train up for a position and turn to pursue an activity, it may already be gone. It's a treadmill that many people are on. Of course, it's very difficult for any generation to step off. If you step off, you can step into a situation where you simply can't make a living and you're marginalized.

An education that is subordinated entirely to economic imperatives is very narrowing. It might not even be that economically productive. You might be better off with that knowledge of how the ancient Greeks fought wars. If you read the ancient histories, the way they describe revolutions and tyrannies—you just change a few names and you see it's happening in Britain and America today. That's instructive. That's useful. One of the great benefits of the humanities is that they enable you to be more savvy in not swallowing current fads and trends as being final and categorical. It's happened so many times before. If you base your career on something that is the "only way things can go," you might very well find five years down the line that everything is different.

We had a general election in my country, recently—

Yes, you did!
And that election was categorically different from what had been predicted for weeks or months, down to the very last minute. Every single poll except the one at the very end, the exit poll—they were all wrong. Interesting, fascinating. What was it that was missed? I'm sure it's not a technical flaw: Ten different polling organizations [were wrong], and they didn't share [data]. It could have been something as simple as: The people they talked to lied.

Or it could be something more subtle. It could be that when people are in the privacy of the polling booth, what is revealed to them is not something they concealed from others but something they've concealed from themselves: what they really value. This experience has revealed the enormous limitations of quantitative knowledge. None of the people I admire got it right. They were all predicting that no one would win. They were all wrong. I think that's profoundly interesting. It illustrates that we know much less about society and ourselves than we think we do. The quantitative precision of rigorous scientific techniques, when applied in the human world, can often give results that are misleading. This is a very important event, but think of even bigger events, like a war, being started on the basis of these techniques—it has been done before, with disastrous results.

Don't think that you can become free or the master of your life through knowledge! Knowledge is a slippery thing. Life is a slippery thing! Fortunately. Happily.

Follow Jennifer Schaffer on Twitter.
