


How Far Off Are We from the Digital Clones of 'Black Mirror'?

The Charlie Brooker series has returned to one fear more than any other: that digital clones could be tortured for eternity.
Digital assistant in "White Christmas". Credit: Channel 4

Everyone's favorite dystopian tech-anxiety anthology series Black Mirror has returned with six new episodes. VICE is exploring some of the ideas raised in these episodes with the help of key figures in the show, as well as the wider world of science and technology. Read the first one, about the "USS Callister" episode, here; and the second, about the "Arkangel" episode, here.

It’s a feeling anyone who spends even a moderate amount of time online knows well. You are lost in a sea of tabs, links, posts and photos. You are drifting or even hurtling away from the real world and into the realm of the artificial, and sometimes you might feel as if you can’t get out of it, just click, click, click. When you get too deep, you start to feel like you have become someone else, that there is the you sitting in front of your computer, or with your phone in your hand, and another you, swimming ever deeper into the ocean of the internet.


But what if you really couldn’t get out of it? We all have a different persona online, but what if there was the you who existed in the real world, and a whole other you that existed in some virtual space, a different being that shared your DNA and your memories, but was nevertheless not you?

Over the course of four seasons, Charlie Brooker’s Black Mirror has returned to one fear more than any other: that digital copies of ourselves could be left for eternity in various states of happiness or distress. As a metaphor for the way we are now, or for the future of the world, it is often very powerful. Characters have been trapped in simulations, their digital existence normally a form of torture, but one that brings a "real" person some sense of comfort: a digital prison ("White Christmas"), a boyfriend brought back to life ("Be Right Back"), revenge on a co-worker ("USS Callister"), a dating simulator ("Hang the DJ") or a distraction from illness ("San Junipero").

The protagonist of "USS Callister" – the first episode of the new series – is Robert Daly, the chief technology officer of a tech-entertainment company. He’s the nerd whose brain powers the whole enterprise, but his slick business partner rips him off and the people who work for him think he’s weird.

Isolated and unhappy, he enacts his revenge by using his colleagues' DNA to create digital clones of them. The clones are forced to live out a purgatorial existence trapped in a virtual reality game that he controls. They are conscious and they have all their own memories from their life before, but they are unable to escape the game. They are being tortured.


"He couldn’t do it from your DNA. That was a bit of a cheat," says Robin Hanson, a research associate of the Future of Humanity Institute at the University of Oxford, and the author of The Age of Em. "They show you in the episode that these characters have all their memories up until the time soon before the guy took the DNA. But memories aren’t stored in the DNA. That’s what the brain is for."

For anything comparable to what happens in "USS Callister", you would need a brain scan as well as the DNA of the person you wanted to digitally clone.

"The brain scan would have the state of the brain – your personality, your memory, your skills," Hanson told me. "This is a long way off – I’d say about a century – but it’s foreseeable, clearly. The first brain scan would be a very expensive, difficult thing to pull off, though. It’s not going to be something a rogue employee does on his off hours."

"A lot of things are, in principle, possible in the long run," says Professor Johan Storm, of the University of Oslo. We have already cloned animals. The question of cloning humans is not a technical one, but an ethical one. "It is not regarded at the moment as being ethically sound," Storm says, "but that could change in the future. Heart transplants were regarded as ethically dubious many years ago, when people thought the heart was important to the personality, so all these things may change over time."


Professor Storm also works with the Human Brain Project, a flagship European research project that was founded, he says, "both to understand the human brain but also to use principles from brain research to design better technology".

The question of consciousness is at the heart of what he does just as it is at the heart of much science fiction. From the replicants of Blade Runner to the digital clones of Black Mirror, we watch characters who seem to be able to think and feel and comprehend, even though they are not human. The torture experienced by simulated characters in science fiction troubles us particularly when those characters seem conscious.

"I’ve seen things you people wouldn’t believe," Rutger Hauer’s replicant says in his famous dying speech in Blade Runner, and part of what makes the speech so moving is the mixture of what is artificial and what seems to be human, the consciousness of what a life has consisted of and what it has meant, even to a non-human being.

But our understanding of consciousness remains exploratory, and we still do not understand how it arises, even if we feel as though we might be able to recognise it in someone or something else. We can look at an animal – if not a robot – and feel that, surely, it has some level of conscious thought, but we don’t know it for sure.

Once the domain of philosophers, consciousness is now – like many philosophical topics before it – being seriously investigated by scientists. Neuroscientists like Johan Storm take measurements from brains and brain tissue and condense that knowledge into mathematical models that can be run on computers and which can then simulate brain activity at different levels.
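To make that concrete, here is a minimal sketch in Python of one of the simplest such models: a single leaky integrate-and-fire neuron, whose membrane voltage drifts back toward rest, is pushed up by an input current, and fires a spike whenever it crosses a threshold. The parameter values are illustrative choices for this example, not figures from the Human Brain Project.

```python
# A minimal sketch of the kind of mathematical model described above:
# a single leaky integrate-and-fire neuron. All parameter values here are
# illustrative, not taken from any real brain-simulation project.

dt = 0.1          # simulation time step (milliseconds)
tau = 20.0        # membrane time constant (ms): how quickly the voltage "leaks"
v_rest = -65.0    # resting membrane potential (mV)
v_thresh = -50.0  # firing threshold (mV)
v_reset = -70.0   # potential the neuron is reset to after a spike (mV)
drive = 18.0      # constant input, as mV of depolarisation at steady state

v = v_rest
spike_times = []
for step in range(10_000):                      # 10,000 steps of 0.1 ms = 1 second
    dv = (-(v - v_rest) + drive) / tau          # leak toward rest, pushed up by input
    v += dv * dt
    if v >= v_thresh:                           # threshold crossed: spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in one simulated second")
```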


Professor Storm points to two particularly influential scientific theories of consciousness: the global workspace theory (GWT) and the integrated information theory (IIT). GWT says, in Storm’s words, that consciousness seems to happen "when you involve a large part of the cerebral cortex, so that there is a sort of recognition process that engages a very large part of the brain, that acts in an integrated manner".

Integrated information theory basically says that consciousness occurs whenever there is, as Professor Storm puts it, a "very high level of integrated information in the system. So from that, you could deduce that it could happen in a computer if the conditions are right".

But, he adds, "Most computers are not built according to principles that would give rise to a high level of consciousness… The size of computers is very different to the size of our brain, and they don’t have that degree of feedback and information that the human brain will have. According to that theory, you would have to specially build computers that have the requirements for high levels of consciousness."
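As a very loose illustration of the intuition behind "integrated information", the sketch below compares two independent binary signals with two coupled ones. This is ordinary mutual information between two halves of a toy system, not IIT's actual Φ, which is defined over a system's whole cause-effect structure; it is only meant to show what it means for parts of a system to carry information about each other.

```python
import numpy as np

# A deliberately loose illustration of the idea of "integration": how much do
# two halves of a small system tell you about each other? (This is plain
# mutual information, not IIT's Phi.)

rng = np.random.default_rng(0)

def mutual_information(x, y):
    """Mutual information (in bits) between two binary sequences."""
    joint = np.zeros((2, 2))
    np.add.at(joint, (x, y), 1)        # count co-occurrences of (x, y) states
    joint /= len(x)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log2(joint[a, b] / (px[a] * py[b]))
    return mi

n = 100_000
independent_x = rng.integers(0, 2, n)
independent_y = rng.integers(0, 2, n)                   # no coupling: ~0 bits shared

coupled_x = rng.integers(0, 2, n)
flip = rng.random(n) < 0.1                              # 10% chance of disagreement
coupled_y = np.where(flip, 1 - coupled_x, coupled_x)    # mostly copies x: ~0.5 bits shared

print(f"independent halves: {mutual_information(independent_x, independent_y):.3f} bits")
print(f"coupled halves:     {mutual_information(coupled_x, coupled_y):.3f} bits")
```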

Credit: Netflix

"Maybe consciousness will emerge when our machines get sufficiently complex," Toby Walsh, one of the world’s leading researchers in artificial intelligence and the author of Android Dreams, a compelling history of the subject, told me.

Equally, he added, "We may end up finding that machines never have consciousness, that it’s peculiar to biology and that the intelligence we build is what David Chalmers would call a 'zombie intelligence': it’s highly intelligent, but there’s no one there. It can play perfect games of Go, but it doesn’t have that awareness that we feel we have."


At the end of his book, Walsh makes ten predictions for the future of AI. The final prediction is that we will live on after we die in a digital sense, something that has been explored in more than one episode of Black Mirror. "We will be increasingly handing over responsibility to digital assistants, who will sound like us because they will be trained on our human voice, and they will learn all our preferences and they will speak like us, so they will in some sense become indistinguishable from us," he told me.

The Terasem Movement, a research foundation that aims to "transfer human consciousness to computers and robots", is working toward just such a future.

As its director, Bruce Duncan, explained to me, Terasem has thousands of mind files uploaded by individuals who wanted to log their memories, attitudes, values and so on. "Given enough salient information about a human being," Duncan said, "it might someday be possible to make a good enough copy of them, like making a digital recording of a live music performance. We’d be the first to say that it would probably never be anything like the original, but it might have value."

Martine Rothblatt, the founder of Terasem, has had a replica of her wife made. Called Bina48, it is one of the world’s most advanced robots and has even had a conversation with the New York Times. Rothblatt has spoken about her foundation’s technology as being able to capture "people’s mannerisms, personality, recollections, feelings, beliefs, attitudes and values, and keep them alive forever in a cyber form, looking forward to the future when their minds might be downloaded back into a regenerated human body".


Bruce Duncan talked to me about how the work of organisations like Terasem could lead to a redefining of death: we might come to see death as having many stages, with the final stage being not biological death, but the point at which all digital traces of you were gone.

However well-intentioned this is, it still seems uncomfortable: the people we love lingering in ever more real forms long after they die. "I remember turning off a relative’s answering machine, thinking it was the last thing we had of them," Toby Walsh told me. "It’s going to be much more difficult when we have these digital selves."

Many of the world’s biggest companies are now tech companies, which means that money, power and political influence are likely to shape the future of artificial intelligence. "I think the default outcome is a worse outcome for most people: concentration of power and wealth into the hands of the few," says Toby Walsh, who is active in trying to use AI – something he refers to as a "morally neutral" technology – for good.

Scientists like Johan Storm and Toby Walsh stress the importance of regulation and democratic control over AI. The nightmarish scenarios we see in Black Mirror still come back to something that has been around forever: the misuse and abuse of power. Even by logging into Facebook we are signing over a part of ourselves, creating an online self that belongs not to us but to a multinational conglomerate. Even if we are a century away from fully convincing digital clones, the question of our online selves is a clear and pressing one.

"All this new technology," David Berman writes in his poem "Self Portrait at 28", "will eventually give us new feelings / that will never completely displace the old ones / leaving everyone feeling quite nervous / and split in two."

@OscarRickettNow

UPDATE 15/1/18: An extra quote from Professor Johan Storm was added to clarify the meaning of integrated information theory, as the definition in the original version of this article was misleading.