
What If One Country Achieves the Singularity First?

Zoltan Istvan is a futurist, author of The Transhumanist Wager, and founder of and presidential candidate for the Transhumanist Party. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.

The concept of a technological singularity is tough to wrap your mind around. Even experts have differing definitions. Vernor Vinge, responsible for spreading the idea in the 1990s, believes it’s a moment when growing superintelligence renders our human models of understanding obsolete. Google’s Ray Kurzweil says it’s “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Kevin Kelly, founding editor of Wired, says, “Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes.” Even Christian theologians have chimed in, sometimes referring to it as “the rapture of the nerds.”


My own definition of the singularity is this: the point where a fully functioning human mind radically and exponentially increases its intelligence and possibilities by physically merging with technology.

All these definitions share one basic premise—that technology will accelerate the growth of intelligence to a point where biological human understanding simply isn’t enough to comprehend what’s happening anymore.

That also makes a technological singularity something quasi-spiritual, since anything beyond understanding evokes mystery. It’s worth noting that even most naysayers and luddites who disdain the singularity concept don’t doubt that the human race is heading towards it.


In March 2015, I published a Motherboard article titled A Global Arms Race to Create a Superintelligent AI is Looming. The article argued for a concept I call the AI Imperative, which says that nations should do all they can to develop artificial intelligence, because whichever country produces an AI first will likely end up ruling the world indefinitely, since that AI will be able to control all other technologies and their development on the planet.

The article generated many thoughtful comments on Reddit Futurology, LessWrong, and elsewhere. I tend not to comment on my own articles in an effort to stay out of the way, but I do always carefully read comment sections. One thing the message boards on this story made me think about was the possibility of a “nationalistic” singularity—what might also be called an exclusive, or private, singularity.

If you’re a technophile like me, you probably believe the key to reaching the singularity is two-fold: the creation of a superintelligence, and the ability to merge humans with that intelligence. Without both, it’s probably impossible for people to reach it. With both, it’s probably inevitable.

The technology to merge the human brain with machines is already in development. In fact, hundreds of thousands of people around the world already have brain implants of some sort, and last year a form of telepathy was demonstrated between researchers in different countries: thoughts were passed from one mind to another through a machine interface, without a word being spoken.

Fast-forward 25 years, and some experts, like Kurzweil, believe we might be able to upload our entire consciousness into a machine. I tend to agree with him, and I even think it could occur sooner, perhaps in 15 to 20 years’ time.

Here’s the crux: If an AI exclusively belonged to one nation (which is likely to happen), and the technology of merging human brains and machines grows sufficiently (which is also likely to happen), then you could possibly end up with one nation controlling the pathways into the singularity.

As insane as this sounds, it’s possible that the controlling nation could start offering its citizens the opportunity to be uploaded fully into machines, in preparation to enter the singularity. Whether there would then be two distinct entities—one biological and one uploaded—for every human who chooses to do this is a natural question, and one that could only be decided at the time, probably by governments and law. Furthermore, once uploaded, would your digital self be able to interact with your biological self? Would one self be able to help the other? Or would laws force an either-or situation, where uploaded people’s biological selves must remain cryogenically frozen or even be eliminated altogether?

No matter how you look at this, it’s bizarre futurist stuff. And it presents a broad array of challenging ethical issues, since some technologists see the singularity as something akin to a totally new reality or even a so-called digital heaven. And to have one nation or government controlling it, or even attempting to limit it exclusively to its populace, seems potentially morally dubious.

For example, what if America created the AI first, then used its superintelligence to pursue a singularity exclusively for Americans?

(Historically, this wouldn’t be that far off from what many Abrahamic religions, such as Christianity and Islam, advocate. In both religions, only certain types of people get to go to heaven. Those left behind get tortured for eternity. This concept of exclusivity is the single largest reason I became an atheist at 18.)

Worse, what if a government chose only to allow the super wealthy to pursue its doorway to the singularity—to plug directly into its superintelligent AI? Or what if the government only gave access to high-ranked party officials? For example, how would Russia’s Vladimir Putin deal with this type of power? And it is a tremendous power. After all, you’d be connected to a superintelligence and would likely be able to rewrite all the nuclear arms codes in the world, stop dams and power plants from operating, and create a virus to shut down Wi-Fi worldwide, if you wanted.


Of course, given the option, many people would probably choose not to undergo the singularity at all. I suspect many would choose to remain as they are on Earth. However, some of those people might be keen on acquiring the technology for reaching the singularity. They might want to sell that tech, offering paid one-way trips to people who want to enter the singularity. For that matter, individuals or corporations might try to patent it. What they’d be selling is the path to vast amounts of power and immortality.

The possibility that some person or group could control, patent, or steal the singularity ultimately leads us to another imperative: the Singularity Disparity.

The first person or group to experience the singularity will protect and preserve the power and intelligence they’ve acquired in the process—which ultimately means they will do whatever is necessary to limit how much power and intelligence others can accumulate through their own singularity experiences. That way the original Singularitarians can guarantee their power and existence indefinitely.

In my philosophical novel The Transhumanist Wager, this type of thinking belongs to the Omnipotender: someone who actively seeks and contends for as much power as possible, and bases their actions on that pursuit.

I’m not trying to argue that any of this is good or bad, moral or immoral. I’m just explaining how the phenomenon of the singularity could plausibly unfold. Assuming I’m correct, and technology continues to grow rapidly, the person who will become the leading omnipotender on Earth has already been born.

Of course, religions will appreciate that fact, because such a person will fulfill elements of either the Antichrist or the Second Coming of Jesus, figures important to the apocalyptic beliefs of both Christianity and Islam. At last, faith-touters will be able to say, the “End Times” are really here.

The good news, though, is that maybe a singularity is not an exclusive event. Maybe there can be many singularities.

A singularity is likely to be mostly a consciousness phenomenon. We will be nearly all digital and interconnected with machines, but we will still be able to recognize ourselves, our values, our memories, and our purposes—otherwise I don’t think we’d go through with it. On the cusp of the singularity, our intelligence will begin to grow tremendously. I expect the software of our minds will be rewritable and upgradable almost instantaneously, in real time. I also think the hardware we exist through—whatever form of computing it turns out to be—will be reshaped and remade in real time. We’ll learn how to reassemble processors and their particles in the moment, on demand, probably with the same agility and speed we have when thinking through a problem, such as a math puzzle. We’ll understand the rules and think about what we want, and the best answer, strategy, and path will occur to us. We’ll get exceedingly efficient at such things, too. And at some point, we won’t see a difference between matter, energy, judgment, and ourselves.

What’s important here is that we likely won’t care much about what’s left on Earth. In just days or even hours, the singularity will probably render us into some form of energy that can organize and advance itself superintelligently, perhaps into a trillion minds on a million Earths.

If the singularity occurs like this, then, on the surface, there’s little ethically wrong with a national or private singularity, because other nations or groups could implement their own in time. However, the larger issue is: How would people on Earth protect themselves from someone or some group in the singularity who decides the Earth and its inhabitants aren’t worth keeping around or, worse, wants to enslave everyone on Earth? There’s no easy answer, but the question itself makes me frown upon the singularity idea, in exactly the same way I frown upon an omnipotent God and heaven. I don’t like any single entity or group having that much potential power over others.