Zoltan Istvan is a futurist, author of The Transhumanist Wager, and founder of and presidential candidate for the Transhumanist Party. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.
Some people believe humans with our three-pound brains are the most advanced life form ever to exist; I am not one of them. To insist we are alone in the universe, or that we are the galaxy's crowning civilization, reeks of ego—and reminds me of those who insisted the Earth was flat.
The universe is 13.8 billion years old, according to experts. A lot can happen in that much time, such as the rise (and fall) of superintelligences among the approximately two billion life-friendly planets in our galaxy.
It is likely that these highly advanced intelligences long ago reached what we call the singularity: a moment in time when technological acceleration—most likely through the creation of artificial superintelligences—becomes incredibly rapid.
This presents a thorny issue to humans because of what I call the Singularity Disparity—the idea that whoever reaches the singularity first will make sure no one else can achieve a similar amount of power.
If we are not alone in the universe and also not the most intelligent life forms, then it's unlikely our species can evolve beyond a certain point, since other more advanced life forms won't allow it.
So where does that leave us, a species about 20 to 40 years away from building superintelligences that will help us reach the singularity? The answer is not pretty. In fact, if I had to guess—based on some of the recent discoveries in string theory—we likely already exist in some type of simulation created by an ancient superintelligence, one where we're observed, regulated, and possibly even manipulated at times.
Worse, the superintelligences controlling us were likely structured by still earlier superintelligences, and so on.
I'm not going to argue the merits of whether or not we live in a simulated hologram universe; that ground has been covered by philosophers through the ages, from Aristotle to Oxford's Nick Bostrom to John Searle and his Chinese room. Suffice it to say, there's enough scientific and philosophical evidence for me to tilt slightly in favor of it all. For me, however, the more interesting question is why we would live in a simulation. Given the Singularity Disparity, why would some superintelligence or group of superintelligences do this to us?
There are various explanations. The main ones are:
1) We are experiments and playthings for those superintelligences, who use us to further understand themselves or to support some cause of theirs, including relieving boredom.
2) We are literally already intrinsic parts of those superintelligences and exist simply as their thoughts, energies, or structure (the Gaia people love this idea).
3) We are accidents in the universe and our existence is totally arbitrary.
The deity-averse existentialist in me likes #3 best, but I'm still not satisfied with any of the answers, mainly because none of them address what happened to the very first superintelligence, an entity that may have set up all the universe's rules.
Luckily, there is a fourth, more controversial take that I do think is worth exploring: the foundation of the universe, including all the simulations, probabilities, and possibilities of existence, is the result of the first and most powerful superintelligence killing itself.
In short, an entity literally on the verge of becoming God knowingly and willingly died by suicide.
The problem with being God—a truly omnipotent being—is that of free will. As a recent comedy skit called Future Christ on The Daily Show with Jon Stewart—a skit which partially resulted from my original atheistic story—pointed out: "If God wants to quit smoking, can he hide cigarettes from himself?"
Being all-powerful is a very strange, ironic dead end. The only thing omnipotence can truly equal is a total mechanistic void. Achieving omnipotence is literally the act of suicide, in terms of forever self-eliminating one's consciousness. This is because a conscious intelligence, as reason dictates, is based on the ability to discern values—values that let it decide, for example, whether an all-powerful being can create something so heavy that it cannot lift it. Values require choice. But omnipotence means that all choices have already been made, and nothing can ever change, because all variables are already accounted for and no randomness or anomalies exist.
It's quite possible that, a long time ago, the first superintelligent Singularitarian decided to up its game and attempted to become omnipotent. But if it succeeded—and it may have—then it would have become an entity without a singular intelligent consciousness, because intelligence requires choice. For all practical purposes, it would cease to exist in a personal, interactive way that any intelligence could relate to.
But before this first Singularitarian did that, it would've left us with its rules—physical laws of the universe that constrain our potential power and intelligence. It would've also left us with the code of the Singularity Disparity, whereby the singularity we achieve will never equal other singularities or be the most powerful.
If this is all the case, this leaves the human race in a precarious position. Here we are, in a universe where many singularities have almost certainly taken place, but reaching anything beyond a certain point becomes impossible due to limits of pre-existing natural laws. Adding to the mix are other superintelligences that don't want us to dominate or overpower them, either, just as we don't want any other entities on Earth to dominate or overpower us. Hierarchies and power plays exist everywhere—they are the fabric of the universe.
As an atheist (or even a possible theistcideist—one who believes God or a supreme being once existed but no longer does because it terminated itself), I would commend this leading superintelligence for destroying its conscious self. By doing so, and establishing that nothing else could ever become as powerful as itself, it would've forever sown choice into the universe, since no one can ever reach a perfect position of choice-less omnipotence, and the death of its consciousness would mean it couldn't ever change what it had done. This superintelligence's final acts have assured all other advanced life forms the possibility of free will and the ability to try to become more than we are.
Perfect Worlds is a series on Motherboard about simulations, imitations, and models. Follow along here.