The Looming Extinction of Humankind, Explained
Image: krheesy/Flickr

Fortunately, advanced technologies could give us a way out, if we let them.

For most people, buckling a seat belt before driving is a smart habit. Not only is racing down the highway without one illegal—"click it or ticket," as the slogan goes—but seat belts also "reduce serious crash-related injuries and deaths by about half." Yet as we've previously estimated, your chances of dying in a car crash are at least 9.5 times lower than your chances of dying in a human extinction event.

If this sounds incredible—and admittedly, it does—it's because the human mind is susceptible to cognitive biases that distort our understanding of reality. Consider the fact that you're more likely to be killed by a meteorite than by a lightning bolt, and that your chances of being struck by lightning are about four times greater than your chances of dying in a terrorist attack. In other words, you should be more worried about meteorites than about the Islamic State or al-Qaeda (at least for now).

The calculation above is based on an assumption made by the influential "Stern Review on the Economics of Climate Change," a report prepared for the UK government that describes climate change as "the greatest and widest-ranging market failure ever seen." In making its case that climate change should be a top priority, the Stern Review stipulates a 0.1 percent annual probability of human extinction.

This number might appear minuscule at first glance, but over the course of a century it yields a whopping 9.5 percent probability of our species going extinct. What's more, compared to estimates offered by others, it's actually quite low. For example, a 2008 survey of experts put the probability of human extinction this century at 19 percent. And the co-founder of the Centre for the Study of Existential Risk, Sir Martin Rees, argues that civilization has a 50:50 chance of making it through the current century—a mere coin toss!
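
Where does that 9.5 percent come from? A rough back-of-the-envelope check, assuming the 0.1 percent annual risk stays constant and each year is independent: the chance of surviving any single year is 0.999, so the chance of surviving 100 years in a row is 0.999 multiplied by itself 100 times, or about 90.5 percent. That leaves roughly a 9.5 percent chance of an extinction event somewhere along the way.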

How is this possible? How could the probability of a global disaster be so much greater than that of dying in a car accident? To be sure, these estimates could be wrong. While some existential risks, such as asteroid impacts and super-volcanic eruptions, can be estimated using objective historical data, risks associated with future technologies require a good dose of speculation. Nonetheless, we know enough about certain technological trends and natural phenomena to make at least some reasonable claims about what our existential situation will look like in the future.

There are three broad categories of "existential risks," or scenarios that would either cause our extinction or permanently catapult us back into the Stone Age. The first includes natural risks like asteroid and comet impacts, super-volcanic eruptions, global pandemics, and even supernovae. These form our cosmic risk background and, as just suggested, some of these risks are relatively easy to estimate.

As you may recall from middle school, an assassin from the heavens, possibly a comet, smashed into the Yucatan Peninsula 66 million years ago and killed almost all of the dinosaurs. And about 75,000 years ago, a super-volcano in Indonesia caused the Toba catastrophe, which some scientists believe dramatically reduced the human population, though this claim is controversial. Few people today realize just how close humanity may have come to extinction in the Paleolithic.

Although the "dread factor" of pandemics tends to be lower than that of wars and terrorist attacks, they have resulted in some of the most significant episodes of mass death in human history. For example, the 1918 Spanish flu killed about 3 percent of the human population (though some estimates are double that) and infected roughly a third of all humans between 1918 and 1920. In absolute numbers, it threw roughly 33 million more people into the grave than all the bayonets, bullets, and bombs of World War I, which lasted from 1914 to 1918. And based on CDC estimates, the fourteenth-century Black Death, caused by the bubonic plague, could have taken approximately the same number of lives as World War II, World War I, the Crusades, the Mongol conquests, the Russian Civil War, and the Thirty Years' War combined. (Take note, anti-vaxxers!)

The second category of existential risks concerns advanced technologies, which could cause unprecedented harm through "error or terror." Historically speaking, humanity created the first anthropogenic risk in 1945 when we detonated an atomic bomb in the New Mexico desert. Since this watershed event, humanity has lived in the flickering shadows of a nuclear holocaust, a fact that led a group of physicists to create the Doomsday Clock, which metaphorically represents our collective nearness to disaster.

While nuclear tensions peaked during the Cold War—President Kennedy at one point estimated that the likelihood of nuclear war was "between 1 in 3 and even"—the situation improved significantly after the Iron Curtain fell. Unfortunately, US-Russian relations have recently deteriorated, leading Russian Prime Minister Dmitry Medvedev to suggest that "we have slid back to a new Cold War." As we write this, the Doomsday Clock is set to a mere three minutes before midnight—or doom—which is the second closest it's been to midnight since its creation in 1947.

While nuclear weapons constitute the greatest current risk to human survival, they may be among the least of our concerns by the end of this century. Why? Because of the risks associated with emerging fields like biotechnology, synthetic biology, and nanotechnology. The key point to understand here is that these fields are not only becoming exponentially more powerful, but their products are becoming increasingly accessible to groups and individuals as well.

For example, it's increasingly possible for nonexperts to cobble together a makeshift gene-editing laboratory. The affordability of home-built labs is being driven in part by the biohacking movement, which aims to empower interested hobbyists by making inexpensive, automated equipment readily available. DNA material can also be ordered from commercial providers, as journalists for the Guardian discovered in 2006 when they managed to acquire "part of [the] smallpox genome through mail order." What's more, anyone with an internet connection can access databases that contain the genetic sequences of pathogens like Ebola. We're a long way from programming organisms' DNA the way we program software. But if these trends continue (as they likely will), terrorists and lone wolves of the future will almost certainly have the ability to engineer pandemics of global proportions, perhaps more devastating than anything our species has previously encountered.

As for nanotechnology, the best-known risk stems from what's called the grey goo scenario. This involves tiny self-replicating machines, or nanobots, programmed to disassemble whatever matter they come into contact with and reorganize those atoms into exact replicas of themselves. The resulting nanorobotic clones would then convert all the matter around them into even more copies. Because of the exponential rate of replication, the entire biosphere could be transformed into a wriggling swarm of mindlessly reproducing nanobots in a relatively short period of time.
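
To get a sense of how quickly exponential replication runs away, consider some purely illustrative numbers: if a single nanobot could build one copy of itself per hour, and every copy did the same, the population would double every hour. After about 170 doublings—roughly a week at that pace—there would be more than 10^51 nanobots, exceeding the estimated number of atoms in the entire Earth (on the order of 10^50). In such a scenario, the limiting factor would be raw material, not time.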

Alternatively, a terrorist could design such nanobots to selectively destroy organisms with a specific genetic signature. An ecoterrorist who wants to remove humanity from the planet without damaging the global ecosystem could potentially create self-replicating nanobots that specifically target Homo sapiens, thereby resulting in our extinction.

Perhaps the greatest long-term threat to humanity's future, though, stems from artificial superintelligence. As one of us recently wrote, instilling a superintelligent machine with values that promote human well-being could be surprisingly difficult. For example, a superintelligence whose goal is to eliminate sadness from the world might simply exterminate Homo sapiens, because people who don't exist can't be sad. Or a superintelligence whose purpose is to help humans solve our energy crisis might inadvertently destroy us by covering the entire planet with solar panels. The point is that there's a crucial difference between "do as I tell you" and "do as I intend you to do," and figuring out how to program a machine to follow the latter poses a number of daunting challenges.

This leads to the final category of risks, which includes anthropogenic disasters like climate change and biodiversity loss. While neither of these is likely to result in our extinction, both are potent "conflict multipliers" that will push societies to their limits, and in doing so will increase the probability of advanced technologies being misused and abused.

To put this in stark terms, ask yourself this: is a nuclear war more or less likely in a world of extreme weather, mega-droughts, mass migrations, and social/political instability? Is an eco-terrorist attack involving nanotechnology more or less likely in a world of widespread environmental degradation? Is a terrorist attack involving apocalyptic fanatics more or less likely in a world of wars and natural disasters that appear to be prophesied in ancient texts?

Climate change and biodiversity loss will almost certainly exacerbate current geopolitical tensions and foment entirely new struggles between state and nonstate actors. This is not only worrisome in itself, but with the advent of advanced technologies, it could be existentially disastrous.

It's considerations like these that have led the experts surveyed above, Rees, and other scholars to their less-than-optimistic claims about the future. The fact is that there are far more ways for our species to perish today than ever before, and the best current estimates suggest that dying from an existential catastrophe is more likely than dying in a car accident. What's more, there are multiple reasons for anticipating that the threat of terrorism will nontrivially increase in the coming decades, due to the destabilizing effects of environmental degradation, the democratization of technology, and the growth of religious extremism worldwide.

But this isn't the end of the story. There's also ample reason for optimism. While the existential risks confronting our species this century are formidable, not a single one is insoluble. Humanity has the capacity to overcome every danger that lines the road before us. For example, advanced technologies could also mitigate the risks posed by nature. A kamikaze asteroid barreling towards Earth could be deflected by a spacecraft or (perhaps) blown to smithereens by a nuclear bomb. Developments like space colonization and underground bunkers could enable humanity to survive a catastrophic asteroid impact or super-volcanic eruption. As for pandemics, recent incidents like the Ebola and SARS outbreaks have shown that scientists working with the international community can effectively contain the spread of pathogenic microbes that might otherwise have caused a global disaster.

Other risks like climate change and biodiversity loss could be solved by reducing population growth, switching to sustainable energy sources, and preserving natural habitats.

This leaves technological risks, which society could potentially neutralize by implementing policies and regulations intended to keep dangerous weapons out of the hands of criminals, psychopaths, and terrorists. It's unclear, though, how effective such strategies could be, and this is in part why many experts see the biggest future threats as being associated with advanced technologies. Fortunately, organizations like the X-Risks Institute, the Future of Life Institute, the Future of Humanity Institute, and the Centre for the Study of Existential Risk are working hard to ensure that a worst-case scenario for our species never occurs.

The cosmos is a vast obstacle course of life-threatening dangers. And while our extraordinary success as a species has improved the human condition greatly, it's also introduced a host of novel existential risks our species has never before encountered—and thus has no track record of surviving. Nonetheless, there are clear, concrete actions humanity can take to mitigate the threats before us and lower the probability of an existential catastrophe. As many leading experts have argued, the future is overflowing with hope, but realizing that hope requires us to take a sober look at the very real dangers all around us.

Phil Torres is an author, contributing writer at the Future of Life Institute, and founding Director of the X-Risks Institute. His most recent book is called The End: What Science and Religion Tell Us About the Apocalypse. Follow him on Twitter: @xriskology.

Peter Boghossian is an assistant professor of philosophy at Portland State University. He is the author of A Manual for Creating Atheists and creator of the app Atheos. Follow him on Twitter: @peterboghossian.