

How likely is an existential catastrophe?

Phil Torres
7 September 2016

An existential risk refers to any future event that would either tip our species into the eternal grave of extinction or irreversibly catapult us back into the Stone Age. Oxford philosopher Nick Bostrom formalized this concept in 2002, and it has since become a topic of growing interest among both academics and the public.

In the past few decades, the number of existential risk scenarios has risen, and it will likely rise even more this century. Consider that only 71 years ago, before the first atomic bomb exploded in the New Mexico desert, Homo sapiens faced only a handful of existential risks—all of them natural—including asteroid and comet impacts, supervolcanic eruptions, and global pandemics. Today the situation is quite different: Anthropogenic risks such as climate change, biodiversity loss, and nuclear weapons now haunt our species. In addition, a swarm of emerging risks hovers on the horizon: an engineered pandemic, a war involving nanotech weapons, self-replicating nanobots, geoengineering, and artificial superintelligence. If this trend continues into the future, we should expect even more existential risk scenarios before the 22nd century.

The increasing number of risk scenarios suggests that the overall probability of disaster may have risen as well. The more landmines placed in a field, the more likely one is to step in the wrong place. According to the best estimates available, the probability of a doomsday catastrophe has indeed increased over the same period of time. For example, an informal survey of 19 experts conducted by the Future of Humanity Institute in 2008 yielded a median estimate of a 19 percent chance of human extinction this century. And Sir Martin Rees, co-founder of the Centre for the Study of Existential Risk at Cambridge University, argues that civilization has no better than a 50-50 chance of making it through the 21st century intact. These estimates are far higher than the probability of doom brought about by any natural phenomenon before the Atomic Age.
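To make the landmine analogy concrete, here is a minimal Python sketch of how the chance of at least one catastrophe grows as risk scenarios accumulate. The individual probabilities are hypothetical placeholders, not estimates from this article, and the risks are assumed to be statistically independent:

```python
# Illustrative only: the per-century probabilities below are hypothetical
# placeholders, and the risks are assumed to be statistically independent.
def prob_at_least_one(probabilities):
    """Chance that at least one catastrophe occurs, given independent risks."""
    p_none = 1.0
    for p in probabilities:
        p_none *= 1.0 - p
    return 1.0 - p_none

natural_only = [0.0002, 0.002]                       # e.g., asteroid impact, supervolcano
with_new_risks = natural_only + [0.05, 0.05, 0.05]   # plus hypothetical anthropogenic risks

print(f"Natural risks only:             {prob_at_least_one(natural_only):.2%}")
print(f"With added anthropogenic risks: {prob_at_least_one(with_new_risks):.1%}")
```

Whatever values one plugs in, each additional non-trivial risk pushes the combined probability upward.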

Putting risks in perspective. To underline the urgency of existential risks, let’s compare these estimates to the probability of an average American dying in an “air and space transport accident”—which is 1 in 9,737 over the course of a roughly 80-year lifetime. As mentioned above, the Future of Humanity Institute survey participants assigned a 19 percent chance of human extinction by the year 2100, which would mean that the average American is at least 1,500 times more likely to die in a human extinction catastrophe than in a plane crash.

Rees’ estimate concerns civilizational collapse, which is more probable than human extinction. Spread evenly across the century, his 50 percent chance of collapse implies roughly a 42 percent chance within the average American’s 80-year life span. This means that the average US citizen is more than 4,000 times more likely to encounter the ruination of modern civilization than to perish in an aviation disaster. Even relative to dying in a motor vehicle crash, which has a probability of about 1 in 113, Rees’ figure implies that the average individual is almost 50 times more likely to see civilization collapse than to die in a vehicular accident.
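The arithmetic behind these comparisons is easy to reproduce. The sketch below, illustrative only, scales Rees’ per-century figure to an 80-year life span by assuming a constant hazard rate, then divides by the lifetime odds quoted above:

```python
# Reproduces the rough ratios quoted above.
p_air_crash = 1 / 9_737      # lifetime odds of dying in an air/space transport accident
p_car_crash = 1 / 113        # lifetime odds of dying in a motor vehicle crash
p_extinction = 0.19          # FHI survey: chance of human extinction by 2100

# Scale Rees' 50-percent-per-century figure to an 80-year life span,
# assuming the risk is spread evenly across the century (constant hazard rate).
p_collapse_century = 0.50
p_collapse_lifetime = 1 - (1 - p_collapse_century) ** (80 / 100)

print(f"Collapse risk over a lifetime: {p_collapse_lifetime:.1%}")                   # ~42.6%
print(f"Extinction vs. plane crash:    {p_extinction / p_air_crash:,.0f}x")          # ~1,850x
print(f"Collapse vs. plane crash:      {p_collapse_lifetime / p_air_crash:,.0f}x")   # ~4,100x
print(f"Collapse vs. car crash:        {p_collapse_lifetime / p_car_crash:.0f}x")    # ~48x
```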

Objective and subjective estimates. These probability estimates are clearly quite high—suspiciously high. The obvious question is: How rigorous are they? I would argue that these estimates, while open to further revision, ought to be taken very seriously.

On the one hand, some existential risks can be assessed quite objectively using empirical data. For example, by studying crater impacts on the Earth and moon, researchers have estimated that an asteroid or comet large enough to cause an existential catastrophe strikes Earth on average once every 500,000 years. Similarly, the geological record tells us that a supervolcanic eruption capable of inducing a “volcanic winter” happens about once every 50,000 years. And climatologists have large amounts of data that they can feed into computer models to predict how the climate will likely respond to greenhouse gas emissions. Surveys of the biological world have also led to the conclusion that human activity has initiated the sixth mass extinction in life’s 3.8-billion-year history.
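To see how such recurrence intervals translate into near-term odds, here is a brief sketch that treats these events as a Poisson process, a simplifying assumption, and converts the average intervals quoted above into the chance of at least one event per century:

```python
import math

def per_century_probability(avg_interval_years: float, horizon_years: float = 100) -> float:
    """Chance of at least one event within the horizon, assuming a Poisson process."""
    rate = 1 / avg_interval_years                 # expected events per year
    return 1 - math.exp(-rate * horizon_years)

print(f"Civilization-threatening impact: {per_century_probability(500_000):.3%}")  # ~0.02%
print(f"Supervolcanic eruption:          {per_century_probability(50_000):.2%}")   # ~0.2%
```

On this rough accounting, the natural baseline risk per century is a small fraction of a percent, far below the expert estimates cited earlier.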

On the other hand, some existential risks require a dose of subjective speculation. But these speculations need not be arbitrary or haphazard. On the contrary, they can be based on robust scientific knowledge, technological trends, data from the social sciences, and logical inference. Consider the following:

The biological threat. The journal Nature reported in 2014 that the cost of sequencing an average human genome has dropped at a rate that “does not just outpace Moore’s law—it makes the once-powerful predictor of unbridled progress look downright sedate.” Moore’s law describes the exponential growth of computing power; in other words, the cost of genome sequencing is falling even faster than computing power has been rising.

This trend is indicative of biotechnological development in general: Laboratory equipment is becoming cheaper, processes are increasingly automated, and the Internet contains a growing number of complete genomes—including the genetic sequences of Ebola and smallpox. The result is that the number of people capable of designing, synthesizing, and dispersing a weaponized microbe will almost certainly increase in the coming decades. Thus biotechnology (and its younger sibling, synthetic biology) will likely become a more significant risk later this century.

The nanotechnology threat. Similar considerations apply to nanotechnology. Consider the nanofactory, a hypothetical device that could manufacture products with absolute atomic precision for a fraction of the cost of current manufacturing. “Atomic precision” here means that two objects produced by a nanofactory—for example, two computers of the same design—would be identical with respect to not only their macroscopic properties, but also the precise placement of their constituent atoms. It remains unclear whether nanofactories are physically possible, but if they are—as theorists like Eric Drexler of the Future of Humanity Institute and Ralph Merkle of Singularity University claim—the consequences for humanity would be profound.

Only three resources would be required to operate a nanofactory: power, design instructions (downloaded from the Internet), and a simple feedstock molecule such as acetone or acetylene. With those three ingredients in hand, terrorist groups and lone wolves of the future could potentially manufacture huge arsenals of conventional and novel weaponry, perhaps eluding detection by law enforcement or international regulatory bodies. Nanofactories might even be capable of making nuclear weapons, although at present this possibility is uncertain.

Widespread use of nanofactories to produce ordinary commodities could also result in the dissolution of trade relations between countries. This could be dangerous because, as Harvard University psychologist Steven Pinker wrote in 2011, “countries that depended more on trade in a given year were less likely to have a militarized dispute in the subsequent year.” Finally, advanced nanotechnologies could introduce new nanoparticles to the biosphere, some of which could prove extremely toxic.

An even more speculative threat involves the intentional design of autonomous “nanobots” that would convert all the matter in their vicinity into clones of themselves. The result would be a positive feedback effect that could destroy the entire biosphere in as little as 90 minutes, according to a 2006 calculation by Ray Kurzweil. This particular calculation is contentious, but one doesn’t need to accept Kurzweil’s conjecture to take the possibility of “grey goo” seriously, as many existential risk scholars do.

The superintelligence threat. Superintelligence, an artificial intelligence whose cognitive abilities would surpass those of even the smartest humans, is likewise speculative. But Bostrom’s recent work on the topic suggests that a superintelligent machine with values that are even slightly misaligned with ours—even when the corresponding goals appear benign—could be disastrous.

One of the most discussed examples involves a superintelligence programmed to “maximize” the abundance of some object—say, a paperclip. This could lead the superintelligence to harvest the available atoms in human bodies, thereby destroying humanity (and perhaps the entire biosphere). In addition, there are multiple ways that a superintelligence could become outright malevolent toward humanity, as University of Louisville computer scientist Roman Yampolskiy outlines in a recent paper.

The value alignment problem is made even more dangerous by the possibility that a superintelligence’s thought processes could run millions of times faster than ours, given how much faster electrical signals move through computer hardware than action potentials move through the human brain. A superintelligence could also learn to rewrite its own code, thereby initiating an intelligence explosion that continues until some upper limit—perhaps far above human intelligence—is finally reached.

Note that the existential risk posed by superintelligence does not depend on how soon one is created; it merely concerns what happens once this occurs. Nonetheless, a 2014 survey of 170 artificial intelligence experts by Anatolia College philosopher Vincent C. Müller and Bostrom suggests that superintelligence could be on the horizon. The median date at which respondents gave a 50 percent chance of human-level artificial intelligence was 2040, and the median date at which they gave a 90 percent probability was 2075. If they are correct, some people around today will live to see the first superintelligence—which, as British mathematician I. J. Good observed in 1965, will likely be our last invention.

The human threat. Unprecedentedly powerful technologies are becoming more accessible at the same time as the global population is growing, meaning that the absolute number of malicious agents could increase proportionally. According to the American psychologist Martha Stout, roughly 4 percent of the global population are sociopaths. This translates to about 296 million sociopaths today, and if the population rises to 9.3 billion by 2050, that number will increase to 372 million. Although not all sociopaths are violent, they are disproportionately represented among groups such as prison inmates and dictators. It follows that this demographic could seriously jeopardize our collective future if nuclear weapons, biotechnology, nanotechnology, or some as-yet-unknown technology were to fall into the wrong hands.
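The population arithmetic here is easy to verify. A minimal sketch, using Stout’s 4 percent figure; the present-day population value is an assumption implied by the 296 million number quoted above:

```python
sociopath_rate = 0.04        # Stout's estimate: roughly 4 percent of the population
population_today = 7.4e9     # implied by the 296 million figure quoted above
population_2050 = 9.3e9      # projected global population by 2050

print(f"Sociopaths today:   {sociopath_rate * population_today / 1e6:.0f} million")  # ~296 million
print(f"Sociopaths in 2050: {sociopath_rate * population_2050 / 1e6:.0f} million")   # ~372 million
```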

The menace posed by ideological extremism is also growing. For example, the number of hate groups in the United States rose from 457 to 892 between 1999 and 2015. Outside the United States, the number of Salafi-jihadist organizations rose from 3 in 1988 to 49 in 2013, the year before the Islamic State emerged as arguably the largest terrorist organization in human history.

As I’ve outlined elsewhere, there are strong reasons for expecting the total population of radical extremists of all political and religious persuasions to continue increasing, due in part to the conflict-multiplying effects of global catastrophes like climate change and biodiversity loss. If empowered by advanced technologies, any one of these individuals or groups could wreak unprecedented havoc on society.

Closer to midnight. Never before in its 200,000-year history has our species been as close to disaster as it is this century. It’s unsettling enough that the Doomsday Clock has stood at an ominous 3 minutes to midnight (or doom) since 2015. But the real gravity of our situation only comes into focus once one realizes that before 1945, there was no need for a Doomsday Clock at all, given the low probability of doom.

While our ability to quantify the dangers posed by emerging risks is more subjective than in the case of asteroid impacts and supervolcanic eruptions, there are legitimate reasons for concern about both categories of threat. Insofar as the experts’ estimates are based on the considerations outlined above, they ought not to be ignored by anyone with an interest in our species’ continued survival. Yet most people today are far more worried about improbable, mundane dangers like plane crashes—events with neither global nor transgenerational implications—than about existential risks. This is worrisome because recognizing problems typically precedes solving them.

If scholars, scientists, political leaders, and other citizens of the global village fail to see the unique risks of this century, civilization will remain unnecessarily vulnerable to a catastrophe.
