Why an Existential Risk Expert Finds Hope in Humanity’s Certain Doom

But we need to get our act together


Every year since 2016, a group of students at Princeton University has hosted a conference called “Envision.” The 2019 conference included speakers like Susan Schneider, Anders Sandberg, Ruha Benjamin, Aubrey de Grey, Michele Reilly, and Francesca Ferrando. I gave one of the opening talks on the first day of the conference, focusing on the potential downsides of continued technological development. This is that talk.

Let’s start with some historical context. Since time immemorial, there have been people in every generation who have wildly waved their arms in the air and vociferously shouted that theirs is the last — that the end is nigh. Some scholars believe that the ancient religion Zoroastrianism, which may have introduced the very first linear conception of time, began as an apocalyptic movement. And ever since a book by the theologian and polymath Albert Schweitzer was published in 1906, many New Testament scholars have believed that Jesus was an apocalypticist who expected the world’s end in his lifetime. When this failed to happen, Jesus tried to force the hand of God by, essentially, committing suicide on the cross. Thus, he shouts just before dying, “My God, my God, why have you forsaken me?” Scholars of Islam have similarly speculated about the possibility that Muhammad anticipated the end of the world during his lifetime or shortly thereafter.

There have also been many, many apocalyptic individuals and groups throughout time — far more than most people, even historians, recognize. The list includes the Millerites of the 19th century, the Anabaptists, the Japanese doomsday cult Aum Shinrikyo, and The Seekers. In fact, the last was studied by the psychologist Leon Festinger, who, because of his observations of the group following a failed prophecy, coined the term “cognitive dissonance.”

Since the 18th century Enlightenment, but especially over the past century, there has been an epistemological shift in thinking about the long-term fate of humanity. This has moved futurological thinking away from faith and revelation and toward evidence and reason. It has attempted to make eschatology — or the study of the end of the world — a thoroughly scientific discipline.


The result has been the formation of a new field of inquiry called “existential risk studies.” As the name implies, the central concept of this field is that of existential risk. What does this term mean? There are many definitions in the scholarly literature. The most intuitive is that existential risks are risks of human extinction. Another is that the term refers to instances of civilizational collapse. However, the most canonical definition comes from a 2002 paper by Nick Bostrom, in which he defines an existential risk as an event that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential for desirable future development.

On this account, an existential risk can only happen once in a species’ lifetime. This means that we must be proactive rather than reactive in preventing one from happening. We can talk about our extinction happening tomorrow, but not about it having happened yesterday. Hence, if a single existential catastrophe were to occur, the game would be over, and humanity would have lost.

Unfortunately, very little research to date has focused on existential risks and related phenomena. As Bostrom points out in a 2013 paper, there are more studies of dung beetles than of human extinction, and as I noted in 2018, Google Scholar returns millions more results for the term “cancer” than “existential risk,” which is a bit ironic given that curing cancer doesn’t much matter if there’s no one around to cure.

One reason the topic has been neglected, as I explain in a forthcoming book with Tom Moynihan at Oxford, titled A Brief History of Human Extinction, is that the very concept of human extinction in the modern biological sense is quite recent in its origin. For much of human history, the notion that humanity could disappear forever, like the dodo or non-avian dinosaurs, was quite literally unthinkable. Another reason might simply be that contemplating secular apocalyptic scenarios can be a rather depressing task. As a 2015 study notes, people tend to have three responses when confronted with end-of-the-world scenarios: fundamentalism, nihilism, and activism. What we need today aren’t fundamentalists and nihilists, but activists who respond to the grave dangers facing us with an eagerness to improve global security.

Let’s now consider the sources of existential risk. The most rudimentary distinction here is between natural and anthropogenic risks. I like to bundle risks from nature under the term “cosmic risk background,” which is modeled on the term “cosmic microwave background” from cosmology.

One such risk arises from supervolcanoes. These erupt, on average, once every 50,000 years and can inject sulfate aerosols into the stratosphere, above the weather, where they disperse around the globe, blocking out incoming solar radiation. If this were to occur tomorrow, it could cause worldwide agricultural failures, and thus mass starvation and possibly even human extinction. Similarly, large asteroids and comets can strike Earth and catapult dust into the stratosphere that also blocks out sunlight, thereby causing the same catastrophic outcome. And natural pandemics are a growing worry in our interconnected world, where a deadly germ can travel from one continent to another at the speed of a jetliner.

There is also an assortment of highly improbable risks associated with gamma-ray bursts, galactic center outbursts, supernovae, black hole mergers, and so on, as well as future risks arising from our aging sun swelling into a red giant and growing more luminous. And, in some 10⁴⁰ years, all protons in the universe may decay, as some theories predict, which would mark a hard limit to the lifetime of our evolutionary lineage.

Moving now to the anthropogenic risks, the most obvious is climate change. According to the Intergovernmental Panel on Climate Change, or IPCC, the consequences of climate change will be “severe,” “pervasive,” and “irreversible.” Or as a recent warning issued by more than 11,000 scientists from around the world put it, unchecked climate change will result in “untold suffering.”

One consequence of climate change is biodiversity loss, which has initiated the sixth major mass extinction event in life’s 3.8-billion-year history on our pale blue dot. This — not skyscrapers, asphalt roads, or massive landfills — will be our greatest legacy on Earth. According to the 2018 Living Planet Report, wild vertebrate populations declined, on average, by an unfathomable 60% between 1970 and 2014. You are welcome to extrapolate this trend into the future. In fact, a 2006 study did exactly this, concluding that there will be virtually no more wild-caught seafood by 2048. The oceans will be empty because of us, with the lives of millions of people who depend on fish as their primary source of protein hanging in the balance. To paraphrase the former environmental minister of Romania, we could be the first species in Earth’s history to document its own self-extinction.

There are also the dangers posed by nuclear weapons. The greatest concern here isn’t so much an intentional strike — although tensions between India and Pakistan have been worrisome — but an accidental launch. There were, in fact, numerous near-misses during and after the Cold War that, if time were rewound and played again with only slightly different initial conditions, would almost certainly have brought about a nuclear holocaust. Incidents involving Vasily Arkhipov in 1962, a day before the Cuban missile crisis ended, and Stanislav Petrov in 1983 come to mind. These are men who quite literally saved the world.

With respect to emerging technologies like CRISPR/Cas9, gene drives, future nanotechnologies, and so on, the primary concern relates to the fact that these technologies are not only enabling humanity to manipulate and rearrange the physical world in unprecedented ways, but also becoming increasingly accessible to small groups and even lone wolves.

Consequently, they are multiplying the total number of actors with access to what we can call a “doomsday button,” or a button that, if pushed, would wreak civilizational havoc. As Lord Martin Rees likes to say, “in a global village there will be global village idiots. And with this power” — that is, the unprecedented offensive capabilities that dual-use emerging technologies will distribute across society — “just one could be too many.” Whereas terrorists in the past had shovels, the malicious agents of tomorrow will have bulldozers to dig mass graves for their victims.

Perhaps you might think that no one would be so foolish or insane as to intentionally destroy the world. But I have elsewhere cataloged at least four categories of agents who would almost certainly push a doomsday button if only one were within finger’s reach. These are apocalyptic terrorists like Aum Shinrikyo and the Islamic State; omnicidal eco-terrorists who believe that the biosphere would be better off without us; individuals motivated by ethical systems like “radical” negative utilitarianism, which prescribes the elimination of all suffering; and what I call “idiosyncratic actors,” or agents driven by beliefs that are peculiar to themselves.

For example, a rampage shooter who participated in a 1999 massacre that killed 13 people repeatedly fantasized in his diary about causing human extinction. As he wrote, “in case you haven’t figured it out yet, I say ‘KILL MANKIND’ no one should survive.” And I have a colleague who knew two people who entered PhD programs in biology for the express purpose of learning how to synthesize designer pathogens to wipe out humanity. There really are, as the great science popularizer Carl Sagan put it, “madmen” out there. The fingers of such madmen will pose extraordinary risks to humanity in the coming decades, as doomsday buttons proliferate.

Skipping over scenarios associated with nanotechnology, physics disasters, extraterrestrial invasions, and a simulation shutdown, many existential risk scholars argue that the greatest long-term threat to humanity is machine superintelligence. The key worry isn’t that the A.I. will be malicious, malign, malevolent, or evil. Rather, it’s much more menacing: that the A.I. will have misaligned values and be able to, by virtue of its superhuman intelligence, outsmart us. The “default outcome” of this scenario, as Bostrom writes in his 2014 book Superintelligence, could very well be “doom.”

Furthermore, a small but growing group of scholars, including myself and Daniel Deudney at Johns Hopkins University, thinks that space colonization could actually increase the likelihood of catastrophe rather than decrease it, as Stephen Hawking and Elon Musk have argued it would.

And finally, given that most of the existential risks that haunt us today were unknown to people just a few decades ago — including natural risks like supervolcanoes and asteroid impacts, which weren’t accepted by the scientific community until the 1980s and 1990s — we have inductive reason for suspecting that there could be any number of additional risks lurking in the cosmic shadows that we cannot now even imagine. Perhaps we are as ignorant of them as Darwin was of nanotechnology and particle accelerators. I have somewhat playfully called such “unknown unknowns” monsters, of which there are several types that I won’t discuss here.

The picture that emerges from these considerations is very worrisome indeed. As Hawking declared in a 2016 Guardian op-ed, “we are at the most dangerous moment in the development of humanity.” This comports with estimates given by a number of reputable scientists and philosophers. For example, Lord Martin Rees writes in his 2003 book Our Final Hour that civilization has a mere 50/50 chance of making it through the present century. That’s a coin toss: if the coin lands “heads,” civilization survives. If it lands “tails,” the infrastructure of civilization crumbles and the human population implodes, leaving survivors in conditions similar to those of the Paleolithic, except much more impoverished. Fortunately, whether the coin lands heads or tails is ultimately up to you and me; the Democrats and Republicans; the United States, Russia, China, Australia, and Brazil; the human species.

Rees’s dismal statement comports with probability estimates offered by others who study the topic. For example, an informal survey of experts during a 2008 conference hosted by the Future of Humanity Institute at Oxford University yielded a median estimate of a 19% chance of human extinction before the 22nd century. Along these lines, the iconic Doomsday Clock, created in 1947 by the venerable Bulletin of the Atomic Scientists, is currently set to two minutes before midnight. Here midnight corresponds to a global catastrophe. The clock was originally set to seven minutes before midnight, and moved all the way to 17 minutes in 1991, when the Soviet Union collapsed and the Cold War came to an official end. The only other time the minute hand has been this close to midnight was in 1953, after the United States detonated the first thermonuclear weapon, which made the atomic bombs dropped on the Japanese archipelago in 1945 look like bang snaps.

Note here that the overwhelming source of risk arises not from nature but from our own actions and ingenuity. In fact, the chance of extinction per century from natural causes is probably on the order of 1 in 500. Although, as Neil deGrasse Tyson once said, “the universe is a deadly place. At every opportunity, it’s trying to kill us, and so is Earth,” it turns out that humanity is far more suicidal than the universe is homicidal.

So far we have focused mostly on empirical reasons for why the “risk potential” of the world today is almost certainly greater than at any previous moment. But there are also a priori considerations like the Doomsday Argument. Its conclusion is that we are systematically underestimating the probability of extinction. The best way to explain the argument is analogically. Imagine that you have two urns in front of you. In the first, there are 10 balls numbered 1 through 10. In the second, there are 1,000 balls numbered 1 through 1,000. Your task is to reach into an urn, pick a ball, look at the number, and guess which urn it came from. Imagine you do this and pick a ball numbered 7. It is far more likely that you picked a ball from the first urn than from the second.

Now consider two hypotheses about the total number of human beings that have ever existed. So far, between 60 and 100 billion people have lived and died on this oblate spheroid, Earth, over the past 300,000 years. The first hypothesis says that there will be 150 billion in total; the second that there will be 100 trillion. If you treat yourself as a random sample of all humans who will ever exist, then the first hypothesis is much more likely to be true than the second. It follows that doom is likely to happen sooner rather than later, so whatever your empirically based estimate of annihilation is, you should revise it upward.
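To make the arithmetic behind this analogy concrete, here is a minimal sketch in Python of the Bayesian update the argument relies on. The equal priors, the function name, and the assumption that you are roughly the 100 billionth human are illustrative choices for this example, not part of the argument as stated in the talk.

```python
# A minimal, illustrative sketch of the Bayesian update behind the urn analogy
# and the Doomsday Argument. Priors and population figures are assumptions.

def prob_small_hypothesis(prior_small, n_small, n_large, my_rank):
    """Posterior probability of the smaller-total hypothesis, given that you
    observe rank `my_rank` and assume you are a uniform random draw from
    whichever total turns out to be true."""
    # Likelihood of observing this particular rank under each hypothesis
    like_small = 1.0 / n_small if my_rank <= n_small else 0.0
    like_large = 1.0 / n_large if my_rank <= n_large else 0.0
    evidence = prior_small * like_small + (1.0 - prior_small) * like_large
    return prior_small * like_small / evidence

# Urn case: ball number 7, urns holding 10 vs. 1,000 balls, equal priors.
print(prob_small_hypothesis(0.5, 10, 1_000, 7))          # ~0.99

# Doomsday case (illustrative figures): you are roughly the 100 billionth human,
# and the hypotheses are 150 billion vs. 100 trillion humans in total.
print(prob_small_hypothesis(0.5, 150e9, 100e12, 100e9))  # ~0.999
```

Under these assumptions, drawing ball number 7 points to the 10-ball urn with roughly 99 percent probability, and the same logic makes the “only 150 billion humans ever” hypothesis dominate its rival, which is why the argument pushes estimates of doom upward.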


You might think there’s something suspicious about this line of reasoning. In fact, many people, upon first hearing the argument, claim to see some obvious flaw. But it turns out to be quite hard to actually refute. Suffice it to say that the Doomsday Argument is predicated on one of the two most influential theories about what are called “self-locating beliefs,” or beliefs about where you exist in space and time when you can’t just look around and figure it out. The problem is that if one accepts the other influential theory, there is an even stronger argument for why doom is imminent. In either case, the conclusion is bleak: Our continued survival in the universe is much more precarious than simply listing all the existential risks facing us implies.

Franz Kafka once quipped that “there is an infinite amount of hope in the universe... but not for us.” I believe this is wrong, if interpreted literally. The fact is that there may not be a single existential risk facing humanity that we cannot effectively mitigate, if not completely eliminate. The question is whether humanity will rise to the challenge — whether our individual and collective wisdom will prove sufficient to slalom through the obstacle course of existential hazards before us.

Integral to this project is, I believe, exploring a wide range of perspectives on the topic. Since the early 2000s, the field of existential risk studies has been dominated by well-off white men, who tend to have a particular, and limited, well-off white-male perspective. I could provide many egregious examples. Hence, the field would benefit greatly from an influx of researchers, young and old, with differing views on what we should value, in the context of existential risk, and why. There is, in fact, some evidence that groups are more “collectively intelligent” the more women they include. Thus, insofar as existential risk studies is a group effort, it would benefit from greater gender diversity.

There is every reason to believe that the future could be unimaginably and spectacularly marvelous. As H.G. Wells eloquently put it in 1902, “we are creatures of the twilight.” But we need to get our act together. There is still plenty of time for young people to make a profound difference. Perhaps a difference that enables humanity to dodge disaster.

Author and scholar of existential threats to humanity and civilization. www.xriskology.com. @xriskology
