End Times

A Brief Guide to the End of the World

Humanity faces existential peril as never before. But whether or not we go extinct is up to us.

Illustration: Jon Han

Asteroids, supervolcanoes, nuclear war, climate change, engineered viruses, artificial intelligence, and even aliens — the end may be closer than you think. For the next two weeks, OneZero will be featuring essays drawn from editor Bryan Walsh’s forthcoming book End Times: A Brief Guide to the End of the World, which hits shelves on August 27 and is available for pre-order now, as well as pieces by other experts in the burgeoning field of existential risk. But we’re not helpless. It’s up to us to postpone the apocalypse.

If you suspect the world is falling apart, you’re not alone. In a 2012 poll by Reuters covering more than 20 countries, 15% of respondents predicted that the world would end in their lifetimes. A 2015 survey of Americans, Britons, Canadians, and Australians found that a majority rated the risk of our way of life ending within the next one hundred years at 50% or greater, while a quarter believed humanity had a better than even chance of being wiped out altogether over that time frame. In 2018, a UN scientific panel reported that the world had just 12 years to sharply reduce carbon emissions or risk a global catastrophe. Young people seriously question whether they should have children, fearing that bringing new human beings into a world on the brink of total catastrophe would be a moral obscenity.

What’s ironic is that this existential panic unfolds against the backdrop of a world that — for most of humanity — is better than it has ever been. In 2018, for the first time in history, more than half of the world’s population qualified as “middle class” or “rich.” Infant mortality has fallen by more than half over the past 25 years. Even as weapons have grown far more lethal, the global death rate from conflict is less than it was six hundred years ago, when we still largely fought with swords and spears. And if numbers like that seem too dry, ask yourself this question: Would I have preferred to be born 50 years ago, not long after a global war killed more than 60 million people? One hundred years ago, before the age of antibiotics, when a simple infection could end your life? One thousand years ago, when human life expectancy was about 30 years? I doubt it.

If we don’t appreciate the present, it’s in part because we don’t fully understand the past — even as we make the mistake of assuming the future will be like the present. Psychologists have a name for this trait: the availability heuristic, the human tendency to be overly influenced by what feels most visible and salient in our recent experience. The availability heuristic can cause us to overreact, as when we hear about reports of a suicide bombing and become fixated on the danger from terrorists, ignoring the longer-term data that shows such incidents are on the decline. Risks that are most available to the mind are the ones that become top of mind, which is why so much of our regulation is driven by crisis, rather than by reason. The myopia of the availability heuristic leaves us fixated on everything that seems to be going wrong today, and blind to how far we’ve come.

But that same psychological bias can also lead us to underreact to far greater dangers, threats of a sort we’ve never experienced. The internet may remember everything, but human memory is short and spotty. Few of us have experienced, in our own lives, catastrophes truly worthy of the name, and no human has seen an asteroid on a collision course with our planet, or witnessed a disease rise and threaten our very existence. These threats have no availability to us, so we treat them as unreal — even if science and statistics tell us otherwise. Our failure to understand that the future could be radically different from the past is above all else a failure of human psychology. And that failure could prove fatal for our species.

We think we know how bad it can get, but the worst catastrophes that have ever befallen the human race — two world wars, the Black Death that killed as many as 200 million people in the 14th century, the biggest hurricanes and most devastating earthquakes — are mere speed bumps compared to what we face now. These risks are darker than the darkest days humanity has ever known. They’re called existential risks, risks capable of putting an end to the existence of humankind, both now and in the days to come. They are the pitfalls we can’t come back from, the ones that could halt the human story in mid-sentence.

The death of the dinosaurs some 66 million years ago, thanks largely to the impact of a six-mile-wide asteroid, was a mass extinction event; 99.9% of the species that have ever lived on Earth have gone extinct. Some evolved into new species, but most, including every other Homo species we’ve shared the planet with, simply died out. And the same fate could befall us.

But if the universe has always wanted to kill us, at least a little bit, what’s new is the possibility that we might destroy ourselves, whether by error or intention. Man-made or anthropogenic existential risks were born with the successful test of the first nuclear weapon at Trinity Site in New Mexico on the morning of July 16, 1945. The bomb gave us the power to do to ourselves what natural selection had done to most other species before us.

Nuclear war, though, is just the first man-made existential risk, one that has grown no less lethal even as it has receded from our attention. With every passing year, billions upon billions of tons of greenhouse gas emissions are added to the atmosphere, accelerating man-made climate change. Given enough time — along with some bad luck — global warming could plausibly begin to threaten our existence. Even more frightening — and far harder to predict or control — are the existential risks arising from new technologies like synthetic biology or artificial intelligence, technologies that could create threats we can hardly imagine, bombs that could explode before we even know they’re armed.

How much danger are we in? The Canadian philosopher John Leslie, who helped invent the field of existential risk studies with his 1996 book, The End of the World, estimated a 30% chance that humans would go extinct over the next five centuries. In his final published remarks, the late Stephen Hawking put our species on an extinction clock, writing: “One way or another, I regard it as almost inevitable that either a nuclear confrontation or environmental catastrophe will cripple the Earth at some point in the next 1,000 years.”

At a 2008 symposium put on by Oxford University’s Future of Humanity Institute (FHI) — one of a collection of academic groups that have been formed to study existential risk — a group of experts collectively put the overall chances of human extinction before the year 2100 at 19%. That may leave us with a better than four in five chance of making it to the 22nd century, but as the existential risk expert Phil Torres points out, even a 19% chance of human extinction over the next century means that the average American would be at least 1,500 times more likely to die in an end times catastrophe than they would in a plane crash.

In a 2003 book, Martin Rees, Britain’s Astronomer Royal and the co-founder of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, put the odds of humanity successfully making it through the century on par with a coin flip — 50/50. “Just a few people or even an individual can, by error or by design, cause a catastrophe that cascades very widely, even globally,” Rees told me when we spoke in 2018. “I like to say the global village will have its village idiots.” And now those village idiots are armed and dangerous.

Rees is right to focus on the existential risk that emerging technologies will super-empower individuals and small groups that may harbor apocalyptic intentions. We remain as vulnerable as ever to planetary disasters like asteroids and supervolcanoes, which have wiped out life on Earth before. But the very fact that Homo sapiens has survived for hundreds of thousands of years means that we can hope our run of luck will continue through the next century, and even longer. By one estimate, the probability of human extinction from a natural catastrophe over the next century is almost certainly lower than 0.15% — tiny, though not zero. And we have something the dinosaurs and other long-extinct species lacked — scientists and engineers who can defend us from the dangers above and below, provided we give them the resources they need.

But the same brains that could protect us from natural existential risks have introduced entirely new ones into the world, technological risks far greater than anything this planet could throw at us. We’re only beginning to understand how these technologies might be used, and how they might be abused. What sets them apart from existing man-made threats like nuclear weapons is that they come not just with risks, but with benefits. Synthetic biology offers us the potential to create immortal organs, powerful drugs, and crops that could keep a growing and warming planet fed. Artificial intelligence may be the most important invention in human history — and possibly the last one we’ll ever need. These technologies are “dual use” — the same science can be used for good, including to counter other existential risks, and for ill. We may not be able to tell which is which until it’s too late. There are no easy answers when it comes to the end of the world. And the end of the world matters more than we can begin to imagine.

Extinction, as the environmental slogan goes, is forever. The true horror of the end times is measured not by our own deaths, and the deaths of everyone we know and love, not just by the deaths of our children and grandchildren, but by the nullification of all who would come after them, all those who would live and love and carry this species forward. An existential risk realized is the death of the future.

Basic morality calls us to do what we can to save a single life. If the world were in immediate existential peril — if, for example, a very large asteroid were bearing down on the Earth — we’d do whatever we could, spend whatever we had, to try to save the billions of people who live here. By that same token, shouldn’t we be even more motivated, even more desperate, to protect the future generations who would take their turn on Earth — provided we don’t screw it all up now? If we include the future — all of the future — the stakes of what we do or don’t do in the present moment become unimaginably enormous. We could have thousands, tens of thousands, even millions of years of civilization ahead of us, but it depends on those of us alive in this moment.

To save ourselves we need to think about the unthinkable, and not merely understand the future but feel its gravity. Our greatest existential challenge isn’t technical or political, but conceptual. We have to believe that the end of the world can happen, even as we believe that we can do something about it. And we can.
