Three years ago, the American Museum of Natural History in New York hosted a debate that has since been watched 2.1 million times on YouTube. Four physicists and a philosopher, prodded by moderator Neil deGrasse Tyson, spent two hours discussing whether reality as we know it might be a simulation.
If it is, we’re all characters in a computer-generated world that is so dense with data, so abundant in processing power, that it gives rise to our very consciousness. It wouldn’t necessarily mean we’re being manipulated by a master simulator. That entity might have established the laws of nature as we perceive them and then hit “go” to see what would evolve. Regardless of the specifics, it would mean our universe is being or was originally rendered from some other plane — from the actual “base reality.”
This hypothesis wasn’t new in 2016, but something about that New York debate was compelling, and it was covered in several science publications. Geek high priests like Elon Musk have since claimed that we’re very likely to be in a simulation, bestowing a sheen on the idea for its enthusiasts and potential converts. Meanwhile, even if you find the simulation idea ridiculous — I probably should disclose that I’m in this camp — it can be stimulating to hear how someone concludes that the hypothesis is not merely conceivable, but actually plausible.
At the end of the night, Tyson asked each of the five panelists to blurt out “the likelihood that you think we are in a simulation.” The highest of the numbers came from the philosopher in the group, David Chalmers of NYU, who said, dryly, “42 percent” — a joke from The Hitchhiker’s Guide to the Galaxy. Max Tegmark, a physicist at MIT, said 17 percent, which is roughly the chance that you’ll get the specific number you want when you roll a die. Jim Gates, who is now at Brown University, said 1 percent. Lisa Randall of Harvard, who had earlier said she wondered why people consider the simulation hypothesis an interesting question, said the chances of it being correct were “effectively zero.” Unfazed, Tyson said he suspected that “the likelihood may be very high.”
The only panelist who wouldn’t give any odds was Zohreh Davoudi, a theoretical physicist now at the University of Maryland. “I can’t give you that number,” she said. “I don’t have any answers.” Her reticence was admirable, because, more than anyone else’s on that stage, Davoudi’s research goes to the heart of whether the simulation hypothesis might be legit.
Davoudi specializes in a branch of physics that aims to fully understand nature by analyzing the smallest stuff. We know that the matter all around us has various properties, but why? Exactly how is it that the interactions of elementary particles in everything, governed by fundamental laws such as gravity and the “strong force” inside atoms, predict the behavior of galaxies and black holes and the chemical reactions that allow your brain to helm your life? One way to find out is by modeling these interactions in a computer.
That’s something Davoudi and her colleagues do. They use ultrapowerful software to simulate the behavior of fundamental particles in three-dimensional space. Because of the limitations of even the best supercomputers, they model extremely small spaces. We’re talking subatomic scales. But it’s a start. Better computers to come — including quantum computers that derive their power from the quirky interactions of fundamental particles — will be able to simulate bigger and bigger chunks.
In a paper from 2012, Davoudi and two colleagues at the University of Washington pointed out that their simulations have imperfections, which stem from constraints on the available computing power — and that such constraints will probably remain in much larger simulations as well. In particular, they wrote, it should be possible to detect “an underlying cubic lattice structure” that the programs used to model space and time.
It follows, then, that anyone interested in the simulation hypothesis should look for such imperfections in our universe. What signs might we see that the simulation is conserving computing resources?
Davoudi and her colleagues suggested that one such signature might be found in certain cosmic rays. These are extremely high-energy particles that zing through space. They’re hard to study, because the most energetic ones are only rarely observed striking Earth’s atmosphere and carry far more energy than scientists can produce inside even the largest particle colliders. But if enough of them can be measured, it should be possible to calculate their origins. Are they propagating uniformly from all over, or do they come from some preferred directions? If it turns out to be the latter, Davoudi says, “that could be a sign that the underlying space-time has some sort of geometry” similar to ones in her own simulations. This might also help settle a longstanding debate in physics about whether space and time are continuous even at the smallest scales or made up of discrete, granular pieces — whether things are pixelated, you might say, to stay with the simulation analogy.
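To make the “preferred directions” question concrete, here is a toy sketch (my own illustration, not the analysis from the 2012 paper): generate arrival directions for simulated “cosmic rays,” then check whether they cluster along particular axes rather than arriving uniformly from all over, using a simple chi-square statistic over direction bins.

```python
# Toy isotropy check: do arrival directions cluster, or arrive uniformly?
# This is an illustrative sketch, not the method used in the actual paper.
import math
import random

def anisotropy_score(azimuths, n_bins=8):
    """Chi-square statistic of arrival azimuths (radians) against a
    uniform distribution; large values suggest preferred directions."""
    counts = [0] * n_bins
    for a in azimuths:
        idx = int((a % (2 * math.pi)) / (2 * math.pi) * n_bins)
        counts[min(idx, n_bins - 1)] += 1  # guard against float rounding
    expected = len(azimuths) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
# Isotropic sky: directions drawn uniformly from all over.
uniform = [random.uniform(0, 2 * math.pi) for _ in range(10_000)]
# Anisotropic sky: directions clustered around one axis.
aligned = [random.gauss(math.pi / 2, 0.3) for _ in range(10_000)]

print(anisotropy_score(uniform))  # modest, on the order of n_bins
print(anisotropy_score(aligned))  # huge: a clear preferred direction
```

A real analysis would work with far fewer events on the full celestial sphere and a proper significance test, but the underlying question is the same: does the distribution of arrival directions depart from uniformity?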
If cosmic rays do have the qualities Davoudi is proposing, they could tell us something new about the nature of space and time. But that wouldn’t necessarily mean this is a simulation. “Someone else might have a different explanation,” she says. “This would be just a starting point.”
Davoudi is perfectly happy to be so uncertain about the simulation hypothesis that she won’t even hazard a guess about the likelihood that it’s accurate. She definitely thinks there’s a chance. Having made progress herself in simulating a minuscule portion of the universe, she’s open to the possibility that a higher intelligence with vastly greater computing power could simulate the whole thing. That’s why she and her colleagues wrote their 2012 paper: because they thought the simulation hypothesis was worth taking seriously. At the very least, she says, it’s very possible that the universe itself is, at the level of fundamental particles that pop in and out of existence, doing something akin to what we think of as computation, even if no outside agent programmed it.
It could take decades, maybe centuries, to get enough data on cosmic rays or other aspects of the universe to have anything that might count as evidence for the simulation idea. In the meantime, Davoudi says it’s unwise to get attached to the hypothesis. “The danger,” she says, “is that we’ve seen in the history of science that sometimes philosophy, a philosophical argument, points very strongly to something and can be very intuitively correct, and then physics or science comes in a completely opposite direction.”
Just what is this philosophical argument Davoudi is talking about — the rhetorical pathway that leads even rationalists like her to develop an intuition that the simulation hypothesis is compelling? A lot of it comes from Nick Bostrom, a philosopher at Oxford.
Bostrom argues that future generations of humans — or some “post-human” species that follows us — will develop computers many times more powerful than any machine we can now envision creating. With those machines, it could be possible to simulate entire worlds in such fine-grained detail that the characters inside will have a richness of experience that adds up to consciousness. If future people run a lot of these simulations, there would be many more simulated minds in existence than nonsimulated ones. So, the argument goes, statistically speaking, we’re more likely than not to be in a simulation.
Baked into this are several enormous assumptions about how the future of computing will unfold and how intelligent beings of the future are likely to spend their time. Indeed, Bostrom offers an out for people like me who find this armchair argument unconvincing. As he wrote in 2003: “[I]f we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears.” I’m cool with not believing that.
Some people have nearly the opposite response. Bostrom told the New York Times in 2007 that his “gut feeling… is that there’s a 20 percent chance we’re living in a computer simulation.” (Though now he won’t say what he thinks the probability is.) But other people who follow his reasoning leap significantly higher, like Neil deGrasse Tyson. Elon Musk takes Bostrom’s starting proposition as a given — he asserts that video games eventually are going to get “indistinguishable from reality” — and builds on that claim to say that this is almost certainly a simulation. In fact, Musk says, you should hope it is, because the most likely thing that would prevent games of such crystalline perfection from being developed is the end of civilization itself.
One thing to consider about that stark method of reasoning is how old-fashioned it is. Thinking of the universe as the inside of a video game or some other computer program is the latest in a long line of metaphors shaped by the dominant technology of a time.
As Cambridge University computer scientist John Daugman noted in a 1993 essay, over the course of history, the brain and the soul have been compared to technological spectacles such as fountains, pumps, clocks, steam engines, hydraulic machines, and electrical circuits. By describing reality in terms of machinery people could picture, each of those formulations limited people’s imaginations. Daugman saw it happening again in the tendency to describe cognition in terms of computation. “We should remember that the enthusiastically embraced metaphors of each ‘new era’ can become, like their predecessors, as much the prisonhouse of thought as they at first appeared to represent their liberation,” he wrote.
The pattern rolls on with the simulation hypothesis, Daugman recently told me. “In the current case,” he says, “virtual reality becomes the magical metaphor.”
To put stock in the simulation hypothesis without any evidence, you have to believe that this time, we’ve hit on an accurate description and not just a handy metaphor. But that makes it more religion than science, because the simulation hypothesis can’t be disproved. After all, any evidence that calls it into question will be dismissed as having been put there by the simulator.
If the simulation hypothesis is an overly convenient metaphor run amok, the explanations of the universe that are waiting to be uncovered are probably much stranger and more beautiful. Sylvester James Gates Jr. has an idea that hints at such an outcome.
On that debate stage in New York in 2016, Gates all but dismissed the simulation idea (1 percent chance). But something he said at the beginning of the session seemed to imply the opposite. He said he had found “error-correcting codes” in physics equations.
Error correction is a vital principle in communications. It’s how computers reliably relay information even in the presence of glitches that might physically change a bit from a 0 to a 1 or vice versa. “Error-correcting codes are what make browsers work,” Gates told the audience. “So why were they in the equations that I was studying?” Gates isn’t alone. Other researchers have found similar attributes in equations for quantum mechanics. What’s up?
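To see the principle at its simplest, consider a 3x repetition code: send every bit three times and take a majority vote on arrival. Real systems use far more efficient codes (Reed–Solomon, LDPC, and the like), but this minimal sketch shows how redundancy lets a message survive a flipped bit.

```python
# Minimal error correction: a 3x repetition code. Each bit is sent three
# times; a majority vote on the receiving end recovers the original bit
# even if noise flips one of the three copies.

def encode(bits):
    """Repeat each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority vote over each group of three."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                     # noise flips one bit in transit
assert decode(sent) == message   # the flip is silently corrected
```

The cost is obvious (three bits sent for every one of payload), which is why practical codes add much less redundancy while still correcting errors; the point here is only that extra, structured information in the signal is what makes recovery possible.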
Gates was reluctant to talk about this, because he’s so tired of the simulation hypothesis. But when I got him on the phone, he was gregarious.
Physicists describe reality by writing equations to match what’s observed in nature: E=mc² and force=mass×acceleration are two common ones. Others are far more complicated. As Gates describes it, for theoretical physicists like him — who explore how things we don’t yet understand might work — an equation is almost like a musical composition. You can play variations on it. An improvising musician can speed up the tempo or play a piece in a different key; a physicist can vary an equation by calculating what would happen if, say, electrons had different properties than the ones we usually assume.
Gates is a scholar of “supersymmetry,” which is the conjecture that each fundamental particle has a corresponding particle whose existence would explain a lot of the weird things we detect in the universe. There isn’t, however, confirmation that those hidden particles exist. Anyway, Gates says that in explorations of supersymmetry, it’s useful to represent some things, like photons, as either zeros or ones. And if you do that, the equations work only if they have extra information in them: error-correcting sequences.
Remember, these codes show up in equations that might describe reality, which is very different from their being observed in reality itself. But let’s assume that some future observation shows that these codes somehow do describe an aspect of the universe. There’s precedent for long delays between conjecture and proof: Einstein predicted gravitational waves in 1916, and they weren’t detected until 2015. If that turns out to be the case with error-correcting codes, why would they be there?
Gates thinks they’d be a sign that the universe… evolved. It’s possible that in some “inchoate epoch,” universes with a range of mathematical properties arose and couldn’t last. Ours did, because of the stability provided by a feedback mechanism like error correction. Feedback mechanisms keep biological systems going, too — scientists are still sorting out how error-correcting methods keep DNA in tune. It shouldn’t be surprising to see the universe itself having a way to propagate its own stability, if the universe is, in some sense, a manifestation of life merely by being something rather than nothing.
But wait a minute. In communications, error correction protects the integrity of information flow even when there’s noise — some random problem that makes a zero come out as a one. What would the noise be in the universe?
“There is one possible answer I know,” Gates says. “And it’s called the decay of the false vacuum.”
There is a fantastic explanation of the false vacuum on YouTube (7.5 million views!), but here’s a very short version: The equations derived in the 1960s that predicted the existence of a fundamental particle called the Higgs boson (which was then observed in 2012) also indicated “that it would be possible for the entire universe to disappear,” as Gates puts it. If that’s true, he adds, the presence of error-correcting codes in the equations for supersymmetry might indicate “that the universe underwent some process resulting in protection against that decay.”
Make of that what you will. It, too, is based on several assumptions. Perhaps the biggest is that it’s ultimately possible for us upright apes to fully wrap our minds around the essence of the universe.
Why do we expect that we will? It’s a belief, Gates says, that “has just worked out for our species for several hundred years. That’s all I can tell you. Whether it will continue to work in the future, I have no idea.”