When I was 21 years old, I experienced what might be termed a minor break with reality. I was fresh out of a three-year relationship—one that had been defined by my partner’s serial dishonesty. In the months following the breakup, I found myself questioning some of the things my ex had told me over the years, chipping away at the foundations of my understanding of him.
Major life moments began to seem suspect. A few months into our relationship, my partner told me that one of my friends said I wasn’t fitting in well at college, that the people I’d been hanging out with weren’t really all that fond of me. I felt unmoored by that remark, abandoned and vulnerable enough that when, immediately after, my partner suggested we move in together, I didn’t have the presence of mind to consider all the reasons it might not be a good idea. Now I had to contend with the possibility that my friend never said that; that the nasty comments people had been making about me were nothing more than another invention sold to me by my ex.
Offhand comments he’d made over the years were similarly suspect. In retrospect, it seemed unlikely that his uncle’s wife’s family actually owned Chicago’s famous Crain Communications Building. Details I had taken for granted about his friends and family and past relationships began to unravel before my eyes.
It would have been one thing if this sense of uncertainty had remained contained in my past relationship, but it began to spill into the rest of my life. I became vigilant against potential liars, and that vigilance became paranoia. If someone expressed affection for me, how could I know that they really meant it, that their professed feelings were genuine and not just a cruel attempt to manipulate me? How could I regain confidence in my judgment, in my perception, in my very reality?
Fifteen years after that breakup, I find myself thinking about what it means to lose the ability to distinguish between reality and mendacity, to arrive at what so many have called “the end of truth.” As new technology amplifies our ability to create and disseminate deception, many people have begun to fret that our connection to an objective truth is rapidly beginning to fray. They worry that the combination of social media and effortless video and photo editing will soon launch us into a dystopian space where we can no longer trust our own assessment of what’s real and what’s not.
The 2016 election made “fake news” a household term, and at the end of 2017, a so-called deepfake porn video purporting to feature Wonder Woman star Gal Gadot demonstrated just how easy it had become to manipulate a video and map one person’s face onto someone else’s body. In April 2018, BuzzFeed released a video of Barack Obama appearing to warn against the dangers of fake news, only to reveal that it was just a clip of the former president being wielded like a puppet by the actor Jordan Peele.
The easier it gets to manipulate video, the more critics fret that it won't be long until forged videos become indistinguishable from real ones, sowing distrust and creating alternate realities that further detach us from any sense of objective truth. It's not just paranoid futurists who are sounding the alarm: During a Senate Intelligence Committee hearing this month on worldwide threats, deepfakes were raised as a potential tool of global terrorism. "The barrier to entry for deepfakes technology is so low now, lots of entities short of nation-state actors are going to be able to produce this material and, again, destabilize not just American public trust but markets very rapidly," Republican Senator Ben Sasse told the Washington Examiner.
There are certainly reasons to be afraid of the power of manipulated video. Women — even those who are not celebrities — have already had their likenesses put into porn videos against their will as a new form of online harassment. An altered animation depicting Parkland survivor Emma Gonzalez tearing up the Constitution has been used to discredit her.
But as someone who once stood on the brink of the abyss, convinced that the truth had become unknowable, I'm not convinced that our future is really so bleak. However scary and new the latest assault on truth might feel, it's really just one more chapter in the long conflict between fact and fiction. Though there are always hoaxes that manage to sow discord, in the long run, we humans have proven ourselves eminently capable of finding our way back to the truth.
It’s easy to think that we’re living through uniquely perilous times for the truth. In a recent episode of the New York Times podcast Still Processing, hosts Jenna Wortham and Wesley Morris pegged the beginning of the seeming fragmentation of our consensus on reality to Donald Trump’s embrace of birtherism back in 2011. Wortham and Morris are hardly the only ones who see the Trump era as uniquely detached from the truth: Michiko Kakutani’s 2018 book, The Death of Truth: Notes on Falsehood in the Age of Trump, makes a similar argument, as have former government employees and an unending number of commentators.
Yet even the most cursory glance at history shows that the assault on truth didn’t begin with Donald Trump or, for that matter, the internet. Newspaper magnate William Randolph Hearst was one of the first to blur the lines between fact and fiction, sacrificing journalistic credibility in favor of pumping up his profits. When tensions between Spain and Cuba did not erupt into the kind of violence that would get people buying newspapers, Hearst encouraged his staff to manufacture stories and even attributed the unexplained explosion of the battleship USS Maine to a Spanish torpedo — an assertion that ultimately launched the Spanish-American War.
History has never been solid. It was repeatedly rewritten throughout the 20th century, and authoritarian regimes in China and Russia still update and alter narratives of the past to fit the needs of present rulers.
Over and over, new technology has played a part in promoting hoaxes and shaking our faith in our own perceptions. But once technology becomes commonplace, the fakes become easier to spot. It has long been possible to radically — and convincingly — alter a photograph, but most of us have simply adapted to that reality, learning to question images that seem suspect rather than wholly giving up on photography as a potential arbiter of the truth.
A century ago, two girls from a small English village called Cottingley were launched to international fame after claiming to have produced photographic evidence of fairies. Not everyone bought into their story, but many people did — including, most notably, Sherlock Holmes creator and arch-rationalist Arthur Conan Doyle, who even wrote a book defending the photographs’ credibility.
Looking at the photographs with a modern eye, it’s easy to see them for exactly what they are: photographs of girls posed next to paper dolls. Yet for the Cottingley believers, the photos’ fraudulent nature was more difficult to discern — not because of some inability to distinguish fact from fiction, but because the newness of photography lent the medium a near-magical quality. Unused to photographs, even rational people could believe that, in addition to documenting the visible world, cameras might be able to reveal things that are invisible to the naked eye. If X-rays could show the skeletons beneath our skin, why couldn’t cameras give us a glimpse of a hidden world of fairies and gnomes? But the magic of a new medium always fades.
In the modern era, it’s the ability to hoax itself that’s presumed to have unlimited powers. We’re not just intrigued by the technological wizardry on display in a deepfake; we’re convinced it has the power to wholly transform our ability to separate fact from fiction.
For Conan Doyle, specifically, there was an added incentive to believe that these photos were legitimate. “His father was in an insane asylum, drawing pictures of fairies,” Mary Losure, author of The Fairy Ring, or Elsie and Frances Fool the World, tells OneZero. “If fairies were real, his father wasn’t crazy.”
And Conan Doyle was blinded by his own biases toward the girls behind the Cottingley hoax. “One of the reasons [Conan Doyle believed] they had to be real was because the children were only little village girls, and they weren’t smart enough to fake them,” Losure explains. “They couldn’t fool him. Elsie, he thought, was not a good enough artist. He took all this baggage with him when he looked at the photos.”
You don’t need a high-tech hoax to manipulate someone who is primed to believe something. In late January, a video surfaced of teenage attendees of the March for Life in Washington harassing an indigenous war veteran. The initial response to the video was swift condemnation of the teens — until an essay on Reason asked readers to discount the racist cruelty they’d seen on display and consider that the teens might be the actual victims in the scenario.
Suddenly the internet split along predictable lines, with the right defending the boys while the left took them to task: two wildly different interpretations of the exact same video, based not on the visual evidence, but on what people wanted to believe.
“I think people conflate two things: the end of truth and the end of good faith,” Rose Eveleth, creator of the futurist podcast Flash Forward, tells OneZero. The end of truth, Eveleth points out, would be a “topsy-turvy, Phantom Tollbooth world where nothing is real,” a parallel universe we aren’t in danger of realizing. “The end of being able to assume that the person you’re engaging with is engaging in good faith,” on the other hand, is a more accurate description of what many of us fear.
The paranoia around deepfakes has also caused many of us to miss the potential positive uses of the underlying technology, which can track and manipulate images of the human face. Louis-Philippe Morency, an associate professor of artificial intelligence at Carnegie Mellon University whose research has helped develop the facial analysis tools that underpin deepfakes, points out that the technology can do far more than just create fake news. The breakthroughs that have enabled deepfakes have also improved facial analysis technology, creating programs that are better able to analyze and identify human emotions. And that technology is being tested for a purpose that seems wholly counter to the bad-faith endeavors of fake news and faux revenge porn: therapy. “The original application, at least from our perspective… is to help people with trauma,” Morency tells OneZero.
Consider therapy in virtual reality. If a patient is struggling with trauma but not ready to see a doctor in person, the technology could allow them to mask their face and voice, permitting an anonymity that could be shed when they’re finally comfortable enough to show their face.
Even in anonymity, Morency notes, “you want to keep the gaze information, the facial expressions, the head motion.” That data can help a therapist better understand their client. “There’s value in this.”
But it’s value that our paranoia about the collapse of good faith is preventing us from seeing.
“We will always have the option of listening to our gut instinct and our intuition,” says Jenn Brandel, a licensed independent clinical social worker. “Ninety percent of the time, that’s going to serve us really well.”
When Brandel works with a patient who is struggling with a severe fear of being misled — someone who, like me, has suffered a traumatic experience with deception that has left them paranoid and distrustful of others — she does not advise them to withdraw from human contact and abandon all hope of trusting someone again. Instead, Brandel says that we should “accept that there’s risk.”
“When we trust someone or something that may not be trustworthy, it might result in an outcome where we get hurt, or it may cost us something,” Brandel explains. “The key to not being so afraid of that is feeling really solid in ourselves and knowing that no matter what happens, we can recover from it and we can be okay.”
There will always be fake media and bad actors and people looking to sow discord and dissent. Technology didn’t create bad faith. And there will never be a fully foolproof way to guarantee that we won’t get fooled again. But if we can educate ourselves about those risks, we can become media literate and question stories that feel wrong. We can adapt to new technology as we go.
There are signs that this adaptation is already underway. Despite threats of an incoming avalanche of fake news, studies have shown that the vast majority of the public does not share fake media — and those who do tend to be older and less digitally savvy. Even though deepfakes have been billed as impossible to distinguish from unmodified video, researchers are already finding ways to detect manipulated faces. Neural networks that are trained to track and compare the facial movements in both modified and unmodified videos have shown great success in identifying the fakes — but, notably, studies have shown that even untrained human observers are able to spot altered videos the majority of the time.
Some of us will always end up getting lost in the blur between fact and fiction. But eventually, society always manages to find a way to see past the hoaxes. Fifteen years after my understanding of the world came crashing down around me, I know, on a deep level, that it’s harder to get fooled than many of us think. The truth is out there, so long as we care enough to look for it.