An Eerie Historical Deepfake Imagines Nixon Telling the World the Moon Landing Failed

A team of scientists used A.I. to create a convincing facsimile of a historical speech that never happened, and put the threat of fake information front and center

Even from a distance, you recognize the voice: pugnacious, portentous, with that famous bulldog growl. It’s Nixon, without a doubt. Then you see the TV screen showing the most reviled president of the 20th century at his desk in the Oval Office, flanked by flags, giving an address to the nation.

But it isn’t quite right; Nixon seems to be saying something about dead astronauts. “These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery,” he intones gravely. “But they also know that there is hope for mankind in their sacrifice.” Double take doesn’t quite do it: even for Tricky Dicky, this is a turn-up for the books.

The footage, of course, is faked — more accurately, deepfaked. The speech is authentic, in a way: it was written for Nixon 50 years ago by one of his speechwriters, William Safire, in the eventuality that the first moon landing failed and the Apollo 11 landing crew was lost. As all of us know, the words were never delivered. Armstrong and Aldrin landed and made it safely back to Earth; indeed, Nixon was there to greet them after splashdown, on board a U.S. Navy ship in the middle of the Pacific.

The video footage of Nixon reading the speech is a hoax cooked up by creatives at MIT’s Center for Advanced Virtuality, who recently put it on display at the IDFA documentary festival in Amsterdam. Entitled “In Event of Moon Disaster,” it’s a tour de force of high-tech fakery. Working alongside technicians in Israel and Ukraine, the MIT team used advanced machine-learning techniques to create new audio of Nixon speaking, and layered it on top of edited video footage. The finished thing is not only eerily convincing; it aims to make us reconsider what we think of as historical truth.

When we spoke a few days before the opening, Francesca Panetta, the newly installed creative director at the Center for Advanced Virtuality, explained the thinking behind the project. Deepfakes suddenly seem to be everywhere, and have become a source of both amusement and anxiety — whether it’s Trump making a surprise cameo in Breaking Bad or videos that splice the faces of Hollywood actresses onto the bodies of porn stars (more than 90% of deepfakes on the web are thought to involve porn). “Deepfake” now even appears to be a verb.

But, while deepfake makers have sometimes delved into history, says Panetta, “there haven’t been many projects that try to explore what it means to alter historical footage in a convincing way. And we know that people are much more likely to believe something if it’s video. Seeing is believing, right?”

Not that making good deepfakes is straightforward. First Panetta and her co-director Halsey Burgund edited the original source footage — which actually came from Nixon’s infamous resignation speech in August 1974 — down to a two-minute segment. Then they hired the actor Lewis D. Wheeler to read the full speech and to listen to thousands of tiny clips of Nixon, imitating the intonation of each. Altogether it took a grueling week in the studio.

This produced a “training set,” which the Ukrainian firm Respeecher used for A.I.-based “synthetic voice production”: Wheeler’s intonation and speech rhythms, re-spoken in an imitation of Nixon’s voice. Finally, the fresh audio and a video of Wheeler reading out the speech were sent to the Israeli company Canny AI, which mapped the sound onto the original footage and used another machine-learning system to alter Nixon’s mouth movements so that everything matched up, a technique known as video dialogue replacement.
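
For readers who want to picture how those pieces fit together, here is a minimal, purely illustrative sketch of the pipeline in Python. The function names and data structures are hypothetical stand-ins (neither Respeecher’s nor Canny AI’s system is publicly available as an API), and the bodies only model what flows between the three stages, not the machine learning itself.

```python
# Illustrative sketch of the deepfake pipeline described above.
# All functions are hypothetical placeholders that model data flow only.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Clip:
    """One short audio segment: who is speaking and what is said."""
    speaker: str
    text: str


def build_training_set(nixon_clips: List[Clip],
                       actor_readings: List[Clip]) -> List[Tuple[Clip, Clip]]:
    """Pair each archival Nixon clip with the actor's imitation of it."""
    return list(zip(nixon_clips, actor_readings))


def synthetic_voice_production(actor_performance: Clip,
                               training_pairs: List[Tuple[Clip, Clip]]) -> Clip:
    """Stand-in for voice conversion: the actor's intonation and rhythm,
    relabelled as the target (Nixon) voice. A real system would learn the
    mapping from the training pairs; here we only model the hand-off."""
    return Clip(speaker="Nixon (synthetic)", text=actor_performance.text)


def video_dialogue_replacement(source_video: str, new_audio: Clip) -> str:
    """Stand-in for lip-sync adjustment: map the new audio onto the source
    footage and re-time the mouth movements to match."""
    return f"{source_video} with mouth movements re-synced to: '{new_audio.text}'"


if __name__ == "__main__":
    training = build_training_set(
        [Clip("Nixon", "I shall resign the presidency effective at noon tomorrow.")],
        [Clip("Actor", "I shall resign the presidency effective at noon tomorrow.")],
    )
    fake_audio = synthetic_voice_production(
        Clip("Actor", "These brave men, Neil Armstrong and Edwin Aldrin, "
                      "know that there is no hope for their recovery."),
        training,
    )
    # Hypothetical file name for the edited resignation-speech footage.
    print(video_dialogue_replacement("resignation_speech_1974.mp4", fake_audio))
```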

“To make a deepfake convincing is actually really hard,” Panetta says. “In a way, that was almost a relief.” Even a few days before the opening in Amsterdam, she and the Dutch crew were still battling to synchronize Nixon’s mouth with the audio, the toughest bit of the process.

To aid the illusion, Panetta and Burgund decided to make a life-size replica of a 1960s living room, with the film playing out on a vintage TV set (“it’s probably the second time this TV has screened Apollo 11,” jokes one of the crew). When you walk in, it feels like stepping back in time, down to the vintage coffee table and the migraine-provoking wallpaper.

It’s one thing to play a video like this in a documentary festival, where the context is clear; given that there are plans to put it online next year, does Panetta worry that people might actually be fooled? “We’re very aware that we don’t want to be creating a piece of misinformation,” she replies. “We’re doing everything we can to prevent it: blockchain, watermarking.”
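
As a rough sketch of what a provenance check along those lines could look like, the snippet below hashes a released file and compares it against a digest the creators might publish in a tamper-evident registry (a blockchain being one option). The file name and digest are hypothetical placeholders, not details of the MIT team’s actual safeguards.

```python
# Illustrative provenance check: hash the released file once, publish the
# digest somewhere tamper-evident, and let viewers recompute and compare.

import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    sha = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest()


def matches_registered_digest(path: Path, registered_digest: str) -> bool:
    """True if the local copy matches the digest the creators published."""
    return file_digest(path) == registered_digest.lower()


if __name__ == "__main__":
    # Hypothetical local copy and published digest, for illustration only.
    video = Path("in_event_of_moon_disaster.mp4")
    published = "0" * 64
    if video.exists():
        print(matches_registered_digest(video, published))
```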

However or wherever you see it, “In Event of Moon Disaster” obviously spotlights issues of trust: even though you know this particular video is fake — is Nixon’s voice slightly digitized? do his head movements look natural? — it takes a moment to adjust.

But the project also poses broader questions about how we understand and process historical sources, especially where they look plausible. For all the endless conspiracy theories surrounding the moon landing, footage of a speech mourning the Apollo 11 astronauts wouldn’t fool many people (Aldrin is very much alive, for one thing, and as bullishly opinionated as ever).

The risk comes not so much when a fake is convincing as when it’s dangerously close to the truth, or gives us something we yearn to believe is true, and when we’re searching for the final crumb of evidence.

Professor Tim Hitchcock, who teaches digital history at the University of Sussex, points to cases such as the so-called Cottingley Fairies — a notorious hoax from the 1910s, in which two English children managed to convince the watching world that they had taken photographs of fairies in the wild. The scam was unbelievably low-tech, even for the time — the girls used their father’s camera and posed with illustrations cut out from picture books — but it nonetheless created an international sensation. Even Sir Arthur Conan Doyle, the creator of master sleuth Sherlock Holmes, was one of those duped.

“Hoaxes are nothing new,” Hitchcock says. “But they seem to crop up with new kinds of technology. Think of the Lumière brothers showing their film of a train in 1895, and stories that the audience stampeded because they thought it was real. It’s as if we’re trying to adjust to the possibilities each time.”

Tim Hwang, who researches A.I. and ethics, agrees. “Deepfakes, like all hoaxes, are Rorschach tests,” he says. “They reveal what we’re obsessed by at a particular moment.”

So should we really be concerned and, if so, what should we do about them? Hwang suggests technology itself might hold the answer: software-based “deepfake detectors” are advancing nearly as rapidly as deepfakes themselves, and Google and other companies, newly anxious about their reputation for trustworthiness, are only too keen to invest. Given how difficult it is to make deepfake video look convincing (at least so far), most humans already do a decent job of spotting when video isn’t real: slow blink rates, facial discoloration, and out-of-sync mouth movements are giveaways.
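
To make one of those giveaways concrete, here is a minimal sketch of a blink-rate check in Python. It assumes a per-frame “eye openness” score has already been extracted from the footage (for example, an eye-aspect ratio computed from facial landmarks); the threshold values are illustrative, not taken from any particular detector.

```python
# Illustrative blink-rate check, one of the simple cues viewers and early
# automated detectors rely on. Assumes per-frame "eye openness" scores have
# already been extracted (e.g. an eye-aspect ratio from facial landmarks).

from typing import List


def count_blinks(eye_openness: List[float], closed_threshold: float = 0.2) -> int:
    """Count transitions from open to closed eyes across the frame sequence."""
    blinks = 0
    currently_closed = False
    for score in eye_openness:
        if score < closed_threshold and not currently_closed:
            blinks += 1
            currently_closed = True
        elif score >= closed_threshold:
            currently_closed = False
    return blinks


def blink_rate_suspicious(eye_openness: List[float], fps: float,
                          min_blinks_per_minute: float = 6.0) -> bool:
    """Flag footage whose subject blinks far less often than people normally
    do (roughly 15 to 20 times a minute); the cutoff here is illustrative."""
    minutes = len(eye_openness) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute


if __name__ == "__main__":
    # 30 seconds of 25 fps footage in which the eyes never fully close.
    scores = [0.35] * (25 * 30)
    print(blink_rate_suspicious(scores, fps=25))  # True: no blinks at all
```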

Legislation is another possibility, though Hwang says it would be hard to frame: “How do you distinguish it from other kinds of manipulation, such as Photoshop or special effects in movies? I’m not sure we need the Great Deepfake Bill of 2020, put it like that.”

Hitchcock, while acknowledging the threat posed by deepfakes, isn’t convinced by the recent moral panic they’ve provoked. He points, again, to the Lumière brothers. “Only a few years after the birth of cinema, people were watching movies with no problems, and had grown used to sorting fictional stories from, say, documentaries. As a culture, we do adjust.”

He adds that for historians, it’s not as if there’s any such thing as an objective source, anyway: “You’re sifting things continually, trying to work out what evidence to trust. That’s a really important part of what historians do.”

In reality, suggests Hwang, the best solution might be to step back, and ask why we’re believing these things in the first place — if believing them we genuinely are. “Rather than waiting for a machine to tell us what’s true or not true, we need to trust ourselves,” he says. “Computers can’t help us with that.”

A critic and journalist based in London, Andrew covers culture for the New York Times, the Guardian, the Financial Times and the New Yorker. andrewjdickson.com
