
Computer-Generated Faces Are Getting Real

Credit: Liyao Xie/Getty Images

They look like you or me: Off-white teeth, unkempt hair. Worry lines, wrinkles, unfashionable glasses. Overbites. They have familiar smiles, too: the kinds you find on passports and office ID cards. Intellectually, it’s possible to accept the fact that none of these people exist, that what you’re seeing when you look at these headshots is a minor but notable advance in machine-learning technology courtesy of a graphics company called NVIDIA. But the old paradigm — that a person in a picture is someone who exists or at least did at one point — is tough to abandon. You have to work to dislodge it.

As recently as a few years ago, most A.I.-generated headshots looked like poorly stenciled police sketches, but things have changed. NVIDIA’s generative adversarial network — called StyleGAN — uses a deep learning technique called “style transfer,” which disentangles high-level attributes (pose, hairstyle, face shape, eyeglasses) from stochastic variation (freckles, hair, stubble, pores). “Lifelike” seems inadequate to describe the results: These completely made-up non-people are often indistinguishable from the real thing.
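For readers who want a concrete picture of how that disentanglement works, here is a minimal, hedged PyTorch sketch of the mechanism the StyleGAN paper describes: a mapping network turns a random latent vector into a “style” that modulates each layer of the generator (steering high-level attributes), while per-layer noise injects fine stochastic detail. This is an illustrative toy, not NVIDIA’s released code; every name and layer size below is invented for clarity.

```python
# Conceptual sketch only — NOT NVIDIA's released StyleGAN code. It illustrates
# the idea the paper describes: a mapping network converts a random latent z
# into a "style" vector w that modulates each generator layer (high-level
# attributes), while per-layer noise injects fine stochastic detail.
import torch
import torch.nn as nn

class TinyStyleGenerator(nn.Module):
    def __init__(self, latent_dim=64, channels=32):
        super().__init__()
        # Mapping network: latent z -> intermediate style vector w
        self.mapping = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Learned constant input, as in the StyleGAN paper
        self.const = nn.Parameter(torch.randn(1, channels, 4, 4))
        # Affine transform: style w -> per-channel (scale, bias) modulation
        self.to_style = nn.Linear(latent_dim, channels * 2)
        self.upsample = nn.Upsample(scale_factor=4)      # 4x4 -> 16x16
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Learned strength of per-pixel noise (freckles, stubble, stray hairs)
        self.noise_weight = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, z):
        w = self.mapping(z)                              # high-level "style"
        x = self.const.expand(z.shape[0], -1, -1, -1)
        x = self.conv(self.upsample(x))
        x = x + self.noise_weight * torch.randn_like(x)  # stochastic variation
        scale, bias = self.to_style(w).chunk(2, dim=1)   # AdaIN-like modulation
        x = x * (1 + scale[..., None, None]) + bias[..., None, None]
        return torch.tanh(self.to_rgb(x))

gen = TinyStyleGenerator()
faces = gen(torch.randn(4, 64))   # four tiny "faces" from random latents
print(faces.shape)                # torch.Size([4, 3, 16, 16])
```

In the real system, separate styles are fed to coarse, middle, and fine layers, which is why attributes like pose can be mixed and matched independently of texture-level detail.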

The natural response is awe — followed quickly by premonitions of catastrophe. It is not remotely difficult to envision how these GANs might be abused.

When NVIDIA published its StyleGAN paper late last year, concerns about its potential applications were tempered by its costs. To generate its faces, the NVIDIA team needed eight prohibitively expensive graphics processors and a full week of A.I. training. Recently, though, they posted StyleGAN’s code to GitHub, and last month, a software developer named Philip Wang created the website thispersondoesnotexist.com, which uses StyleGAN to present a (literally) new face with every browser refresh. As Wang told Motherboard, “Most people do not understand how good AIs will be at synthesizing images in the future.”

NVIDIA declined to comment for this piece.

It’s been a little over a year since deepfakes exploded into mainstream consciousness and sent ethicists and op-ed writers into a panic. That panic has since subsided into grim resignation: While projected death dates vary, many experts — though not all — agree that our sense of a shared, verifiable reality may be on its way to obsolescence.

Hao Li, director of the Vision and Graphics Lab at the University of Southern California, tells OneZero he gives it “maybe one or two years” before the kinds of realistic fakes generated by NVIDIA start persuasively moving and talking. Eventually, he says, these tools will be “accessible to anyone — and who knows what purpose they’ll put them to.”

Besides the fact that the stock-photo model industry’s days may be numbered — sorry, Technology Review stock hipster — the mass proliferation of unsourceable but utterly convincing avatars will deepen the crisis in online dating, where the tech illiterate are routinely drained of savings by savvy scammers who use fake profile images. But what’s more concerning than a boost in catfishing is the possibility that StyleGAN could also hinder bot-detection efforts.

Aviv Ovadya, the Thoughtful Technology Project founder whose work focuses on understanding and mitigating the “misinformation ecosystem,” tells OneZero that “if you’re trying to identify whether some accounts are bots, one of the common strategies is to reverse-image search [their avatars] because it’s usually an image that’s been used by someone else.” But StyleGAN effectively eliminates this option.
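For illustration, the duplicate-avatar check Ovadya describes can be approximated with perceptual hashing: compare an account’s profile picture against a corpus of previously seen images and flag near-matches. The sketch below uses the Pillow and imagehash libraries; the file paths and threshold are placeholder assumptions, and a StyleGAN-generated face would sail past this kind of check precisely because it matches nothing that came before.

```python
# Rough sketch of reverse-image-style bot detection via perceptual hashing.
# Paths and the distance threshold are placeholders, not a real pipeline.
from PIL import Image
import imagehash

def is_reused_avatar(avatar_path, known_image_paths, max_distance=5):
    """Return True if the avatar is near-identical to any previously seen image."""
    avatar_hash = imagehash.phash(Image.open(avatar_path))
    for path in known_image_paths:
        if avatar_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return True
    return False

# Example (hypothetical files): a stolen stock photo would match; a freshly
# GAN-generated face would not, which is exactly the problem.
# is_reused_avatar("suspect_avatar.png", ["stock_photo_1.jpg", "stock_photo_2.jpg"])
```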

A recent micro-controversy illustrates some of the perils. Late last month, Twitter user ElleOhHell — a semi-popular joke writer — revealed that he was a man, though his profile had long presented him as a woman. He’d spent the preceding half-decade posing as the woman in his profile pictures, who turned out to be his wife. She was in on it, he says, so he could use her pictures without fear of a callout or a damning reverse-image search — maybe the only things stopping people with far worse intentions than ElleOhHell from masking their identities or fooling people with constructed ones.

No one seems to have been harmed by ElleOhHell’s Twitter deceptions. Yet even with mostly benign intentions, things can get dicey. Victoria Schwartz, a law professor at Pepperdine University who researches virtual influencers like the computer-generated Instagram celebrity Lil Miquela, brings up the sadly current issue of blackface.

“Do we feel differently about it when it’s a white man creating a character that is a black woman or a Latina woman?” she tells OneZero. “Ethically, we start to have some concerns.”

And as Ramesh Srinivasan, director of the University of California at Los Angeles’ Digital Culture Lab, points out, this technology could entrench a culturally narrow idea of what a person is supposed to look like. “What’s considered human is always going to represent the biases of the world outside that technology,” Srinivasan noted.

Not every critic is ready to ring the alarm bells, however. “There’s no doubt that fake content can be weaponized, and we’re seeing that with deepfakes,” says Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “But what’s the difference between a realistic fake and a fictional character in a movie?”

For much of the last decade, Mark Zuckerberg and company have worked tirelessly to de-fictionalize the internet: the insistence on real names, the retroactively disastrous integration with virtually every website and app, and increasingly pervasive and precise facial recognition. The powers that be have won that battle. Very few people at this point view their social media profiles as anything other than an extension of their actual self. An easy-to-use, accessible spin on StyleGAN might lead to some interesting conceptual art projects (mock yearbooks filled with people who don’t exist or elaborate, interconnected social media webs of unreal friends, families, and pets), but it probably wouldn’t lead the average Facebook user to toy with a new identity.

But the world of persuasively realistic moving and talking fakes that USC’s Li says are a year or two away — where someone could inhabit an infinite number of GAN-generated bodysuits — could conceivably loosen things up a bit. Maybe it would bring back some of the polymorphic, identity-dissolving spirit of the internet-that-was-and-could-have-been, the one where no one knows you’re a dog. This was part of the promise and appeal of Second Life and virtual reality: the chance to be a different person or persons, to ping between personas and expand your sense of the possible.

Last month, NVIDIA’s head of machine learning Anima Anandkumar came down on one side of a heated debate in the world of A.I. The dispute was incited by the decision of OpenAI, Elon Musk’s A.I. nonprofit, to hold back an innovative text-generation tool it had developed, citing concerns that bad actors might use it to automate fake news.

One side of this debate holds, roughly, that all research should be released because it’s ultimately good for the advancement of global well-being. The other side says let’s take a second and make sure we’re not inadvertently empowering malevolent actors or hastening a digital apocalypse. Anandkumar is on the former side, albeit with a reasonable-seeming defense: Withholding research results could, she told The Verge, “put academics at a disadvantage.”

Will StyleGAN, adapted and expanded, put thousands of stock-photo models on welfare? Abet the spread of fake news about Hillary Clinton’s toddler brothel? Break the hearts of scores of credulous widowers and shut-ins? We can’t know. The problem, according to Ovadya, is that NVIDIA doesn’t know and doesn’t seem compelled to find out either.

“In an ideal world,” he says, “there would be some connection between the people who could be harmed by this research and the people who are moving this research forward.” He added that he doesn’t think we have “the structures in place, as a society and a research community, to facilitate that conversation” or to make an “informed decision about what to release and what not to release.”

It’s left to us — those poor souls who not only look real but also, tragically, are real — to sort out the consequences.
