Computer-Generated Faces Are Getting Real

Too real, maybe

Daniel Kolitz
OneZero


Credit: Liyao Xie/Getty Images

They look like you or me: Off-white teeth, unkempt hair. Worry lines, wrinkles, unfashionable glasses. Overbites. They have familiar smiles, too, the kinds you find on passports and office ID cards. Intellectually, it’s possible to accept that none of these people exist, that what you’re seeing when you look at these headshots is a minor but notable advance in machine-learning technology courtesy of a graphics company called NVIDIA. But the old paradigm, that a person in a picture is someone who exists or at least did at one point, is tough to abandon. You have to work to dislodge it.

As recently as a few years ago, most A.I.-generated headshots looked like poorly stenciled police sketches, but things have changed. NVIDIA’s generative adversarial network, called StyleGAN, uses a deep-learning technique called “style transfer,” which disentangles high-level attributes (pose, hair style, face shape, eyeglasses) from stochastic variation (freckles, hair, stubble, pores). “Lifelike” seems inadequate to describe the results: these completely made-up non-people are often indistinguishable from the real thing.
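For the curious, here is a rough sketch, in PyTorch, of the idea behind a style-based generator. It is not NVIDIA’s code, and the names (MappingNetwork, StyledBlock) and layer sizes are invented for illustration: a small mapping network turns a random latent vector into a “style” vector that steers the high-level attributes, while freshly sampled noise at each layer supplies the fine, stochastic detail.

```python
# Minimal, illustrative sketch (not NVIDIA's implementation) of the
# style-based generator idea: styles control high-level attributes,
# per-layer noise supplies stochastic variation.
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    """Maps a latent vector z to a style vector w via a small MLP."""
    def __init__(self, z_dim=512, w_dim=512, layers=4):
        super().__init__()
        blocks, dim = [], z_dim
        for _ in range(layers):
            blocks += [nn.Linear(dim, w_dim), nn.LeakyReLU(0.2)]
            dim = w_dim
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)


class StyledBlock(nn.Module):
    """One synthesis block: the style w rescales feature maps, and fresh
    noise is added to inject fine, random detail."""
    def __init__(self, channels, w_dim=512):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale = nn.Linear(w_dim, channels)  # style -> per-channel scale
        self.to_bias = nn.Linear(w_dim, channels)   # style -> per-channel bias
        self.noise_weight = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, w):
        x = self.conv(x)
        # Stochastic variation: new noise on every call, so the same style
        # keeps the coarse look while fine texture varies.
        x = x + self.noise_weight * torch.randn_like(x)
        # Style modulation: w decides the high-level appearance.
        scale = self.to_scale(w).unsqueeze(-1).unsqueeze(-1)
        bias = self.to_bias(w).unsqueeze(-1).unsqueeze(-1)
        return self.act(x * (1 + scale) + bias)


if __name__ == "__main__":
    mapping = MappingNetwork()
    block = StyledBlock(channels=64)
    z = torch.randn(1, 512)             # random latent code
    w = mapping(z)                      # style vector
    feats = torch.randn(1, 64, 16, 16)  # stand-in for the learned constant input
    out = block(feats, w)
    print(out.shape)                    # torch.Size([1, 64, 16, 16])
```

In a full model, reusing the same style vector with different noise would keep a face’s identity and pose while the fine texture shifts, which is roughly the separation the paragraph above describes.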

The natural response is awe — followed quickly by premonitions of catastrophe. It is not remotely difficult to envision how these GANs might be abused.
