General Intelligence
Tweaking a Tiny Pixel on Your Selfies Can Defeat Facial Recognition
And other A.I. research papers of the week

If this week taught us anything, it’s that the brilliant shine around A.I. may finally be starting to fade. Investors are starting to realize that the reality of running an A.I. business is more than “train an algorithm, scale infinitely, profit.”
VCs at Andreessen Horowitz wrote this week that A.I. startups could be tougher bets than typical software investments: in addition to the heavy cloud computing spending most tech startups face, A.I. startups carry the hidden cost of labelling data, the human task of looking at a picture, determining it's a picture of a bird, and typing the word "bird." Oh yeah, and there are the privacy concerns, too (see Alexa, Google Assistant, etc.).
And even once the initial algorithm is off the ground, exporting it to other locations or countries can mean labelling data all over again in other languages.
Maybe this means the end of free money for anyone with “.ai” in their URL.
That said, there’s still plenty of money in algorithms. I’d be remiss if I didn’t invite you to check out my new investigation into NEC, the Japanese tech giant that quietly built itself into a facial recognition powerhouse. The company has close relationships with police across the world, with more than 1,000 biometrics “public safety” contracts. If you want to know about the company that’s more likely to sell your local PD facial recognition than Amazon, check it out.
Now, here are some of the most interesting A.I. papers from this week:
Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models
To guard against facial recognition like Clearview AI, researchers describe a way to “cloak” selfies by making slight, imperceptible changes to the images. These changes are enough to throw off facial recognition algorithms, but not enough to be noticed by the human eye.
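The core idea, perturbing pixels by a tiny, bounded amount so a recognizer's feature representation shifts while the image looks unchanged, can be sketched in a few lines. To be clear, this is not the Fawkes code: the real system attacks deep feature extractors with an iterative optimization, while the sketch below uses a made-up linear "embedding" and a single signed gradient step purely to illustrate the pixel-budget-versus-feature-shift trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "face embedding" standing in for a real recognition model
# (illustrative only; real systems use deep feature extractors).
W = rng.normal(size=(128, 64 * 64)) / 64.0

def embed(img):
    return W @ img.ravel()

selfie = rng.uniform(0.0, 1.0, size=(64, 64))   # your photo
decoy = rng.uniform(0.0, 1.0, size=(64, 64))    # a different face

# Nudge the selfie's features toward the decoy's, while capping every
# pixel change at an imperceptibility budget epsilon.
epsilon = 0.01
grad = (W.T @ (embed(selfie) - embed(decoy))).reshape(selfie.shape)
cloak = -epsilon * np.sign(grad)                # small signed step
cloaked = np.clip(selfie + cloak, 0.0, 1.0)

before = np.linalg.norm(embed(selfie) - embed(decoy))
after = np.linalg.norm(embed(cloaked) - embed(decoy))

print(np.abs(cloaked - selfie).max())  # at most epsilon: invisible to the eye
print(after < before)                  # but the features moved toward the decoy
```

The clip keeps the cloak within one percent of the pixel range, yet the embedding the recognizer actually compares has measurably drifted toward a different identity, which is the property that throws off matching.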