OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.
Modern artificial intelligence relies on algorithms that process millions of examples of images or text. A picture of a bird in an A.I. dataset would be manually tagged “bird” so that the algorithm learns to associate aspects of that image with the category “bird.”
Tagging this data by hand, at the scale of millions of examples, is time-consuming and mind-numbingly monotonous.
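At its simplest, the hand-tagged data described above is just a long list of example-and-label pairs. A minimal sketch of what that structure looks like (the filenames, tags, and helper function here are hypothetical, not drawn from any particular dataset or library):

```python
# Illustrative sketch: hand-labeled examples as a list of
# image/tag pairs, the raw material of supervised learning.
labeled_examples = [
    {"image": "photo_001.jpg", "tag": "bird"},
    {"image": "photo_002.jpg", "tag": "cat"},
    {"image": "photo_003.jpg", "tag": "bird"},
]

def count_tags(examples):
    """Count how many examples carry each human-applied tag."""
    counts = {}
    for ex in examples:
        counts[ex["tag"]] = counts.get(ex["tag"], 0) + 1
    return counts

print(count_tags(labeled_examples))  # {'bird': 2, 'cat': 1}
```

At real scale, a dataset like this holds millions of such entries, each one tagged by a person, which is why the labeling work is so monotonous.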
When app stores and cloud hosting platforms banned Parler earlier this month after the self-described “free speech” social network failed to moderate calls for violence, they set a new precedent. Previously, the conventional wisdom was that developers bore the responsibility of policing an app’s community. After all, the developer is in the best position to know what its users need, what they’re up to, and how to build the specific moderation tools that work best for its community.
When a Facebook video of doctors sharing coronavirus misinformation reached 20 million people last week, you could almost predict the reaction. The company’s critics seemed shocked, pointing out it had made little progress since a similar video, Plandemic, went viral in May. They put forth the standard demand for better content moderation. But by then it was too late.
The cycle in which a tech company messes up, critics point it out, and it happens again has repeated itself for years. To end this frustrating dance, we need a new way of discussing these companies’ problems. We…