Facebook Creates ‘Radioactive Images,’ and Other A.I. News From This Week
Plus an algorithm that lets drones automatically detect protests and religious celebrations.
Researchers publish hundreds of new A.I. research papers every week on arXiv, an online repository for scientific papers. Here are a few that we found interesting, and think you might, too.
Facebook researchers created a way to generate an invisible “radioactive” mark on images in a dataset, which gives future researchers a way to trace which data was used to train a specific model. That could be useful when certain companies *cough* Clearview AI *cough* claim to have a facial recognition database scraped from images on the web. More from Facebook here.
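The core idea is simple enough to sketch in a toy form: add a tiny perturbation along a secret direction to your images, and a model trained on them will pick up a statistically detectable alignment with that direction. This is a minimal plain-numpy illustration, not Facebook's actual method (which works in a network's feature space with a proper hypothesis test); the mark strength, dimensions, and logistic-regression trainer here are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256  # dimension of our toy "images" (flattened pixels)

# Secret "radioactive" mark: a fixed random unit direction.
u = rng.normal(size=d)
u /= np.linalg.norm(u)

# Toy two-class dataset: class means separated along a random axis.
mean1 = rng.normal(size=d) * 0.5
X0 = rng.normal(size=(500, d))
X1 = rng.normal(size=(500, d)) + mean1
y = np.concatenate([np.zeros(500), np.ones(500)])

def train_logreg(X, y, steps=500, lr=0.1):
    """Plain-numpy logistic regression; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = np.clip(X @ w, -30, 30)  # avoid exp overflow
        p = 1 / (1 + np.exp(-z))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Clean run: no mark anywhere in the training data.
w_clean = train_logreg(np.vstack([X0, X1]), y)

# Marked run: every class-1 training image carries the radioactive mark.
X1_marked = X1 + 2.0 * u
w_marked = train_logreg(np.vstack([X0, X1_marked]), y)

def alignment(w):
    """Cosine of the learned weights with the secret mark direction."""
    return float(w @ u / np.linalg.norm(w))

print(f"clean model alignment with mark:  {alignment(w_clean):+.3f}")
print(f"marked model alignment with mark: {alignment(w_marked):+.3f}")
```

The clean model's alignment hovers near zero, while the model trained on marked data leans noticeably toward the secret direction, so the data owner can tell their images were used without ever seeing the training pipeline.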
Deep learning takes a lot of computing power, especially when working with images. Intel researchers built an objectness mask generation (OMG) network to help predict where an object is likely to be, so compute power can be focused on a specific region of pixels rather than the whole image. And that is a LOL (Latency Optimizing Labor).
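As a rough illustration of the focus-then-compute idea (this is not Intel's OMG network: a cheap per-cell variance score stands in for the learned objectness predictor, and the grid size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 64x64 "image": flat background with one textured 16x16 object.
img = np.zeros((64, 64))
img[16:32, 16:32] = rng.normal(size=(16, 16))

def objectness_scores(image, cell=16):
    """Cheap objectness proxy: per-cell pixel variance on a coarse grid."""
    h, w = image.shape
    grid = image.reshape(h // cell, cell, w // cell, cell)
    return grid.var(axis=(1, 3))  # one score per grid cell

scores = objectness_scores(img)
iy, ix = np.unravel_index(scores.argmax(), scores.shape)

# Run the "expensive" model only on the winning cell, not the full image:
# 256 pixels of heavy compute instead of 4096.
crop = img[iy * 16:(iy + 1) * 16, ix * 16:(ix + 1) * 16]
print(f"best cell: ({iy}, {ix}), crop shape: {crop.shape}")
```

The expensive detector then sees a 16x16 crop rather than the whole 64x64 frame, which is the latency win the paper is after.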
“Our goal is to correctly associate a set of messages posted in a small radius around a given location with their corresponding location type, e.g., school, church, restaurant or museum.” This could be used by a company like Google to better tag its maps or, as the authors suggest, to identify illegal black-market day care facilities.
Drones get a bad rap, say the researchers behind a new IEEE paper. So in addition to building a new algorithm that detects post-flood and other disaster scenes to supercharge autonomous drones, they also decided to include the ability to detect protests and religious celebrations. So much for #TechWontBuildIt.
The world is in critical need of an algorithm that can count the number of people in a crowd, and this international effort has, luckily, furthered that crucial authoritarian technology.