Facebook Insists No Security ‘Backdoor’ Is Planned for WhatsApp
The company is fighting back against rumors that it would scan messages on users’ phones prior to encryption
Billions of people use the messaging tool WhatsApp, which added end-to-end encryption for every form of communication available on its platform back in 2016. This ensures that conversations between users and their contacts — whether they occur via text or voice calls — are private, inaccessible even to the company itself.
But several recent posts published to Forbes’ blogging platform call WhatsApp’s future security into question. The posts, which were written by contributor Kalev Leetaru, allege that Facebook, WhatsApp’s parent company, plans to detect abuse by implementing a feature to scan messages directly on people’s phones before they are encrypted. The posts gained significant attention: A blog post by technologist Bruce Schneier rehashing one of the Forbes posts has the headline “Facebook Plans on Backdooring WhatsApp.”
It is a claim Facebook unequivocally denies.
“We haven’t added a backdoor to WhatsApp,” Will Cathcart, WhatsApp’s vice president of product management, wrote in a statement provided to OneZero and previously posted to Hacker News. “To be crystal clear, we have not done this, have zero plans to do so, and if we ever did, it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise, which is why we are opposed to it.”
WhatsApp is one of the most scrutinized apps in the world, a Facebook spokesman told OneZero in a phone call, adding that any kind of backdoor would be immediately obvious to the security community. Many security experts examine WhatsApp on a regular basis, he added.
Although the app is not open-source, security researchers can download the Android application package (APK) and decompile it with third-party tools to recover readable Java code, or they can extract the binary from the iPhone version and examine it with disassemblers such as IDA Pro to understand how it works.
“I’m sure people are constantly looking at reverse engineering it,” says cryptographer Steve Weis, a fellow at the Aspen Tech Policy Hub and former software engineer at Facebook. “Generally you can assume that people are poking around the binaries.”
While it’s certainly possible for the maker of any end-to-end encrypted app to backdoor its own code, doing so without people being able to figure out what’s happening would be extremely difficult.
The accusation leveled against Facebook is that the company plans to embed content moderation and blacklist filtering algorithms directly onto users’ mobile devices, scanning Messenger and WhatsApp messages before they are encrypted and after they are decrypted. The post by Leetaru points to potential future scenarios in which the vast majority of phones would include this type of scanning, rendering encryption meaningless.
“Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communication clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once,” Leetaru writes, adding that this would “create a framework for governments to outsource their mass surveillance directly to social media companies.”
How did the rumor start? It has to do with the blogging platform itself and an unrelated presentation detailing potential ways to automate content moderation efforts on social platforms.
Forbes does not typically review blog posts by its contributors, who are not staff writers for the publication. The company did not immediately respond to a request for comment about this. (Disclosure: I was a Forbes contributor myself from July 2015 to January 2017.)
Though Leetaru originally stated in his post that a Facebook spokesperson declined to comment, Facebook tells OneZero this was not the case and that it gave Leetaru “background” information: context meant to inform an article without being quoted directly.
Reached via email, Leetaru stated that Facebook “did not dispute the characterization and pointed to [Facebook CEO Mark] Zuckerberg’s March blog post calling for precisely such filtering.” The post in question, “A Privacy-Focused Vision for Social Networking,” lists a plan for making Facebook more private by focusing on encrypted and ephemeral communication. Although the post states that Facebook might detect “patterns of activity or through other means, even when we can’t see the content of the messages” across apps, it does not specifically refer to client-side filtering of WhatsApp messages or private messaging. In other words, there’s no suggestion from Zuckerberg’s writing that a system is being developed to read user messages.
Following the references in Leetaru’s post led to another post of his about an alleged WhatsApp backdoor. That post linked to a video of a technical talk on Facebook’s developer site about the use of artificial intelligence to keep content that violates Facebook’s policies, such as hate speech, nudity, and pornography, off the network.
The moderation would be performed by content classifiers: machine learning models trained to recognize specific types of images and to reliably predict, for example, whether or not an image depicts violent content. A Facebook spokesperson said there’s no connection between this type of moderation and private messaging encryption.
“The article is completely off base,” said Weis of the Forbes post. The video being discussed was about filtering content before it’s posted to Facebook in the first place — the app could, for example, detect that an image is pornographic and simply prevent a user from uploading it to the News Feed. “It was never talking about WhatsApp.”
Granted, a user wishing to post whatever they’d like on social media might take issue with this kind of automated moderation on the client side. (Technically, moderation like this already occurs on Facebook’s servers once content is uploaded.) But the important distinction is that it does not represent a backdoor into your conversations on WhatsApp.
Further, Weis says that moderating content on people’s phones is actually a privacy win, if you’re concerned about material being stored on the social network’s servers. “Today if you post a picture that gets sent to Facebook, and then they run their content filtering, and it gets rejected, it gets taken down, but they still have it. In this case, your content will get filtered locally, before it ever gets sent over. So it reduces the amount of information that will be sent to Facebook in the first place.”
Although the Forbes piece raises concern that plaintext copies of moderated messages would be sent to Facebook, “that’s completely filling in the blanks,” Weis said. “Nobody is talking about doing this for WhatsApp, and even if they did, nobody is talking about sending the plaintext to the server.”