Messenger Privacy: Facebook Is Scanning All Your Personal Chats, Here's Why

Facebook CEO Mark Zuckerberg confirmed this week that the company has systems in place to automatically scan personal chats sent via its Messenger app, fueling fears that the social network can snoop on its users' private conversations.

As with other online chat and email services, Facebook uses automated technology that can search for malware, child abuse images and other content deemed to break its internal policies. It scans all images and links sent via Messenger and, if flagged, its moderation team will then read the chat, Bloomberg reported.

“On Messenger, when you send a photo, our automated systems scan it using photo-matching technology to detect known child exploitation imagery or when you send a link, we scan it for malware or viruses,” a Facebook spokesperson said, adding that the automated tools were designed to “rapidly stop abusive behavior.”

Facebook logos are displayed in Vertou, France, in December 2016. Facebook confirmed this week that it has systems in place to scan personal chats sent via its Messenger app automatically. LOIC VENANCE/AFP/Getty Images

Facebook stressed it does not use the content of chats for advertising. The Messenger app has the option for encryption, but it is switched off by default. WhatsApp, which is owned by Facebook, comes bundled with end-to-end encryption as standard, meaning messages are kept secure between sender and receiver.

The chat-scanning process came to light following an interview with Zuckerberg, published by Vox on Monday. The CEO confirmed that Facebook staff had been able to stop "sensational messages" related to the conflict in Myanmar from being sent.

“People were trying to use our tools in order to incite real harm,” he said. “Now, in that case, our systems detect that that’s going on. We stop those messages from going through. But this is certainly something that we’re paying a lot of attention to.”

One piece of tech the platform uses is called PhotoDNA, co-created by Microsoft, which can “proactively” detect child abuse material and stop it from being uploaded. In some cases, Facebook said it can be used to curb the spread of revenge porn.

The admission came as executives from the California-based company were facing tough questions over the firm's data protection and privacy policies in the wake of the Cambridge Analytica scandal, involving the alleged abuse of roughly 87 million accounts. On Wednesday, Facebook confirmed that potentially all of its users had their data scraped.

The message-scanning function has similarities to the technology Google uses to automatically search Gmail for suspicious material. Last year, Google changed its terms so that it would no longer scan message content for advertising purposes, but it acknowledged it would still keep a watchful eye over emails to stop malware and illegal content.

On Friday, Facebook was thrown headfirst into another scandal after TechCrunch revealed that old chats sent by Zuckerberg had been purged from recipients' inboxes. Facebook said in a statement that the messages had been deleted for security reasons.

“After Sony Pictures’ emails were hacked in 2014 we made a number of changes to protect our executives’ communications,” a spokesperson told TechCrunch. “These included limiting the retention period for Mark’s messages in Messenger,” the spokesperson added. “We did so in full compliance with our legal obligations to preserve messages.”