Instagram: New Anti-Bullying Filter Will Purge Threats, Personal Attacks

Instagram announced on Tuesday that it has expanded its machine learning technology to automatically remove comments containing toxic bullying content. The update has rolled out to its global user base and is switched on by default.

The Facebook-owned photo-sharing app, which has more than 500 million daily active users, said in a blog post that the move will "filter bullying comments intended to harass or upset people in the Instagram community." In a separate update, it confirmed that a video chat feature is now being tested.

Kevin Systrom, who co-founded Instagram with software engineer Mike Krieger, said the anti-bullying technology builds on the offensive comment filter first introduced in June last year to root out divisive comments.

"This new filter hides comments containing attacks on a person's appearance or character, as well as threats to a person's well-being or health," Systrom wrote. "The new filter will also alert us to repeated problems so we can take action." He stressed the feature can be disabled in the Comment Controls center in the app.

Instagram will expand its policies to better protect young public figures, it said. "Since Mike and I founded Instagram, it's been our goal to make it a safe place for self-expression and to foster kindness within the community," Systrom said. "This update is just the next step in our mission to deliver on that promise."

Users can also expect updates to Stories and Explore. Soon, content in the Explore tab will be organized into topics that highlight specific interests, while new face filters, text styles and stickers are on the horizon. Instagram said its video chat capabilities would roll out "in the coming weeks."

Instagram's video chat is in testing now and will roll out globally soon. (Instagram Press)