Twitter Updates Their Rules of Conduct to Protect Users from Dehumanization

Twitter has updated its rules relating to hateful conduct in an attempt to create a more inclusive environment for its users.

Our rules continually evolve to help keep people safe. Today, we’re expanding our hateful conduct policy to address language that dehumanizes people on the basis of race, ethnicity, or national origin.

— Twitter Safety (@TwitterSafety) December 2, 2020

The company announced the update in a tweet from its Twitter Safety account. The update focuses on dehumanizing language aimed at large groups of people.

"Our rules continually evolve to help keep people safe. Today, we're expanding our hateful conduct policy to address language that dehumanizes people on the basis of race, ethnicity, or national origin," the tweet said. A follow-up tweet noted that tweets dehumanizing people on the "basis of religion, caste, age, disability, or disease" were already prohibited.

This policy already prohibits language that dehumanizes people on the basis of religion, caste, age, disability, or disease.

Research shows that dehumanizing speech can lead to real-world harm, and we want to ensure that more people—globally—are protected.

— Twitter Safety (@TwitterSafety) December 2, 2020

Examples of tweets that violate the new policy include messages like, "There are too many [national origin/race/ethnicity] maggots in our country, and they need to leave" or "All [national origin] are cockroaches who live off of welfare benefits and need to be taken away."

In a blog post, originally published in July 2019, the company said that it took public feedback, expert opinions, and internal ideas into consideration when developing its hateful conduct policy.

As part of its update, Twitter said that it consulted a group of third-party experts from around the globe. It said that the experts "helped [Twitter] better understand the challenges we would face," as well as answer questions like: "How can—or should—we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?" and "How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?"

Twitter's blog explained that tweets that violate this policy will be removed when reported. "We will also continue to surface potentially violative content through proactive detection and automation," the post said.

While individual tweets will be deleted, repeat offenders may face harsher penalties. "If an account repeatedly breaks the Twitter Rules, we may temporarily lock or suspend the account," the blog explains. The blog also links to the help center's enforcement page, which explains that Twitter's "most severe enforcement action" is permanent suspension, under which suspended users cannot create new accounts, though they can appeal the suspension.

The post ended with links to studies from Nick Haslam and Michelle Stratemeyer, as well as Dr. Susan Benesch, about the connection between dehumanizing language and the harm it can cause offline.

The update is the latest move to expand Twitter's policies on hateful conduct. The company's policy already prohibited violent threats.

The help center's page on hateful conduct policy warns against violent threats against individuals or groups, offensive language, hateful imagery, and more.

In this photo illustration, a Twitter logo is displayed on a mobile phone on August 10, 2020, in Arlington, Virginia. OLIVIER DOULIERY/AFP/Getty

Update 12/2/20 2:54 p.m. EST: An earlier version of this story used an older example of a tweet violating the policy on hateful conduct.

Correction: The headline of this story has been updated to more accurately reflect the information.