Facebook Sorted Countries Into 'Tiers' to Decide When to Intervene in Election Issues

Facebook allegedly sorted countries into "tiers" to determine which needed the most protection in the company's attempt to limit misinformation, hate speech and violence.

Documents obtained by The Verge reportedly showed that the social media giant organized countries into four categories.

For countries in the highest tiers, Facebook offered an array of services, including translating its standards into the country's language and building AI classifiers to detect hate speech and misinformation in that language. Teams of staff would regularly analyze viral content and respond to hoaxes and incitement to violence, according to The Verge.

But in other countries, those safeguards did not exist. In countries designated high-risk, such as Ethiopia, Myanmar and Pakistan, Facebook did not even have misinformation classifiers, The Verge reported.

"Tier Zero" was the top priority group and included the United States, Brazil and India. Facebook allegedly created war rooms and dashboards to analyze the network. They would let election officials know of any problems, The Verge reported.

The next category, tier one, included Germany, Indonesia, Iran, Israel and Italy. These countries were allegedly given similar resources during election seasons but fewer resources the rest of the time, The Verge reported.

Tier two included 22 countries that did not get war rooms. Every other country fell into tier three. In those countries, Facebook reviewed election-related material if moderators flagged it but would otherwise not intervene, according to The Verge.

A Facebook company spokesperson said in a statement to Newsweek that the company has dedicated teams working to stop abuse on the platform in countries where there is a heightened risk of conflict and violence.

The statement said Facebook has global teams with native speakers reviewing content in more than 70 languages, as well as experts on humanitarian and human rights matters who have made progress on issues such as hate speech.

"We know these challenges are real and we are proud of the work we've done," the statement said.

Facebook's director of human rights, Miranda Sissons, told The Verge that the company's practices reflected those recommended by the United Nations.

The company determines which countries face the highest risk of harm by looking at long-term conditions and historical context, how heavily its products are used in a country and current events there, according to a Facebook news release from Saturday.

In the news release, Facebook touted hiring more people with language, country and subject-matter expertise in the past few years, specifically pointing to hires with experience in Myanmar and Ethiopia.

Facebook has come under fire in recent weeks over allegations that it does not do enough to prevent the spread of misinformation and protect the well-being of its users.

Whistleblower Frances Haugen accused the company of placing profit over the public good. She has said the problem is "substantially worse" at Facebook than at other social media platforms. Among her allegations, she has said Facebook scrapped its civic integrity team after the 2020 presidential election.

In testimony before U.K. lawmakers on Monday, she said Facebook is "unquestionably" making hate worse.

Facebook allegedly sorted countries into tiers in its efforts to combat misinformation and hate speech. Here, the company’s logo is seen on a TV screen in Paris, France, in September 2019. Chesnot/Getty Images