Big Tech's Art of Making Up Rules as It Goes Along | Opinion

On May 5, the Facebook Oversight Board published its decision on Donald Trump's account. The board wrote, "Facebook cannot make up the rules as it goes." Tech executives criticized this notion. "This is literally the opposite of what every company must do," tweeted former Twitter CEO Dick Costolo. "Platforms working to battle new misinformation campaigns, new threats, new abuse, MUST make up the rules as they go."

On June 4, Facebook published its response to the Oversight Board: Trump is suspended for two years. Why two years? Because it is "the highest penalty available under the new enforcement protocols." Facebook explained, "We regularly review our policies and processes in response to real-world events."

A few days beforehand, Facebook announced it would "no longer remove claims that COVID-19 is man-made." The company will continue "to work with health experts to keep pace with the evolving nature of the pandemic and regularly update our policies as new facts and trends emerge."

Facebook's announcements show that the company not only can make up the rules as it goes, but believes it should.

The platforms initially tried to avoid constantly policing users' speech, but as the internet got bigger, so did content moderation challenges. In 2008, Facebook's "community standards" were one page long and not very specific. According to a Facebook employee who worked on the Site Integrity Team, all that page basically said was "Nudity is bad. So is Hitler." Today, it is a document of about 50 pages (if you print it out). In bullet points and very specific if/then statements, it spells out a sort of First Amendment for the entire globe.

The platforms' data practices have evolved as well. The late 1990s were a simpler time for Google, which at first offered only 600 words to explain how it was collecting and using personal information. Over the past 20 years, that same privacy policy has been rewritten into a sprawling 4,000-word explanation.

The Atlantic's Alexis Madrigal addressed the policies' inconsistency in 2019:

This week, YouTube's CEO, Susan Wojcicki, tried to explain her company's actions at the Code Conference. She mentioned the word policies 14 times. "We need to have consistent policies," she said. "They need to be enforced in a consistent way. We have thousands of reviewers across the globe. We need to make sure that we're providing consistency." Of course, the policies are always changing and can be revisited at any time, and yet these inconsistent rules will be enforced consistently. It's a mess.

As the great Inigo Montoya said, "You keep using that word; I do not think it means what you think it means." If a company is constantly adapting its policies to new realities, the policies are not really consistent.

Nonetheless, for large tech companies, policies are a common excuse to trot out in every possible scandal.

This photograph taken on October 26, 2020 shows the logo of US social network Twitter displayed on the screen of a smartphone and a tablet in Toulouse, southern France. LIONEL BONAVENTURE/AFP via Getty Images

In November 2017, addressing the revelation of disturbing and exploitative videos aimed at children ("Elsagate"), YouTube said it "plans to continue to evolve its policies alongside the bad actors who will inevitably attempt to keep posting disturbing content."

When a CNN investigation in 2018 found that ads from more than 300 companies and organizations ran on YouTube channels promoting white nationalists, Nazis, pedophilia, conspiracy theories and North Korean propaganda, a YouTube spokeswoman said in a statement: "When we find that ads mistakenly ran against content that doesn't comply with our policies, we immediately remove those ads.... We are committed to working with our advertisers and getting this right."

Tech companies' ad hoc decision-making, necessary as it may be, is not sufficient when their policies are unclear, poorly explained or, even worse, not appropriately implemented over the years.

For example, BuzzFeed News revealed that people were putting bestiality thumbnails on children's videos. Its technology and business editor, John Paczkowski, shared his astonishment with me: "We talked to YouTube, which was, understandably, as horrified as we were when we found it. They were going to implement a lot of stuff to resolve this crazy issue. Eight months after that story ran, we went and checked, and there were more bestiality thumbnails. You would think that the company would be bending over backward to solve that problem since that is never a headline that you want to be associated with. But, we easily found those horrific things again."

YouTube's response? It has "worked to aggressively enforce our monetization policies.... We recognize there's more work to do, and we're committed to getting it right." Sound familiar?

Facebook came up with its own term to describe the never-ending challenge of changing its rules: arms race. In 2018, two top Facebook executives stated, "We face determined, well-funded adversaries who will never give up and are constantly changing tactics. It's an arms race, and we need to constantly improve too." A month later, Mark Zuckerberg said, "While we've made steady progress, we face sophisticated, well-funded adversaries. They won't give up, and they will keep evolving. We need to constantly improve and stay one step ahead."

Revelations regarding Russia's election meddling received the same responses. "Staying ahead of those who try to misuse our service is an ongoing effort led by our security and integrity teams, and we recognize this work will never be done," said Facebook. Twitter promised to continue its efforts to protect "against bad actors and networks of malicious automation and manipulation" and added, "our work on these issues will never be done."

The above responses are from 2017. Facebook's latest response to the Oversight Board, in June 2021, included this conclusion: "Our work to improve Facebook is never complete, and we continually review our policies and practices in the face of evolving threats, changing tactics by malicious actors, and new situations in the world."

The repetition has become absurd. Those of us who chronicle the companies' statements live in a tech-response "Groundhog Day." Time and again, companies leave users to navigate an ever-expanding maze alone. These companies are innovative when it comes to their products. It's about time for their responses to be more creative as well.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication.

The views expressed in this article are the writer's own.