Elon Musk Might Be the Only Person Who Can Fix Twitter. Here's How | Opinion

Twitter has major problems—though not the ones hysterical liberals have been braying about. There are problems even those not on the Left can easily see: It isn't profitable. It has a poor track record of innovation. User growth has stagnated. And many of those users who remain don't trust the platform's content moderation practices, which have a documented liberal bias.

But if anyone can fix Twitter, it might well be the innovator who has managed to grab significant market share in two industries that are notoriously difficult for newcomers to break into: space and automobiles.

Elon Musk says he wants to bring "free speech" to Twitter because he is "against censorship that goes far beyond the law." It's an admirable goal, but one that will prove hard to implement in practice. Twitter is a global company with assets, staff, and offices in 23 countries. It will need to comply with national laws, and transnational disparities in those laws frequently make content that is legal in one territory illegal in another.

After all, there is no universally accepted definition of free speech. And Twitter, like all social media platforms, has often struggled on this front, especially given how quickly its teams must review problematic material before it goes viral. While it has clearly made mistakes, it is up against a real challenge: social media platforms face opposing demands. They must respect national laws and uphold human rights like the right to privacy or security, while also minimizing restrictions on freedom of expression.

It's a tall order. And unlike other social media platforms, Twitter seems to be making a good-faith effort to protect its users. Ironically, the same teams that Musk believes stifle free speech—the trust and safety teams who review flagged content and the legal counsel developing policy and reviewing conflicts with law—have routinely gone to court to advocate for Twitter users' free expression or right to anonymity. And it's been costly; it's much easier and cheaper just to overzealously delete reported content, like YouTube does.

If Musk does buy Twitter, he needs to sort the good from the bad. After all, no one wants to be on a social media site with no content moderation, being exposed to spam and pornography day in and day out. What we want is content moderation that suits our preferences.

And that's key to Musk's success. If he wants to fix Twitter, he should put the power in the hands of us, the users, to choose our own independent, third-party moderator to filter our individual Twitter experiences.

If there were a marketplace of content moderation providers, Twitter could get out of the business of being perceived as censoring content and let its users decide what content they see or don't see.

It's a concept inspired by an idea Richard S. Whitt, a long-time Google executive who is now a fellow with the Mozilla Foundation, proposed in an article in the Colorado Technology Law Journal last year. Whitt suggested that we think of our personal data as "digital lifestreams" which can be managed by "communities as stewards under commons and fiduciary law-based governance" mechanisms.

Fiduciary law is the doctrine of unequal relationships: doctors have a duty of care to their patients; attorneys to their clients; teachers to their students. Its principles are universally understood, existing in canon law, Roman law, classical Islamic law, classical Jewish law, European civil systems, English law, Chinese law, Indian law, and Japanese law.

We should apply this centuries-old concept of fiduciaries to the content moderation practices of Twitter.

When we sign up for Twitter, we should be able to choose an entity that we trust to filter our social media feed, or leave it uncensored, in line with our expectations. This entity would have a legal duty to promote our interests, and if it fails us, we could simply switch content moderation providers with the touch of a button or have access to real legal recourse.
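To make the idea concrete, here is a minimal sketch, in TypeScript, of what a pluggable, user-chosen moderation layer could look like. The interface, provider names, and example filters are hypothetical illustrations of the concept, not a description of Twitter's actual systems or of any real provider's API.

```typescript
// Hypothetical sketch: users pick a third-party moderation provider,
// and the platform applies only that provider's decisions to their feed.

interface Tweet {
  id: string;
  authorId: string;
  text: string;
}

// Each third-party provider implements a single decision function.
interface ModerationProvider {
  readonly name: string;
  // Returns true if the tweet should be shown to this user, false to hide it.
  allow(tweet: Tweet): boolean;
}

// Example provider: hides tweets containing terms the user opted to filter.
class KeywordFilterProvider implements ModerationProvider {
  readonly name = "keyword-filter";
  constructor(private readonly blockedTerms: string[]) {}
  allow(tweet: Tweet): boolean {
    const text = tweet.text.toLowerCase();
    return !this.blockedTerms.some((term) => text.includes(term.toLowerCase()));
  }
}

// Example provider: shows everything (a "no moderation" choice).
class PassThroughProvider implements ModerationProvider {
  readonly name = "pass-through";
  allow(_tweet: Tweet): boolean {
    return true;
  }
}

// The platform applies whichever provider the user selected;
// switching providers is just swapping this reference.
function renderTimeline(timeline: Tweet[], provider: ModerationProvider): Tweet[] {
  return timeline.filter((tweet) => provider.allow(tweet));
}

// Usage: two users see the same raw timeline through different moderators.
const timeline: Tweet[] = [
  { id: "1", authorId: "a", text: "Buy cheap pills now!!!" },
  { id: "2", authorId: "b", text: "SpaceX launch scheduled for Friday." },
];

console.log(renderTimeline(timeline, new KeywordFilterProvider(["cheap pills"])));
console.log(renderTimeline(timeline, new PassThroughProvider()));
```

The point of the sketch is that the platform itself stays neutral: it hosts the content and the marketplace of providers, while the filtering decisions, and the accountability for them, sit with the moderator each user has chosen.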

This would be a game changer for Twitter. Should Musk buy Twitter and implement this suggestion, he won't need to fear competition in the future. After all, who would leave Twitter for Parler or Truth Social, both of which have barely any users, when they could remain on Twitter, benefiting from the network effects of its large number of contributors, while enjoying an experience tailored to their individual cultural sensitivities?

The public writ large already doesn't believe that what it sees on Twitter is moderated fairly. In an age characterized by an absence of consensus on even basic tenets of truth, maybe we can all agree on this: We should each be in charge of deciding who gets—or doesn't get—to fairly and responsibly moderate the material we see on Twitter.

Ayden Férdeline is a public interest technologist and a former technology policy fellow with the Mozilla Foundation. He researches how digital policy-making processes around the world can become more representative and inclusive. He is based in Berlin. Follow him on Twitter @ferdeline.

The views in this article are the writer's own.