British Politicians Call for Crackdown on Social Media Sites, Saying They Failed to Uphold a 'Duty of Care'

British politicians have said there should be greater regulation of social media companies, citing a failure to tackle COVID-19 misinformation since the pandemic began.

This follows calls from President Donald Trump in May to remove legal protections afforded to social media platforms after Twitter started fact checking and restricting his tweets. The president signed an executive order targeting the "immense" powers wielded by Twitter, Facebook, Instagram and YouTube.

Across the pond, British politicians are now calling for a new "online harms regulator" to oversee and help combat what the United Nations (UN) previously described as an "infodemic of misinformation" during the novel coronavirus health crisis. The report was released by the U.K. Parliament's Digital, Culture, Media and Sport (DCMS) Committee.

"The proliferation of dangerous claims about COVID-19 has been unstoppable," said Julian Knight MP, chair of the DCMS Committee. "Leaders of social media [firms] have failed to tackle the infodemic of misinformation. Evidence that tech companies were able to benefit from the monetization of false information and allowed others to do so is shocking.

"We need robust regulation to hold these companies to account. The coronavirus crisis has demonstrated that without due weight of the law, social media companies have no incentive to consider a duty of care to those who use their services."

According to the DCMS report, misinformation about COVID-19—which continues to spread in several U.S. states—was "allowed to spread virulently" across each of the social media platforms, fueling hoax treatments and conspiracy theories about 5G technology that later led to real-world attacks on engineers.

It accused tech firms of exploiting business models that "disincentivize" action against misinformation while letting "bad actors" monetize misleading content.

"The need to tackle online harms often runs at odds with the financial incentives underpinned by the business model of tech companies," the report said. "The role of algorithms in incentivising harmful content has been emphasised... consistently."

It noted that the more users engage with posts, the more platforms—and their algorithms—push similar content into feeds, which in turn aids data collection and ad targeting.

"We know that novelty and fear (along with anger and disgust) are factors which drive engagement with social media posts; that in turn pushes posts with these features further up users' [feeds]—this is one reason why false news can travel so fast," it said.

"This is opposite to the corporate social responsibility policies espoused by [the] tech companies... the more people engage with conspiracy theories and false news online, the more platforms are incentivised to continue surfacing similar content."

In the U.S., pressure on social media companies ramped up in May after Trump's executive order against "censorship" threatened to revoke Section 230 of the Communications Decency Act, which protects the firms against legal action based on the content that is uploaded—and then spread—by their billions of users.

The May 28 order read: "Twitter, Facebook, Instagram and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete or disappear information; and to control what people see or do not see."

It was published as Twitter started fact checking—and later restricting—Trump's posts, including for "glorifying violence" and sharing false voting information. "[Twitter] targeted Republicans [and] the President of the United States," Trump wrote. "Section 230 should be revoked by Congress. Until then, it will be regulated!"

The same month, Facebook faced widespread criticism after deciding not to remove a post from the president that threatened the use of violence against citizens.

CEO Mark Zuckerberg later said Facebook would start labeling posts that violate its community guidelines but are deemed newsworthy. Previously, Facebook said it would notify users who came into contact with COVID-19 misinformation.

The report said: "Misinformation [is] often spread by influential and powerful people who seem to be held to a different standard to everyone else. Freedom of expression must be respected but it must also be recognised that currently tech companies place greater conditions on the public's freedom of expression than that of the powerful."

Facebook previously told Newsweek that it has taken a variety of steps to combat the spread of COVID-19 misinformation and policy-breaking material.

A spokesperson said: "We have removed hundreds of thousands of pieces of COVID-19 misinformation that could lead to imminent harm including posts about false cures, claims social distancing measures do not work, and that 5G causes coronavirus.

"During March and April, we put warning labels on about 90 million pieces of COVID-19 related misinformation globally, which prevented people viewing the original content 95 percent of the time. We've directed over two billion people to resources from the WHO and other health authorities through our COVID-19 Information Center and pop-ups."

[Photo caption] Facebook CEO Mark Zuckerberg speaks during the annual F8 summit at the San Jose McEnery Convention Center in San Jose, California on May 1, 2018. JOSH EDELSON/Getty