Facebook Doesn't Amplify Hate, Supports Safety Legislation, Exec Tells British Lawmakers

A Facebook executive, questioned by British lawmakers concerned over recent reports of online dangers, asserted that the platform does not amplify hate and said the company supports safety legislation, the Associated Press reported.

The questioning came as the U.K. government works to refine online safety legislation aimed at curbing the power of social media companies and better protecting users.

Antigone Davis, Facebook's head of global safety, said that she disagrees with critics who accuse the platform of amplifying hate, attributing such hostility to broader societal problems instead. She also claimed that the company uses artificial intelligence to take down divisive content but didn't specify how accurately the systems are able to pinpoint such harmful content, AP reported.

"I cannot say that we've never recommended something that you might consider hate. What I can say is that we have AI that's designed to identify hate speech," Davis said.

She said Facebook supports the U.K.'s safety legislation and defended the company's handling of internal research on how Instagram can harm teens by encouraging eating disorders or suicide, AP reported.

For more reporting from the Associated Press, see below.

Facebook Exec Testifies UK
Above, Antigone Davis, Facebook's head of global safety, testifies remotely before a hearing of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security to examine protecting children online, focusing on Facebook, Instagram, and mental health harms, on Capitol Hill in Washington, D.C., on September 30, 2021. (Photo by TOM BRENNER/POOL/AFP via Getty Images)

Representatives from Google, Twitter and TikTok also answered questions from a parliamentary committee scrutinizing the British government's draft legislation to crack down on harmful online content.

It comes days after the companies testified before American lawmakers and provided little firm commitment to U.S. legislation bolstering protection of children from online harms, ranging from eating disorders to sexually explicit content and material promoting addictive drugs.

Governments on both sides of the Atlantic want tougher rules for protecting social media users, especially younger ones, but the United Kingdom's efforts are much further along. U.K. lawmakers are questioning researchers, journalists, tech executives and other experts for a report to the government on how to improve the final version of the online safety bill. The European Union also is working on digital rules.

Facebook whistleblower Frances Haugen told the U.K. committee this week that the company's systems make online hate worse and that it has little incentive to fix the problem. She said time is running out to regulate social media companies that use artificial intelligence systems to determine what content people see.

Haugen was a Facebook data scientist who copied internal research documents and turned them over to the U.S. Securities and Exchange Commission. They also were provided to a group of media outlets, including The Associated Press, which reported numerous stories about how Facebook prioritized profits over safety and hid its own research from investors and the public.

In one of several pointed exchanges Thursday before the parliamentary committee, Scottish lawmaker John Nicolson told Davis that "all this rather suggests that Facebook is an abuse facilitator that only reacts when you're under threat, either from terrible publicity or from companies, like Apple, who threaten you financially."

Lawmakers pressed Facebook to provide its data to independent researchers who can look at how its products could be harmful. Facebook has said it has privacy concerns about how such data would be shared.

"It's not for Facebook to set parameters around the research," said Damian Collins, the committee chairman.

The U.K.'s online safety bill calls for a regulator to ensure tech companies comply with rules requiring them to remove dangerous or harmful content or face penalties worth up to 10 percent of annual global revenue.

British lawmakers are still grappling with thorny issues such as ensuring privacy and free speech and defining legal but harmful content, including online bullying and advocacy of self-harm. They're also trying to get a handle on misinformation that flourishes on social media.

Representatives from Google and its YouTube video service who spoke to U.K. lawmakers Thursday urged changes to what they described as an overly broad definition of online harms. They also appeared virtually, and the tenor of lawmakers' questions wasn't as harsh as what Facebook faced.

Facebook Scrutiny
Facebook whistleblower Frances Haugen told British lawmakers Monday that the social media giant stokes online hate and extremism, fails to protect children from harmful content and lacks any incentive to fix the problems, providing strong momentum for efforts by European governments working on stricter regulation of tech giants. Above, Haugen leaves after giving evidence to the joint committee for the Draft Online Safety Bill, as part of British government plans for social media regulation, at the Houses of Parliament in London on October 25, 2021. Matt Dunham/AP Photo