Four Ways Google Plans to Fight Radicalization and Extremism

Google has announced a raft of new measures aimed at combating extremist material.

The internet giant has been one of several technology companies—including Facebook and Twitter—under scrutiny for how they deal with extremist content posted on their sites.

Following recent attacks in London and Manchester, British Prime Minister Theresa May said that the internet and “big companies” were providing “the safe space” that extremist ideologies need to thrive. A report by British members of parliament also accused Google of “dreadful” delays in removing neo-Nazi propaganda from its video sharing platform, YouTube.

The internet behemoths have come out fighting, saying they are doing all they can to combat extremism. In a post for the Financial Times on Sunday, Google’s senior vice-president and general counsel, Kent Walker, laid out four new strategies the company is deploying.

1. More technology

Walker said that Google already used “video analysis models” that helped it identify more than 50 percent of the terrorism-related content it had removed over the past six months. But he said that the fluid nature of such content sometimes made it difficult to ban: The same video may be used by reputable news organizations for reporting just as it may be used by isolated individuals to promote radical ideologies.

Walker added that Google was investing more resources into training new “content classifiers”—including the use of advanced machine learning.

2. More experts

YouTube, which is owned by Google, already runs a program called Trusted Flagger. The program gives additional resources and tools to users who consistently and accurately report content that violates the site’s guidelines; YouTube says that Trusted Flaggers are accurate more than 90 percent of the time.

Walker said that 50 expert NGOs would be joining the 63 organizations already part of the program, and that grants would be provided to support their work. “This allows us to benefit from the expertise of specialized organizations working on issues like hate speech, self-harm and terrorism,” said Walker.

[Photo: A video from the German neo-Nazi band Lunikoff is seen on YouTube in Berlin, Germany, on August 27, 2007. Google and other internet companies have come under scrutiny for purportedly failing to police extremist content online. Sean Gallup/Getty]

3. Less leeway for dubious religious or supremacist content

In the debate over online extremism, Google has continually struggled to balance freedom of expression against the need to tackle dangerous content. Now, the company appears to be coming down harder: Walker said that Google would seek to minimize the audience for videos that, while not violating the company’s policies, were dubious in content—“for example, videos that contain inflammatory religious or supremacist content.” Walker said that such videos would appear behind a warning, would not be monetized, and would have comments and user endorsements switched off. “That means these videos will have less engagement and be harder to find,” he said.

4. Redirect potential extremists elsewhere

Walker said that Google would be working with Jigsaw, a Google-founded incubator dedicated to tackling online problems including extremism, to shepherd potential extremists or would-be recruits for the Islamic State militant group (ISIS) away from radical content. “This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits, and redirects them towards anti-terrorist videos that can change their minds about joining,” said Walker.

He added that Google would be working with other tech leaders—including Facebook, Microsoft, and Twitter—to share solutions and support smaller companies in fighting extremism.
