Selling to Extremists: YouTube Ran Ads for Major Brands on Channels Promoting Nazis, Pedophilia, Propaganda

A picture illustration shows YouTube on a cell phone, in front of a YouTube copyright message regarding a video on an LCD screen. The platform is coming under fire for its advertising. REUTERS/Dado Ruvic

Facebook, LinkedIn and Netflix are among the hundreds of major companies that may have unknowingly financed extremist YouTube channels promoting conspiracy theories, white supremacy, pedophilia and propaganda, after the Google-owned video platform displayed their advertisements on those channels.

According to CNN, other brands whose ads were shown on controversial channels included Amazon, Cisco, Mozilla, Under Armour, the Washington Post, the New York Times, Nissan, 20th Century Fox and the Jewish National Fund.

Advertising funded by the U.S. government was reportedly shown on channels "promoting North Korean propaganda," while marketing for the two major U.S. newspapers appeared on far-right channels including InfoWars, the conspiracy theory outlet spearheaded by outspoken talk show host Alex Jones.

Some of the companies have said they will temporarily pull all YouTube advertising until the issue is resolved, while others are currently investigating the situation.

Under Armour halted its YouTube advertising after its ads were shown on a white nationalist channel. Mozilla and 20th Century Fox ads reportedly ran on a now-deleted channel known for promoting Nazism, CNN revealed. The owner of that channel claimed the takedown was a violation of free speech.

The Toy Association pulled ads for "The Genius of Play," its children's development campaign, after they were found on a channel known to "promote pedophilia." Marketing linked to U.S. government entities, including the Department of Transportation, Customs and Border Protection and the U.S. Coast Guard Academy, was discovered on North Korean propaganda channels such as Red Star TV, which claims to receive official "information support" from the reclusive regime.

YouTube, which has more than one billion users (almost one-third of all people on the internet), lets channels apply for monetization once they reach a certain level of popularity. Advertisers can filter placements by demographics, keywords and topics, and monetized channels earn a share of the ad revenue their clips generate.

But this is far from the first time the Google-owned platform has been accused of steering viewers toward objectionable content, especially via its recommendation algorithms.

Despite YouTube's pledges to do better, brands including 20th Century Fox and Nike previously fled the service after their ads were shown on InfoWars. In November 2017, BuzzFeed exposed how "child exploitation videos" had been uploaded to the site.

A Google spokesperson told Newsweek: "We have partnered with our advertisers to make significant changes to how we approach monetization on YouTube with stricter policies, better controls and greater transparency.

"When we find that ads mistakenly ran against content that doesn't comply with our policies, we immediately remove those ads."

The statement added: "We know that even when videos meet our advertiser friendly guidelines, not all videos will be appropriate for all brands. But we are committed to working with our advertisers and getting this right."

In February this year, YouTube revealed in a blog post that it would expand its content moderation staff to "over 10,000" people. It said it would improve enforcement using both human review and machine learning technology that, at least in theory, can scan for policy-breaking material more quickly than human reviewers alone.
