Trolls, Bots and Fake News: The Mysterious World of Social Media Manipulation

Once heralded as tools for democratization, social media sites are increasingly seen as a danger to democracy. Jaap Arriens/NurPhoto via Getty Images

If it was once common to hear mass anti-government movements in the Middle East described as "Twitter uprisings" and "Facebook revolutions," today these social media platforms are more likely to be linked to their potential for manipulating public opinion and influencing elections, including the one that saw Donald Trump elected as America's "First Facebook President."

"The fact that I have such power in terms of numbers with Facebook, Twitter, Instagram, etc.," Trump said on CBS' 60 Minutes during the Republican primaries that he would go on to win, "I think it helped me win all of these races where they're spending much more money than I spent." For Trump's digital media director for the campaign, Brad Parscale, "Facebook was the 500-pound gorilla, 80 percent of the budget kind of thing." Trump's own enthusiasm for the social media giant appears to have waned since: "Facebook was always anti-Trump," he tweeted in September 2017.

Now the focus is less on Trump's extensive personal social media following and more on the roles that Facebook and Twitter may have played in alleged Russian interference in the election. Congress is calling on Facebook and Twitter to disclose details about how they may have been used by Russia-linked entities to try to influence the election in favor of Trump.

But despite the much-publicized case in the U.S., these political strategies on social media, from the distribution of disinformation to organized attacks on opponents, remain largely unknown to the public, as invisible as they are invasive. Citizens are exposed to them the world over, often without ever realizing it.

Drawing on two recent reports by the Oxford Internet Institute (OII) and independent research, Newsweek has outlined the covert ways in which states and other political actors use social media to manipulate public opinion around the world, focusing on six illustrative examples: the U.S., Azerbaijan, Israel, China, Russia and the U.K.

It reveals how "Cyber-troops"—the name given to this new political force by the OII—are enlisted by states, militaries and parties to secure power and undermine opponents, through a combination of public funding, private contracts and volunteers, and how bots—fake accounts that purport to be real people—can produce as many as 1,000 social media posts a day.

By generating an illusion of support for an idea or candidate in this way, bots drive up actual support by sparking a bandwagon effect—making something or someone seem normal and like a palatable, common-sense option. As the director of the OII, Philip Howard, argues: "If you use enough of them, of bots and people, and cleverly link them together, you are what's legitimate. You are creating truth."

On social media, the consensus goes to whoever has the strongest set of resources to make it.

The U.S.: Rise of the bots

CEO of Cambridge Analytica Alexander Nix speaks at a conference in New York, September 19, 2016. The company was involved in the latter stages of Trump's campaign and has become a focal point of debate about the role of social media in politics. Bryan Bedder/Getty Images for Concordia Summit

America sees a wider range of actors attempting to shape and manipulate public opinion online than any other country, with governments, political parties and individual organizations all involved.

In its report, the OII describes 2016's Trump vs. Hillary Clinton presidential contest as a "watershed moment" when social media manipulation was "at an all-time high."

Many of the forces at play have been well reported: the hundreds of thousands of bots, and right-wing sites like Breitbart distributing divisive stories. In Michigan, in the days before the election, fake news was shared as widely as professional journalism. Meanwhile, firms like Cambridge Analytica, self-described specialists in "election management," worked for Trump to target swing voters, mainly on Facebook.

While Hillary Clinton's campaign also engaged in such tactics, deploying big data and pro-Clinton bots that multiplied in number as her campaign progressed, Trump's team proved the more effective. Overall, pro-Trump bots generated five times as much activity at key moments of the campaign as pro-Clinton ones. These Twitter bots—which often had zero followers—copied each other's messages and sent out advertisements alongside political content. They regularly retweeted Dan Scavino, Trump's social media director.

One high-ranking Republican Party figure told OII that campaigning on social media was like "the Wild West." "Anything goes as long as your candidate is getting the most attention," he said. And it worked: A Harvard study concluded that overall Trump received 15 percent more media coverage than Clinton.

Targeted advertising to specific demographics was also central to Trump's strategy. Clinton spent two and a half times more than Trump on television adverts and had a 73 percent share of nationally focused digital ads.

But Trump's team, led by Cambridge Analytica for the final months, focused on sub-groups. In one famous example, an anti-Clinton ad that repeated her notorious speech from 1996 describing so-called "super-predators" was shown exclusively to African-American voters on Facebook in areas where the Republicans hoped to suppress the Democratic vote—and again, it worked.

"It's well known that President Obama's campaign pioneered the use of microtargeting in 2012," a spokesperson for Cambridge Analytica tells Newsweek. "But big data and new ad tech are now revolutionizing communications and marketing, and Cambridge Analytica is at the forefront of this paradigm shift."

"Communication enhances democracy, not endangers it. We enable voters to have their concerns heard, and we help political candidates communicate their policy positions."

The firm argues that its partnership with American right-wing candidates—first Ted Cruz and then Trump—is purely circumstantial. "We work in politics, but we're not political," the spokesperson said.

The company is part-owned by the family of Robert Mercer, one of Trump's major donors, while Stephen K. Bannon sat on the company's board until he was appointed White House chief strategist (he was dismissed from the post seven months later). According to Bannon's March federal financial disclosure, he held shares worth as much as $5 million in the company. On October 11, it was also revealed that the House Intelligence Committee had asked the company to provide information for its ongoing probe into Russian interference.

But social media manipulation did not begin or end with the election. As early as 2011, the U.S. government hired a public relations firm to develop a "persona management tool" that would create and control fake profiles on social media for political purposes.

The British parent company of Cambridge Analytica, Strategic Communications Laboratories (SCL), has worked for the U.S. government for years, including with the Department of Defense, and The Washington Post reports that it recently secured work with the State Department.

There is also growing awareness of hundreds of thousands of so-called "sleeper" bots: Accounts that have tweeted only once or twice for Trump, and which now sit silently, waiting for a trigger—a key political moment—to spread disinformation and drown out opposing views.

Emilio Ferrara, an assistant research professor in the University of Southern California's computer science department, even suggests the possibility of "a black market for reusable political disinformation bots," ready to be utilized wherever they are needed, the world over. These fears appeared to be confirmed by reports that the same bots used to back Trump were then deployed against eventual winner Emmanuel Macron in this year's French presidential election.

Azerbaijan: "What-aboutism"

“I am proud that I am Azerbaijani” reads this image on a user’s profile, featuring presidents Heydar and Ilham Aliyev. The account is one example of a nationalist troll, repeatedly retweeting the President and other patriotic content. Arzu Geybulla/OpenDemocracy

Over the years, Azerbaijan's pro-government trolls have become a textbook case of state-level social media manipulation. President Ilham Aliyev has been the country's leader for the past 14 years, and his grip is only tightening. In February, he took the unprecedented step of making his wife vice-president.

Social media has been a part of his presidential strategy since at least 2010, when members of the country's main youth group, IRELI, were instructed to proliferate pro-government opinions online. As troll training centers multiplied across the country—one source says there were 52 in different towns and cities, funded with government money—a few hundred young volunteer bloggers became tens of thousands of trained trolls.

At first, they were encouraged to become bloggers, painting a positive picture of the country, but the focus slowly switched to email attacks on critics, managing Wikipedia pages and running promotional campaigns on social media. As the group's then-secretary general explained in 2011 to the national online news agency News.AZ: "Activity and scale of internet users are decisive in this regard. Our objective is to produce young people who can take an active part in the information war."

IRELI's influence began to fade due to internal politics around 2014, but the youth branch of the ruling Yeni (New) Azerbaijan Party took over. Youth organizations are favored as cybertroops around the world because they are cheap, more adept at social media and easily rewarded with government positions or scholarships.

In Yeni's case, the methods are blunter than its predecessor's: The language is more aggressive, violent and degrading, with an emphasis on scale rather than subtlety, and opposition journalists are routinely harassed. Personal attacks are typically taken as the best line of defense. Occasionally the ongoing conflict with neighboring enemy Armenia is invoked to drown out online discussions of domestic human rights abuses with so-called "whataboutism."

"IRELI's trolls were more educated," Arzu Geybulla, an Azerbaijani journalist and activist who lives in Turkey, tells Newsweek. "But the government's trolls have become more active and effective, particularly on Twitter. Whenever Azerbaijan is discussed at a conference and there is online activity, they hijack the hashtags and make sure they dominate the debate."

The OII reports that these tactics have been largely successful, with engagement in online political discussion having fallen, but Geybulla, who has been a victim of the trolls herself, says this is not the only story. "Social media platforms are becoming increasingly important for the opposition's attempts to get around Aliyev's authoritarian control. Access to opposition media outlets has been blocked since May," she says.

There is also worrying evidence that the government is finding new roles for social media. Last month, 60 people were reportedly arrested in Azerbaijan in an unprecedented crackdown on LGBT rights, with fake profiles allegedly being set up to locate them.

China: The "50-cent" party

Chinese police officers take part in a security oath-taking rally for the 19th National Congress of the Communist Party, September 27, 2017, an occasion when government censorship and online presence are at an all-time high. Stringer/REUTERS

China is home to probably the first and largest state-run operation of social media manipulation, with a vast network of around two million individuals working to promote the party line. They are popularly known as the "50-cent party," a reference to early claims that they were paid half a Chinese yuan for every post.

A Harvard study estimated "that the government fabricates and posts about 448 million social media comments a year." Of 43,800 pro-regime posts that were analyzed, 99.3 percent were made by one of more than 200 government agencies.

Rather than trolling opponents or distributing disinformation, Beijing uses its troll army to distract members of the public during key political moments. One classic tactic is to post emotional or far-fetched comments in order to redirect citizen rage toward that user, diverting attention from the issue itself.

"They try to redirect public attention by producing positive content—what we call cheerleading posts—when a protest or party meeting might be taking place," Professor Jennifer Pan, one of the academics involved in the Harvard study, tells Newsweek. "During these periods, there is a coordinated burst of activity that can drown out organic discussions occurring online."

China's social media strategies came to the fore in Taiwan, which China claims as part of its territory. Shortly after her election in 2016, Taiwan's President Tsai Ing-wen was bombarded with comments on her Facebook page warning the island against independence. It emerged that many of these hostile comments were being sent from mainland China—where Facebook, along with Twitter, is banned—suggesting it could only have been a government-endorsed operation.

Indeed, as the banning of Facebook and Twitter shows, China's grip on the internet goes far beyond trolls. It also possesses the most infamous internet censor: the Great Firewall of China, which blocks foreign news outlets, internet tools (including Google search) and mobile apps.

This year, with the approach of the 19th Party Congress—which shuffles the top ranks of the Chinese leadership—the Propaganda Ministry and Cyberspace Administration have apparently gone into overdrive, issuing more censorship guidelines than in the previous ten years combined.

Israel: A high-tech fight for the moral high-ground

An image of Israeli soldiers is seen on a computer screen through a face recognition programming script, during a cyber security training course at a high-tech park in Beersheba, southern Israel, August 28, 2017. Israel's thriving technology sector places it at the cutting edge of online strategies. Amir Cohen/Reuters

With more than 350 official government social media accounts, covering the full range of online platforms from Twitter to Instagram and functioning in three languages—Hebrew, Arabic and English—Israel has one of the most professional online operations in the world.

Student volunteers make up the bulk of the Israeli state's online presence, with top-performing students often being awarded scholarships for their work. Unlike in Azerbaijan and China, the strategy is to engage in debate, reinforcing or supporting the government's authority with an optimistic tone, stressing Israel's liberalism compared to its neighbors.

Engagement takes place in the comments section of websites, in online forums, and on social media, with the aim of improving Israel's stature, both at home and abroad. "That is the key to defeating the movements pushing to boycott, divest and sanction Israel," one Israeli politician has explained, referring to the international BDS movement that seeks to apply economic and political pressure on Israel.

"We will get authoritative information out and make sure it goes viral," another official involved in the operation told the Jerusalem Post upon its launch in August 2013. "We won't leave negative stories out there online without a response, and we will spread positive messages. What we are doing is revolutionary."

More recently, in April this year, the Israeli government purchased software that enabled not only a greater scope for monitoring social media but also, Haaretz reported, the officially stated ability to "plant an idea in the debate on social networks, web news sites and forums."

The system also offers a breakdown of users and consequently has the potential to target supporters of the BDS movement or well-known critics of Israel.

Russia: Troll factories

Russia is the defining case of how a powerful authoritarian regime uses social media to control people. Sputnik/Alexei Nikolsky/Kremlin

As early as 2003 there were allegations of Russian propagandists covertly entering chatrooms, but after a series of leaks in 2013 and 2014 the full scale of Russia's current operation became clearer. Samuel Woolley, a member of the OII's Computational Propaganda research team, told the Guardian: "Russia is the case to look to to see how a particularly powerful authoritarian regime uses social media to control people."

The Internet Research Agency and Nashi are just two of several organizations that train and pay trolls to attack Russian President Vladimir Putin's opponents at home and abroad. The fact that the former is a private company and the latter a Kremlin-backed youth movement, 150,000 members strong, shows the complexity of the Kremlin's strategy.

Some of the cybertroops create online personas and run blogs, weaving propaganda into non-political musings. But most are known for their aggressive persistence. They target journalists and political dissidents with the hope of either taking them off the internet or cowing them into silence.

One investigative reporter from Finland who wrote on the Internet Research Agency's online operation became the victim of a vicious and frightening retaliatory campaign. Meanwhile leaders of Nashi have sent around lists of human rights activists to target, declaring them "the most vile of enemies."

According to the 2013 leaks, bloggers employed by the Internet Research Agency have to maintain six Facebook accounts and publish at least three posts a day. Those on Twitter are supposed to have at least ten accounts and tweet 50 times a day. Individuals have specific, personalized targets for followers and the level of engagement. They often work in so-called "troll factories," buildings or basements where hundreds of employees are given such targets. It is estimated that 45 percent of Twitter activity in Russia is managed by such accounts.

The operation is a global one—not merely in defending Putin abroad, but also as a part of Russia's foreign policy goals—meaning that poor English is a weakness. Consequently, a Buzzfeed investigation into Russia's troll network revealed how English teachers are hired by these organizations to teach the cybertroops proper grammar for their interactions with Western audiences.

There are also workshops on so-called "politology," The New York Times reports, which ensure that the trolls are fluent in the particular pro-Russia line on current events.

United Kingdom: Brexit, bots and big-data

The 2016 Brexit referendum was the moment social media became not only a battleground but a tool in British politics. Tolga Akmen/REUTERS

The 2016 Brexit referendum also provided an intense moment for strategies of manipulation on social media. In the months before the vote, roughly one third of all traffic on Twitter was from automated bots, which were almost entirely pro-Leave.

It is notable that bots were not exclusive to the pro-Brexit movement. In the aftermath of the vote, an online petition calling for a second referendum attracted more than 3.7 million new signatures in one weekend. While this was initially interpreted as a sign of voters having changed their minds, it then emerged that the petition had 42,000 signatures from Vatican City (population 800) and almost 25,000 from North Korea (where internet access is extremely limited).

"It's hard to measure exactly how effective these strategies are," Professor Susan Banducci, a social scientist at the University of Exeter, tells Newsweek. "People don't see a fake news story, for example, and believe it straight away—it's part of a process, and the big political parties are still adapting to this environment." People tend to believe what's familiar, so the more they see a story or a particular claim, the more likely they are to accept it as true the next time they see it.

Independent of the referendum, the British state has taken to some of these tactics itself. In 2015, the British Army announced that its 77th Brigade would "focus on non‐lethal psychological operations using social networks like Facebook and Twitter to fight enemies by gaining control of the narrative in the information age."

Its goal is to use "dynamic narratives" to combat the political propaganda disseminated by terrorist organizations, shaping public opinion in the process.

Edward Snowden's leaks in 2014 also revealed the existence of the Joint Threat Research Intelligence Group, similarly dedicated to combating terrorism. Yet its tactics include, in the words of the leaked document, "uploading YouTube videos containing persuasive messages; establishing online aliases with Facebook and Twitter accounts, blogs, and forum memberships … as well as providing spoof online resources."

As a clearer picture of these activities around the world emerges and their threats to open society become increasingly apparent, the media tools enabling these players are fast becoming anything but social.