Donald Trump's Refusal to Accept Defeat Fuels Violent Threats on Social Media

As President Donald Trump pushes on in the fruitless effort to overturn his election defeat to President-elect Joe Biden, undermining American democracy in the process, some of his fiercest supporters on social media are intensifying their violent rhetoric.

Trump's critics warn that even on his way out of the White House he is sowing the seeds for extremist right-wing violence, which has increased in scale and severity since he took office. The FBI is increasingly alert to the domestic right-wing terror threat.

There is no evidence for Trump's repeated assertion that widespread electoral fraud cost him the presidency, yet millions of his supporters passionately believe these false claims. As it becomes clearer to them that he will exit the White House in January, the heat is rising.

On social media, users describing themselves as Trump supporters are encouraging each other with fresh conspiracy theories and outrage as events unfold. Threats of individual violence are common. So too is incitement to mass violence.

Trump has been widely accused of stochastic terrorism—publicly demonizing people or groups and tacitly, if not openly, encouraging violence against them—during his term. Multiple terrorist acts or plots have been linked to his ideology and committed by self-declared supporters.

This all existed before the election, but Trump's defeat, failing legal challenges, and inflammatory comments have seemingly left some of his fans with no hope beyond violence.

Newsweek found concerning examples of threats of violence or incitement to violence on social media sites including Twitter, Facebook, and Parler.

One Twitter account with more than 168,800 followers—among them Trump's lawyer Rudy Giuliani—shared a tweet using a hashtag linked to the QAnon conspiracy theory. It was a video of a man in uniform standing in front of a tank as he listed major battles in American history.

"Dear @realDonaldTrump, YOU are the Commander in Chief. We are at war. We see it clearly," the account posted. "The enemy has breached the gates. Patriots of every stripe have your back. We stand at the ready. We fight now or die later. Long live the Republic. Give the order."

On Parler, which has become a haven for conservatives and right-wingers away from what they see as censorship on traditional social media sites, the election result and Trump's lack of progress in the courts have inflamed tempers.

One user threatened: "#WeThePeople want to kill all of you cheating traitors and if justice is not served > go ahead and hold your breath wont take us LONG to get to you and settle the score!"

Another user wrote: "We only have a brief moment to strike and win the Civil war if Biden does steal the election. This will be the day before inauguration. If it doesn't happen then we already lost and can never have a free country again. #civilwar #civilwar2."

These social media platforms are awash with talk of civil war and taking up arms in response to Trump's legitimate election loss. Many users talk of awaiting Trump's "order" to take violent action against their political opponents.

One sinister tweet read: "If Biden Steals this election fraudulently, #killdems."

Many of these posts and hashtags achieve little engagement or interaction while a few generate thousands of responses. Much is hollow talk. But as history shows, a small minority of people are also willing to act on their words, with horrific consequences.

The mosque attack in Christchurch, New Zealand, and the synagogue massacre in Pittsburgh, Pennsylvania, are just two recent examples.

Matthew Feldman, the director of the Centre for Analysis of the Radical Right and a professorial fellow at the University of York in the U.K., told Newsweek that online radicalization is a key element in right-wing extremism, for which the post-election landscape is fertile ground.

"In the West generally, you can't be a terrorist—lone wolf or otherwise—if the internet isn't a key part of your so-called terrorist cycle," Feldman said. "The next two months may very well define what we understand Trumpism to be, even if there is a Trumpism."

If the president continues to fan the flames, or even leaves office but begins a grievance tour through areas where his support runs deepest, the situation could become even more dangerous.

The internet is a priceless resource for terrorists of all ideologies, but holds a special power among English-speaking right-wing radicals, Feldman said.

"This stuff abuts the mainstream in the West, when it comes to white supremacy, radical right stuff. You can go down the rabbit hole of QAnon and then come out on Parler amongst the crazies."

Online radicalization is the key to what Feldman called "the main terrorist threat in the West." Among the radical right, and especially in the U.S., "all of the elements are in place" for it to grow and become more deadly, he said.

Facebook and Twitter have both already taken action against users and groups encouraging violence or spreading conspiracy theories that could incite violence. Both platforms have long been accused of allowing accounts to peddle disinformation and incite hatred or even violence.

They regularly remove users or groups that break platform guidelines. But the sheer number of users and the low barrier to entry make it difficult to catch everyone. The November election forced both companies to take more public action, somewhat belatedly, according to critics.

Facebook placed warnings on 180 million pieces of content debunked by fact-checkers between March 1 and Election Day, and removed another 265,000 pieces in the U.S. for breaching its rules on voter interference.

The platform also took action against groups and profiles linked to the QAnon conspiracy theory, which claims that Trump is in a secret war with a group of Satan-worshipping pedophiles that control world governments and run a global child sex-trafficking ring.

Twitter, meanwhile, introduced new rules during the course of the election and continued its effort to remove offending accounts. The platform also began flagging more Trump tweets as misleading as the president repeatedly made false claims of electoral fraud.

But for all their work, it is hard for even the biggest social media companies to fully regulate extremist content. And even if they could, some violent content does not actually violate the platforms' terms of service.

It is not difficult for users to couch their threats in more subtle language and avoid repercussions.

Two armed supporters of President Donald Trump lean on their truck at a 'Stop The Pause and Defeat The Steal' rally in front of the state capitol on November 21, 2020 in Salem, Oregon. Nathan Howard/Getty Images

Newsweek sent several examples of threats of violence or incitement to violence on both Twitter and Facebook to the companies for comment.

A Twitter spokesperson told Newsweek that the platform's Civic Integrity Policy "prohibits content meant to incite interference with the election process or with the implementation of election results through violent action."

This is in addition to existing rules that prohibit threats of violence and the glorification of violence on the website.

"Using a combination of technology and human review, we proactively monitor Twitter to identify and mitigate the spread of content that violates our rules, including Tweets that encourage or threaten violence," the spokesperson said.

"We also encourage people on Twitter to report content that violates our rules so we can take action."

"We're committed to protecting the health and integrity of the election conversations happening on Twitter, and on and around election day in the U.S., our teams have worked to enforce our rules at scale. We remain vigilant in this work."

Twitter said some of the examples sent—which included a post declaring "a call to arms" against Democrats and one saying "If the democrats want a war , let's have at it then"—did not violate its terms of service.

One of those flagged by Newsweek was labeled as problematic by Twitter.

Facebook removed the posts flagged by Newsweek, which included calls for a second civil war and calls to arms against liberals and Democrats. The #CivilWar2020 tag in particular poses its own problems, as many posts were using the hashtag to discuss other things like video games.

A Facebook company spokesperson told Newsweek: "We're staying vigilant in detecting content that could incite violence during this time of heightened uncertainty. We've readied products and policies in advance of this period so we can take action quickly and according to our plans."

The spokesperson added: "Since August, we've removed 2,400 pages and 14,200 groups maintained by militarized social movements and updated our policies in order to enforce against more threats of violence."

Parler's rapid rise to prominence is reflected in its download figures: the app was downloaded almost 1 million times in the week after Election Day, when it was the most popular free app on both the Apple App Store and Google Play Store.

The New York Times reported that Parler added 3.5 million users in November, and the platform claims it now has more than 11 million users.

Jeffrey Wernick, the COO of Parler, told Newsweek that the platform's leadership "take seriously any threats," and that the website's terms of service specifically prohibit threats or incitement to violence.

Wernick said Parler is "very concerned" about some of the posts on the platform, but stressed that it is designed to be a "free open public square" with a wide range of viewpoints.

Its commitment to free speech is paramount, Wernick said, which is why Parler tries not to take action against users for misleading posts, for example. "If the platform is calling you a liar, if the platform has an opinion, then it's no longer a platform," he said. "It's a publisher."

Wernick cited Section 230, the internet legislation that generally shields website operators from liability for third-party content: "It's perverse that the intention of 230 was to protect a platform like Parler, and everybody wants 230 protection, but to destroy the platform that 230 was designed to protect."

Parler relies on its community members to police the platform, a model Wernick said differentiates it from more established websites, which he described as "surveillance companies masquerading as social media."

Parler uses a "Community Jury"—which, according to its website, is "composed of volunteer, verified Parleyers, who participate in regular training sessions to address any questions they have about applying the community guidelines." The system is still being evaluated and refined.

Wernick said Parler does not use AI or "algorithmic manipulation," so does not actively "hunt" users who may be violating guidelines: "We want to be people being judged by their peers."

He acknowledged there is extremist content on the platform, but said it was a matter of philosophy and capacity.

Comparing the platform to the U.S. justice system, he said: "Do we feel better that the price of keeping guilty people in jail is maybe having some innocent people in jail? Or would we rather make sure that innocent people never get to jail? So if that means a few guilty people get free, we prefer that."

Wernick said several times that the leadership was concerned about what he called "ugly" content, behavior he argued users had learned from using Twitter and Facebook. But he also downplayed examples provided by Newsweek, citing their low engagement and circulation.

"So the bottom line is if you did find some people who said some terrible things, but nobody's reading it, then nobody's putting into circulation...So this stuff just doesn't circulate much," he said. "How much harm can it cause if one person puts up something that almost nobody else reads? How is that a threat?"

In isolation, this might be true. But left unchecked, such posts can accumulate into a significant trend. "I accept larger viewing figures are more likely to lead to violent outcomes," Feldman said, though he cautioned against taking anyone's word for it.

"In my view, the bigger problem is lack of quality moderation on this and cognate smaller platforms," Feldman said. "In some instances, this dovetails with a lack of speedy engagement with requests for content removal—say, by authorities or victims' families.

"That's compounded in some cases—for example, Gab and Parler—by appearing to license, or at least turn a blind eye to, radical right extremism on the platform.

"Once a smaller platform gets known—deliberately or otherwise—for providing a platform to right-wing extremism, the danger of political violence increases greatly."

American law enforcement has used online activity and threats to help prosecute would-be terrorists, be they white supremacist, Islamist, leftist, or otherwise.

Social media and online forums present law enforcement with a wide array of threats, whether genuine expressions of violent intent or angry venting.

An FBI spokesperson declined to comment on specific examples of online extremism or platforms accused of facilitating it.

The spokesperson said the Bureau "works closely with our federal, state, and local partners to identify and stop any potential threats to public safety."

"We gather and analyze intelligence to determine whether individuals might be motivated to take violent action for any reason. We encourage members of the public to remain vigilant and immediately report any suspicious activity to law enforcement," they said.

"We stress that the FBI is focused on individuals who commit violence and criminal activity that constitutes a federal crime or poses a threat to national security.

"The FBI can never initiate an investigation based solely on an individual's race, ethnicity, national origin, religion, or the exercise of First Amendment rights."

Neither the Trump campaign nor the White House replied to Newsweek's requests for comment.

Protesters rally in front of Gov. Kate Brown's residence, Mahonia Hall, on November 21, 2020 in Salem, Oregon. Nathan Howard/Getty Images