How Identifying Fake News Could Save Lives

Quebec mosque shooting vigil
People stand with a peaceful sign near the Quebec City Islamic cultural center after a shooting occurred in the mosque on Sainte-Foy Street in Quebec City on January 29. In the wake of the shootings, misinformation and inaccurate reports abounded on social media. ALICE CHICHE/AFP/Getty

On Sunday evening, six people were shot dead and 19 more injured while praying in one of Quebec’s largest mosques. Canadian Prime Minister Justin Trudeau, who had just announced that Canada would welcome refugees in response to the Trump administration’s immigration ban, was quick to call the incident an act of terror.

The Quebec City mosque shooting cost six people their lives and thousands of citizens their sense of security; it has also jeopardized millions of people’s trust in digital media.

For anyone tracking the aftermath of the attack online, it became evident that the circulation of misinformation and premature speculation had reached a whole new level. The undisputed winners in this situation are extremists who have been able to thrive in uncertain environments that offer them complete freedom to shape the public narrative.

In the case of the Quebec attack, the high levels of confusion and contradictory information immediately after the shootings allowed alt-right bloggers and activists to construct a story that would validate their narrative and enable them to push their political agenda. In the attack’s aftermath, they hijacked trending hashtags such as #QuebecAttack to condemn what many called “Muslim terrorism.”

Shortly after 8 p.m. local time on Sunday evening, when the news broke that a mosque had been attacked in the Canadian city, a Reuters parody social-media account quickly misidentified the perpetrators as white nationalists David M. J. Aurine and Mathieu Fournier. This led the Daily Beast to wrongly report the identity of the two arrested men. As the investigation proceeded and Twitter dynamics unfolded, it was like reading a crime novel—only one where the reader couldn’t be sure whether the clues they were given throughout the book were hoaxes or not. The volume of speculative posts on Facebook even led the Quebec City Mosque to urge its followers “to wait for the preliminary results of the investigations and stop spreading rumours.”

Later on Sunday night, the story changed, and quite dramatically so: one of the alleged perpetrators was identified as a Muslim Moroccan, Mohamed Belkhadir. Far-right tabloids and bloggers reacted instantly. Alt-right commentator Pamela Geller tweeted about a “shooting by Muslim gunmen,” with many of her 133,000 followers mimicking her.

But Monday evening saw another twist to the story when Belkhadir was released, turning the Arab man from presumed perpetrator into witness and victim of negative press. The “Allahu Akbar” that reportedly echoed through the walls of the mosque was most likely either a victim’s last expression of faith or the mocking war cry by the actual perpetrator, who was neither Moroccan nor Muslim.

But it was too late—the “Allahu Akbar” outcry that was anonymously reported to Radio-Canada had already given birth to another wave of fake news. Despite Belkhadir’s release, it took a letter from the office of Canadian Prime Minister Justin Trudeau to prompt Fox News, one of the United States’ biggest news networks, to remove an erroneous tweet that said the one suspect in custody was of Moroccan origin.

In the event that the 27-year-old far-right supporter Alexandre Bissonnette, a French-Canadian student later charged with the fatal shooting, is found guilty, all evidence would point to an ideologically inspired act of terrorism: The Quebec City Mosque makes for a highly symbolic crime scene, and the innocent, praying Muslims for a strongly suggestive soft target.

The remnants of his social media profile that lie buried in the depths of the digital data cemetery expose his sympathy for French far-right leader Marine Le Pen. A local refugee rights group expressed its grievance over the authorities’ failure to stop Bissonnette, whom they knew held white supremacist views.

But what could have been done to prevent the attack? If Bissonnette was indeed the perpetrator, his case points to undeniable connections between dehumanizing political rhetoric, radicalization and violent escalation. His profile, choice of target and timing for the attack would then reveal strong motivational parallels to the far-right shooter Anders Breivik, who was responsible for Norway’s most lethal terrorist attack in 2011. In both cases political figures may have helped provide the ideological frameworks and binary narratives that led to violent extremism. Verbal degradation and demonization of ethnic and religious minorities reinforce a narrative according to which a war between Muslims and non-Muslims appears inevitable.

But explicit, dehumanizing rhetoric is not the only source of the problem. As George Orwell argued in his essay “Politics and the English Language,” inaccuracy in the language used by public figures may add to this dangerous dynamic by “making it easier for us to have foolish thoughts.” The failure of many media outlets to use the label “terrorist attack” consistently across forms of deliberate, politically inspired violence—whether far-right or Islamist—has not been helpful either. This lack of linguistic precision and consistency has not only distorted the public perception of both the extent and nature of the terrorist attacks we face. It has also effectively trained our minds to make a spontaneous association between the terms “Muslim” and “terrorism.”

The first step to prevent future attacks is therefore to equip the internet’s 3.5 billion digital citizens with stronger critical thinking and digital literacy skills. Bottom-up and top-down efforts to blur the lines between what is true and what is false have created confusion about who we can rely on for accurate and independent reporting. Even millennials, often considered digital natives, are frequently unable to identify what is really credible on the internet. Enabling them to spot trolling accounts and fake news, and to question unreliable information sources, may just save trust in fact-based journalism. And perhaps even, somehow, save lives.

Julia Ebner is a policy analyst at Quilliam, a U.K.-based counter-extremism think tank, where her research focuses on far-right extremism and reciprocal radicalization.