
Out of the shadows

How Concordians are exposing online extremism
April 18, 2022
By Alexander Huls


 

The infamous events of January 6, 2021 — when more than 2,000 Donald Trump supporters stormed the United States Capitol — marked the culmination of several related phenomena: acute polarization, the sway of conspiracy theorists and the incitement of violence by unsavoury political actors.

The attack in Washington, D.C., was also the climax of efforts by far-right extremist groups like the Proud Boys, the Oath Keepers and the Three Percenters to radicalize those sympathetic to their causes.

Extremist propaganda has been pervasive online since the advent of the internet, with groups as disparate as the Islamic State and citizen militias leveraging platforms — from primitive bulletin-board systems to social-media networks — to advance their ideological goals.

“What is evident among them is that they all exploit affordances of various platforms to plan events, recruit, finance and communicate,” says Yasmin Jiwani, a professor of communication studies and the Concordia University Research Chair in Intersectionality, Violence and Resistance.

Those efforts have been accelerated and rewarded of late. The COVID-19 pandemic, notably, has given many extremists an opening to exploit anti-government and anti-science sentiment.

For researchers at Concordia who study online extremism, the question is: What can be done about it?

Who are the online extremists?

When the threat of online extremism was discussed roughly a decade ago, the conversation typically centred on the Islamic State’s use of social media to convert and recruit. While such extremists have hardly gone away, it is far-right extremists who most concern many observers these days.

Joan Donovan, BA 06, MA 08 | Photo: Shorenstein Center on Media, Politics and Public Policy

These groups — whether white-supremacist, anti-government, incel (“involuntary celibate”), militia or all of the above — have felt empowered to come out of the shadows.

“For many years, most white supremacists were hiding who they were online,” says Joan Donovan, BA 06, MA 08, research director at the Shorenstein Center on Media, Politics and Public Policy at Harvard University’s John F. Kennedy School of Government.

“Not anymore. It’s no longer the case that there’s only a few active groups.”

In 2020, an Institute for Strategic Dialogue (ISD) report identified more than 6,000 right-wing extremist channels, pages, groups and accounts across different social-media platforms. Right-wing content on Facebook increased by 33.7 per cent last year, and on 4chan by 66.5 per cent.

Over the last five years, far-right extremist activity online has more than doubled. The Center for Strategic and International Studies, a D.C. think tank, also found that 90 per cent of domestic attacks and plots in the United States in early 2020 were conducted by right-wing extremists. That was up by two-thirds from 2019.

Even without overt acts of terrorism, right-wing extremists have had a profound impact on society as ideological polarization has spread worldwide. Online harassment, doxxing and death threats have targeted individuals within the government and beyond.

Conspiracy-theory movements such as QAnon and “Stop the Steal,” along with anti-vaccine groups, have been fuelled by misinformation and disinformation online. The fallout includes the January 6 Capitol attack and the disruptive trucker convoy in Canada, and has prompted researchers to explore the online roots of these movements.

Vivek Venkatesh, MA 03, PhD 08

It’s an onerous task, says Vivek Venkatesh, MA 03, PhD 08, UNESCO Co-Chair in Prevention of Radicalization and Violent Extremism and professor of Inclusive Practices in Visual Arts in the Department of Art Education.

“We understand that discrimination, xenophobia and bigotry exist and that these are precursors to violent forms of extremism. Yet we’re unable to bring to bear the instruments that are at our disposal — whether they’re legal, political, social, financial or even cultural — to begin to understand why these issues persist.”

How online extremists operate

“The resiliency of these groups is tied into recruitment and retention,” says Donovan.

Online extremists wield the internet to recruit, radicalize, disperse fabricated information and coordinate. They do so primarily through social-media platforms like Facebook, Reddit, Twitter and YouTube as well as niche alternatives like Gab, 4chan and 8kun.

Successful recruitment often occurs when vulnerable targets are manipulated through offers of sympathy and friendship.

“What we’re seeing now in terms of online extremism and the mobilization of more far-right populism is that they are able to channel people’s emotions,” says Venkatesh, whose work includes Project SOMEONE (SOcial Media EducatiON Every day), an online multimedia platform devoted to the reduction of hate and violent extremism.

Recruitment, however, can also occur through disinformation — false information designed to mislead.

Yasmin Jiwani

“There is something that is referred to in the literature as ‘subversive exposure,’” says Jiwani. “This is the circulation of disinformation, coded and cloaked language, memes and more. These are slow-working and subversive in the sense that they make people who are inclined towards right-wing extremism entertain the possibility and factuality of the disinformation.” 

The rate and volume at which disinformation can be produced are significant. For example, the ISD report identified 2,467 active right-wing extremist accounts, channels and pages that yielded more than 3.2 million pieces of content.

“You can live in this environment, this media ecosystem, full-time,” says Donovan. “Some people do. We refer to that effect as ‘the Rabbit Hole.’”

Within that rabbit hole, individuals are subjected to the same information over and over again, not just on one platform but across many.

That information can be highly weaponized. A recent report by the RAND Corporation think tank revealed that two-thirds of white supremacists and Islamic extremists interviewed felt they were radicalized by online propaganda.

The consequences can be devastating. Lone actors like the Pittsburgh synagogue shooter in 2018, the El Paso Walmart shooter in 2019 and the Christchurch, New Zealand, mosque shooter in 2019 were all consumers of right-wing hate speech. The 2017 Quebec City mosque shooter was also indoctrinated online by alt-right conspiracy-mongers.

The radicalization of these four men led to the deaths of 91 people and the injury of 88 more. Coordinated group attacks have also been incited, such as the plot to kidnap and execute Michigan Governor Gretchen Whitmer, and the efforts to subvert the results of the 2020 U.S. presidential election.

Aphrodite Salas, MA 99

Right-wing extremists also benefit from the power of misinformation — misleading information shared without deliberate intent to deceive — to push people toward more extremist views.

The COVID-19 pandemic has exacerbated the problem. According to Statistics Canada, 41 per cent of Canadians spent more time online throughout 2020 and 2021.

“The pandemic really created an online ecosystem that was just ready to amplify misinformation, disinformation and extremism,” says Aphrodite Salas, MA 99, a Department of Journalism assistant professor and trainer for the Journalists for Human Rights Misinformation Project.

The cocktail of inflated screen time, anger over lockdowns and vaccine mandates, and false claims of election fraud has only complicated matters. The result?

“The pandemic has led to growing engagement with extremist material online,” says Salas. 

Given the recent occupation of Ottawa by anti-vaccine, anti-government truckers (fomented by extremist groups online) and the distinct possibility of another Donald Trump run for president in 2024, observers like Salas are concerned that the fight has just begun.

The battle against online extremism

Part of the challenge of combatting online extremism is that researchers, journalists, activists and politicians alike were ill-prepared to respond to the threat.

“We’ve been reacting to everything,” says Kyle Matthews, executive director at Concordia’s Montreal Institute for Genocide and Human Rights Studies. “When you’re reacting to everything, you’re always behind the curve. You have to get ahead and try to bend the curve so that it’s less harmful.”

Kyle Matthews

Several countermeasures have emerged. The United States, for example, has recently dedicated funds ($77 million, in the Department of Homeland Security’s case) to better respond to domestic terrorism, which it has designated a national priority.

Globally, there are initiatives like the Global Network on Extremism and Technology, the European Union Internet Forum and the Global Internet Forum to Counter Terrorism, among others. Support for more academic research on online extremism — specifically on how to prevent and address it — has increased as well.

That’s critical, says Donovan. “Knowledge is expensive. You can’t just make knowledge out of experience or perception. You have to test your assumptions, collect data and analyze it in a way that is objective.”

Adds Matthews: “There’s also a lot of pressure on social-media companies right now to do more to take extremism offline.”

Tech firms have responded (how sufficiently is a contested point) by deplatforming certain content, individuals and groups. Observers point out, however, that this method is effective only in the short term: when a far-right extremist group is removed, it typically pops up elsewhere or is replaced by another.

Since 2020, Facebook has banned more than 11,000 groups and nearly 51,000 individuals associated with extremism. And yet, a 2021 Institute for Strategic Dialogue report found that, year to year, there is often no decrease in the number of Facebook and YouTube channels dedicated to, for example, right-wing extremism.

“It’s a bit like Whac-A-Mole,” admits Salas. “Facebook or Twitter might remove or restrict a group, but then they migrate to the fringes or get replaced by another group.”

Many deplatformed users simply find a new — and more receptive — home on fringe platforms that have more permissive content-moderation policies. The use of encrypted messaging services like Telegram can also provide cover.

The ripple effect is that toxic behaviour increases, as does radicalization, among users who move to under-moderated — and under-monitored — platforms.

Removal and moderation also don’t address a more deeply rooted problem.

“There’s a wider discussion we need to have about algorithms,” says Matthews. “Sometimes users go down a rabbit hole of suggested videos to watch that are full of false information and they become radicalized.”

As a result, adds Matthews, policymakers are more and more focused on how and why certain extremist content lands in people’s feeds. Preventive efforts could include wholesale audits of social-media algorithms. That, however, will require powerful policies to overcome what will likely be fierce resistance from tech executives.

Preventive, not reactive, measures

A noble and oft-proposed pursuit is simply to deradicalize online extremists. This is not easily achieved.

“It’s not like deleting a software on your computer,” says Matthews. “You’re dealing with people who form ideas that become part of their identity.”

As a result, deradicalization is unlikely to be prompted by outsiders. Studies have shown that life changes — new jobs, new relationships, new experiences — are much more likely to instigate transformation.

Deradicalization is also reactive. Many experts think preventive measures offer the greatest potential for change, and one is cited more than any other.

“You really need to focus on digital literacy and education to build resilience in this new ecosystem, especially among young people,” says Salas.

When media literacy is advanced, people become less susceptible to misinformation or disinformation that could send them down a dangerous path.

“We have to start ramping up our education system with digital-literacy skills, responsible citizenship, the ability to think critically about these things,” urges Matthews. “If we don’t, we’re just going to fall further behind.”

Venkatesh says the goal is to teach “people to think about the validity of their sources and then also have people go back to the root articles and the empirical data.”

That can provide an effective shield.

“You’re on guard about what you’re seeing and thinking about instead of just being a passive participant in whatever is happening online,” says Salas. “It’s a long and slow process but it’s important because it will allow people to develop resilience.”

Cataloguing and addressing online extremism can create a sense of despondency. But, as Venkatesh notes, “change won’t happen overnight.”

Now that governments, the tech industry, policy institutes, academia and, indeed, the public are more actively focused on the problem, there is ample reason to be hopeful.

“It’s easy to get discouraged,” admits Salas. “But if you give up, where are you at?”


