Several countermeasures have emerged. The United States, for example, recently dedicated funds ($77 million, in the Department of Homeland Security’s case) to better respond to domestic terrorism, which the government has designated a national priority.
Globally, there are initiatives like the Global Network on Extremism and Technology, the European Union Internet Forum and the Global Internet Forum to Counter Terrorism, among others. Support for more academic research on online extremism — specifically on how to prevent and address it — has increased as well.
That’s critical, says Donovan. “Knowledge is expensive. You can’t just make knowledge out of experience or perception. You have to test your assumptions, collect data and analyze it in a way that is objective.”
Adds Matthews: “There’s also a lot of pressure on social-media companies right now to do more to take extremism offline.”
Tech firms have responded (how sufficiently is a contested point) by deplatforming certain content, individuals and groups. This method, however, is only effective in the short term, observers point out. When a far-right extremist group is removed, it typically pops up elsewhere or gets replaced by another one.
Since 2020, Facebook has banned more than 11,000 groups and nearly 51,000 individuals associated with extremism. And yet, the 2021 Institute for Strategic Dialogue report found that year-to-year there is often no decrease in the number of Facebook and YouTube channels dedicated to, for example, right-wing extremism.
“It’s a bit like Whack-a-Mole,” admits Salas. “Facebook or Twitter might remove or restrict a group, but then they migrate to the fringes or get replaced by another group.”
Many deplatformed users simply find a new — and more receptive — home on fringe platforms that have more permissive content-moderation policies. The use of encrypted messaging services like Telegram can also provide cover.
The ripple effect is that toxic behaviour increases, as does radicalization, among users who move to under-moderated — and under-monitored — platforms.
Removal and moderation also don’t address a more deeply rooted problem.
“There’s a wider discussion we need to have about algorithms,” says Matthews. “Sometimes users go down a rabbit hole of suggested videos to watch that are full of false information and they become radicalized.”
As a result, adds Matthews, policymakers are more and more focused on how and why certain extremist content lands in people’s feeds. Preventive efforts could include wholesale audits of social-media algorithms. That, however, will require powerful policies to overcome what will likely be fierce resistance from tech executives.
Preventive, not reactive, measures
A noble pursuit that is often proposed is to simply deradicalize online extremists. This is not easily achieved.
“It’s not like deleting a piece of software on your computer,” says Matthews. “You’re dealing with people who form ideas that become part of their identity.”
As a result, deradicalization is unlikely to be prompted by outsiders. Studies have shown that life changes — new jobs, new relationships, new experiences — are much more likely to instigate transformation.
Deradicalization is also reactive. Many experts think preventive measures offer the greatest potential for change — with one cited more often than any other.
“You really need to focus on digital literacy and education to build resilience in this new ecosystem, especially among young people,” says Salas.
As media literacy improves, people become less susceptible to the misinformation and disinformation that can send them down a dangerous path.
“We have to start ramping up our education system with digital-literacy skills, responsible citizenship, the ability to think critically about these things,” urges Matthews. “If we don’t, we’re just going to fall further behind.”
Venkatesh says the goal is to teach “people to think about the validity of their sources and then also have people go back to the root articles and the empirical data.”
That can provide an effective shield.
“You’re on guard about what you’re seeing and thinking about instead of just being a passive participant in whatever is happening online,” says Salas. “It’s a long and slow process but it’s important because it will allow people to develop resilience.”
Cataloguing and addressing online extremism can create a sense of despondency. But, as Venkatesh notes, “change won’t happen overnight.”
Now that governments, the tech industry, policy institutes, academia and, indeed, the public are more actively focused on the problem, there is ample reason to be hopeful.
“It’s easy to get discouraged,” admits Salas. “But if you give up, where are you at?”