Why the Fight Against Online Extremism Keeps Failing

When we read about another shooting linked to online hate, or another violent network spreading across social networks, the common chorus is that "social media platforms must do more."
Indeed, my own research on online extremism and content moderation shows that although content takedowns on major sites have climbed in recent years, extremists still find plenty of digital spaces in which to recruit, organize, and call for violence. So perhaps the question we should be asking is not whether platforms are doing enough in isolation, but whether we are confronting a problem bigger than any single site can manage.
Our approach to fighting online hate and extremism too often focuses on individual platforms (Facebook, X, YouTube, or TikTok) and too little on how fragmented content moderation is across the internet. Historically, when governments scrutinize "Big Tech" and platforms tighten their moderation rules, extremist movements disperse to smaller or alternative platforms. Fewer rules and smaller trust and safety teams mean more opportunities to radicalize a dedicated audience while testing content that can evade removal on larger platforms.
Recently, this has become easier, as some major platforms loosen their content moderation rules under the banner of free expression. Under Elon Musk's ownership, for example, X (formerly Twitter) has sharply reduced its trust and safety teams, reinstated previously banned extremist accounts, and softened its enforcement against hateful content. Likewise, Meta, which owns Facebook and Instagram, ended its third-party fact-checking program and redefined its hate speech policies so that rhetoric once removed is now permitted. And because these platforms offer the widest reach, extremists not only regain access to mainstream audiences but also re-enter the cycle of radicalization, recruitment, and mobilization that smaller platforms struggle to sustain.
My research, based on extensive multi-platform datasets and case studies of actors across the ideological spectrum, shows how extremists build resilience in this uneven landscape. Their strategy is both deliberate and dynamic. They use fringe sites or encrypted messaging apps to post their most incendiary or violent material, bypassing stricter enforcement. Then they craft "toned-down" messages for mainstream platforms: hateful, perhaps, but not hateful enough to trigger mass takedowns. They exploit the resentment of users who feel censored on mainstream social media, turning that grievance into part of their rallying cry. This cycle thrives in the cracks of what I call a system of "inconsistent enforcement": an ecosystem that, inadvertently or not, lets extremists adapt, evade bans, and rebuild across platforms.
But this piecemeal approach also means that extremist movements are never truly dismantled, only temporarily displaced. Instead of weakening these networks, it teaches them to evolve, making future enforcement even harder.
Trying to solve this problem with platform-by-platform crackdowns is like plugging a single hole in a bucket riddled with leaks. As soon as you seal one, water spills through the others. That is why we need a more ecosystem-wide approach. In certain categories, where content is almost universally deemed harmful, such as explicit calls for violence, more consistent moderation across platforms is our best bet.
If platforms coordinate their standards, and not just in vague declarations but in specific enforcement protocols, that consistency begins to deny extremists the chance to "arbitrage" between sites. Analyses of 60 platforms show that where there is real policy convergence, violent groups find fewer safe havens, because they can no longer exploit enforcement gaps to maintain an online presence. When platforms apply similar rules and enforcement practices, extremists have fewer places to regroup and fewer opportunities to hop from one site to another once bans take effect.
Coordinating in this way is not simple; content moderation raises concerns about freedom of expression, censorship, and potential abuse by governments or private companies. But for the narrow slice of content that most of us agree is beyond the pale, such as terrorist propaganda and incitement to violence, shared standards would close many of the gaping holes.
Building robust trust and safety capacity is neither cheap nor simple, especially for small platforms that cannot hire hundreds of moderators and legal experts. Enter a new wave of third-party initiatives designed to do exactly that: Roost, for example, is funded by a coalition of philanthropic foundations and technology companies such as Google, OpenAI, and Roblox. It provides open-source software and shared databases so that platforms, large or small, can better identify and remove known extremist material before it incites real-world harm. Projects like this promise a path toward greater convergence without forcing companies to reinvent moderation from scratch.
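To make concrete what such shared databases enable, here is a minimal, purely illustrative Python sketch of hash-based matching: a platform computes a fingerprint of an uploaded file and checks it against a shared list of fingerprints of known extremist material. The function names, the example hash, and the exact-match approach are assumptions for illustration only, not a description of Roost's or GIFCT's actual tooling, which relies on more sophisticated perceptual matching and human review.

```python
# Illustrative sketch only: exact-hash lookup against a hypothetical shared
# database of known extremist material. Real hash-sharing systems use
# perceptual hashes and richer metadata; this simplification is an assumption.
import hashlib

# Hypothetical shared database: fingerprints contributed by participating platforms.
SHARED_HASH_DATABASE = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a SHA-256 fingerprint of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def is_known_extremist_material(file_bytes: bytes) -> bool:
    """Return True if the upload matches a fingerprint in the shared database."""
    return fingerprint(file_bytes) in SHARED_HASH_DATABASE

# A small platform can run the same check as a large one, because the shared
# database, not an in-house moderation team, carries the accumulated knowledge.
if is_known_extremist_material(b"example upload"):
    print("Flag for removal and human review")
```

The point of the sketch is simply that once fingerprints are pooled, the marginal cost of checking them is low, which is why shared infrastructure can help small platforms that lack large trust and safety teams.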
Of course, some of the biggest barriers remain political. We still lack consensus on where to draw the line between harmful extremist speech and legitimate political expression. The subject has become deeply polarized, with various actors and stakeholders holding sharply contrasting views on what should count as harmful. But extremist violence is not a partisan problem: from synagogue shootings to the livestreamed violence of Christchurch to a string of Islamist-inspired attacks linked to online radicalization, we have already witnessed enough atrocities to know that hate and terror thrive in the fissures between platforms.
Yes, we will keep debating the boundaries of harmful content. But most Americans can likely agree that explicit calls for violence, hate-based harassment, and terrorist propaganda justify swift and serious intervention. This shared ground is where multi-platform initiatives like Roost, or the collaborative databases led by the Global Internet Forum to Counter Terrorism, can make real progress.
Until we address the systemic incentives that allow extremist content to migrate, coordinate, and re-emerge across platforms, we will keep asking ourselves, after each horrific attack: why does this keep happening? The answer is that we have built a fragmented system, one in which each platform fights its own battle while extremists exploit the seams.
It is time to demand not just that "Big Tech do more," but that all online spaces commit to a more unified stance against extremism. Only then can we begin to plug the countless leaks that keep feeding digital hate.