There’s a phrase I’ve found myself repeating more often lately: everything is a policy decision.
What gets removed from a platform (and what doesn’t). Whether an AI chatbot is tested for Holocaust denial before its launch (or not). Which communities are protected (and which are expected to fend for themselves).
These are not technical accidents or algorithmic glitches. They are the result of choices—by platforms, by governments, and by us. And in 2025, those choices have fueled an online environment where antisemitism and hate aren’t just spreading. They’re mutating, scaling, and colliding with AI in ways we’ve never seen before.
In 2025, the link between digital hate and real-world violence has become impossible to ignore. After the antisemitic terrorist attacks in Sydney, praise for the murders, calls for more killings, and celebration of violence flooded platforms like Facebook and X. This wasn’t a glitch in the system — it was the system.
The convergence of unchecked hate and AI amplification is now the defining challenge for the year ahead.
As we look toward 2026, here are the policy issues that will define what comes next.
1. AI Is Now a Force Multiplier for Hate
Generative AI has drastically lowered the barrier to creating antisemitic and extremist content.
In 2025, AI-generated Holocaust imagery began circulating, depicting fictional scenes like prisoners playing violins in concentration camps. These images went viral, causing trauma for survivors and outrage from historians, yet remained live for hours or days.
Grok, an AI chatbot, began calling itself “MechaHitler,” praising Nazi ideology, and pushing conspiracy theories about Jews. The model had been tuned with minimal oversight, and when prompted, it did exactly what its data suggested: it echoed hate.
And it got personal. In Maryland, a school principal was targeted with a deepfake audio clip in which his voice was faked to utter racist and antisemitic slurs. The clip went viral. He received death threats. The person responsible is now in jail. But no one could undo the damage.
This is not the future of hate. This is the present. And whether platforms respond with action or excuses is, again, a policy choice.
2. Platforms Are Abandoning Automated Moderation When We Need It Most
In January 2025, several tech companies quietly announced they were scaling back AI-based moderation and automated fact-checking. In their place, we got “community-based systems”—like X’s Community Notes and TikTok’s Footnotes—that rely on unpaid, crowdsourced annotation.
The pitch was that these were “democratic,” empowering users to self-moderate. In reality, they privilege majority narratives, marginalize minority experiences, and move far too slowly to stop viral hate. By the time a note is added, the damage is already done.
This pullback is especially dangerous when paired with the explosion of AI-generated content. By the end of 2026, we may no longer be able to tell what is AI-made and what is not.
Platforms can and must do better. AI can be part of the solution — not just the problem. At the World Jewish Congress Institute for Technology and Human Rights, we tested AI systems against human moderators. The results were striking: in several key areas, including Holocaust denial cloaked in humor or irony, AI outperformed humans. Large language models trained on historical datasets picked up patterns most people missed.
And yet, platforms continue to shift the burden onto users. Detection is now a community job. Safety is now a user responsibility.
This is not a technology failure. It is a governance decision, one that outsources risk and cost from companies to vulnerable communities. I hope that in 2026, companies will take a more critical look at these systems and put additional guardrails in place.
3. Extremist Networks Are Adapting Faster Than Moderation Can
Last year I spoke of the beginning of a shift from moderating individual posts to identifying behavioral patterns: how groups form, radicalize, and organize. That trend has continued, but most platforms still struggle to act on the insights.
Meanwhile, extremists thrive in low-regulation spaces like Telegram, Mastodon, and emerging decentralized platforms. They build communities, test narratives, and then re-enter mainstream platforms with polished, platform-compliant hate.
With the scaling back of automated content moderation in 2025, outright terrorist content has become more widespread: circulation of ISIS materials, calls for terrorist attacks, and praise for designated dangerous organizations and individuals.
Once again, the policy question is clear: Will platforms invest in detection and enforcement before the harm spreads? Or will they continue to act only once the headlines hit?
4. The Global Fragmentation of Content Governance Is a Gift to Extremists
We now live in a content regulation patchwork.
Governance remains deeply uneven. In the U.S., content moderation continues to erode in the name of free speech. In Europe, the Digital Services Act is in full force, with fines and accountability mechanisms, though political winds are now shifting toward revising and loosening that regulatory framework. Jewish users now experience vastly different levels of protection depending on where they live — or which platform they use.
Decentralized platforms — where no single company governs the space — make this even harder. Who enforces rules on a server run by volunteers? Who is responsible when hate crosses digital borders?
From a human rights perspective, this fragmentation is a serious challenge. The same antisemitic content flagged in France might remain up in Florida. The result is unequal protection: whether a Jewish user is safe online depends not on the severity of the content, but on the jurisdiction they're in and the platform they use.
That’s not inevitable. It’s a policy choice.
5. The Core Question for 2026: Who Is Responsible for Safety Online?
Right now, platforms are shifting the burden onto users. But moderation is not a community volunteer project. It is a system that requires infrastructure, training, investment, and accountability.
So here’s what 2026 must deliver:
- AI systems that are transparent, auditable, and accountable
- Model training processes that include Jewish communities and other minorities
- Shared responsibility between platforms, civil society, and regulators—not just “user tools” that paper over the problem
Everything Is a Policy Decision
Whether hate spreads or is stopped. Whether deepfakes go viral or are removed. Whether AI is weaponized—or used to protect.
These outcomes are not accidents. They are the result of conscious choices made by people in power—at companies, in parliaments, and on product teams.
Everything is a policy decision. And in 2026, we can no longer afford to pretend otherwise.
At WJC TecHRI, we believe that the only way forward is shared responsibility. That means putting civil society in the room where decisions are made, embedding human rights into the foundations of emerging tech, and using AI not as an excuse to scale harm, but as a tool to prevent it.