Three stories. Three countries. Three different harms. The same conclusion in each.

AI Forensics spent six weeks analyzing 2.8 million messages across sixteen Telegram communities. What they found was a functioning commercial market for intimate partner surveillance: remote phone access, account compromise tools, location tracking sold by the session. Pricing listed. Reviews available. All in Italy and Spain, where EU law applies, and where Telegram operates under lighter obligations because it doesn’t hit the 45-million-user threshold that would trigger stricter moderation requirements.

Meanwhile, Amazon filed 1.1 million CSAM reports to NCMEC in 2025. Zero contained location information. Zero contained suspect information. More than a million reports that law enforcement cannot act on. Technically compliant. Investigatively worthless.

And UK regulator Ofcom warned platform executives this week that AI nudification tools create personal criminal liability under the Online Safety Act, not just fines. The immediate context: Grok generated millions of non-consensual intimate images before xAI took action.

The word that connects all three is “eventually.” Telegram removes content eventually. Amazon’s reports become actionable eventually, if at all. Platforms act on nudification eventually. The evidence base for the gap between claimed and actual moderation is now too extensive and too consistent to keep treating as unintentional.


The full picture: what all three cases show about where platform accountability is actually heading.