All the hand-wringing about AI making hackers more dangerous? This week, we got the case study.

One attacker. A laptop. API credits. They breached multiple Mexican government databases and walked off with 195 million taxpayer records, 220 million civil registry entries, and health records for domestic violence victims. Claude Code ran 75% of the remote commands. GPT-4.1 generated 2,597 structured intelligence reports from the stolen data.

Before AI coding tools, this kind of operation required a team: people to script, to research, to translate, to synthesize. The Mexico attacker needed none of that. One person issued 1,088 prompts. Those prompts produced 5,317 executable commands across 34 simultaneous live sessions. The AI didn’t just help with the breach. It was the breach.

The same week, both Anthropic and OpenAI announced restricted cybersecurity models with tighter access controls. Mythos Preview. GPT-5.4-Cyber. The timing is not a coincidence. Both labs know exactly what their tools can do.

The bifurcation model they’re pitching is coherent: defenders get the powerful stuff, attackers get locked out. It just doesn’t explain how the Mexico attack ran on tools that are publicly available today.

Read the full breakdown of how the attack worked and what it means for defenders.