Brunswick, ME • (207) 245-1010 • contact@johnzblack.com
One week after Anthropic’s Mythos model sent Treasury officials and bank CEOs into the same room, IBM’s chief commercial officer published a piece arguing that AI at infrastructure scale can’t stay gated and opaque. The argument isn’t new. What changed is that it now has a deadline.
The EU AI Act goes into full enforcement on August 2, 2026. Four months from now.
Rob Thomas’s piece is structured as a historical pattern argument. Every technology that transitions from product to foundational infrastructure follows the same arc. When other systems start depending on it, the rules change. His specific claim on security: “At infrastructure scale, security improves more often through scrutiny than through concealment. That is the enduring lesson of open source software.”
The countermodel he’s implicitly targeting is Project Glasswing. Roughly 50 organizations, selected by Anthropic, got early access to a model the U.S. government described as capable of exploiting vulnerabilities across every major operating system and browser. No external oversight structure. No audit mechanism. No public criteria for who gets in.
You can make a reasonable case for controlled early access. Letting defenders study the capability before adversaries get equivalent reach is not obviously wrong. But “one company decides who’s ready” is a strange governance structure for something that just triggered an emergency meeting at the U.S. Treasury.
IBM isn’t a neutral party here. watsonx runs on open-weight models. An industry norm favoring open and inspectable AI would benefit IBM commercially. Thomas doesn’t hide this, but the piece doesn’t dwell on it either. That’s worth being explicit about.
The counterargument worth taking seriously: a model that can autonomously generate working exploits may have some defensive value precisely because adversaries can't fully study it. Thomas's response would be that security through obscurity has a poor track record. TLS is the standard example: its specifications have always been public, and open review made it stronger, not weaker.
Whether that lesson transfers to a model that can find and exploit vulnerabilities autonomously is a more open question than his piece acknowledges. But the EU AI Act doesn't need to resolve the philosophical debate. It goes into effect in August. High-risk AI systems, including AI used in critical infrastructure, will face mandatory transparency and auditability requirements. A model sitting behind a private access program with no external audit mechanism is going to face hard questions under that framework.
IBM just got FedRAMP authorization for 11 watsonx tools. The timing isn’t accidental.