You can build the most secure AI model in history, but none of that matters if the front door is left unlocked. That is exactly what happened with Anthropic’s new Mythos model this week.

Attackers did not hack Anthropic directly; they went through the side door. A vendor called Context.ai had a legacy office app with over-privileged access to the Mythos API. When that vendor got compromised, the keys to the kingdom went with it.

This goes a step beyond stealing data: attackers were actually using the model. Developers have been reporting ghost executions, where tasks run on their own and API bills skyrocket for no apparent reason.
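Ghost executions show up in your own usage data long before the bill arrives. A minimal sketch of catching them, assuming you export daily API call counts (the log format, threshold, and function name here are illustrative, not Anthropic's actual billing schema):

```python
# Hypothetical sketch: flag "ghost execution" days by comparing each day's
# API call volume against a rolling baseline of prior days.
from statistics import mean, stdev

def flag_anomalous_days(daily_calls: dict[str, int], window: int = 7, z: float = 3.0) -> list[str]:
    """Return dates whose call count exceeds the rolling mean by z standard deviations."""
    dates = sorted(daily_calls)
    flagged = []
    for i, date in enumerate(dates):
        baseline = [daily_calls[d] for d in dates[max(0, i - window):i]]
        if len(baseline) < 3:
            continue  # not enough history for a stable baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero on perfectly flat usage
        if (daily_calls[date] - mu) / sigma > z:
            flagged.append(date)
    return flagged

usage = {f"2025-06-{day:02d}": 100 for day in range(1, 10)}
usage["2025-06-10"] = 5000  # sudden spike: a candidate ghost execution
print(flag_anomalous_days(usage))  # → ['2025-06-10']
```

A z-score over a short rolling window is deliberately crude, but it is enough to page a human when a quiet token suddenly starts burning compute.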

The lesson here is simple: your AI stack is a chain of trust. If you authorize a third-party tool to touch your models, you are extending your perimeter to include that vendor. It is time to audit your tokens and start asking your vendors some very uncomfortable questions.
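What does a token audit actually look like? A minimal sketch, assuming you can dump your vendor tokens into a local inventory with scopes and last-used dates (the record fields, scope names, and vendor names below are assumptions for illustration, not any provider's real schema):

```python
# Hypothetical sketch: scan a token inventory for over-privileged scopes
# and stale tokens. Adapt the fields to what your provider actually exposes.
from datetime import date, timedelta

RISKY_SCOPES = {"admin", "model:execute", "billing:write"}  # assumed scope names
STALE_AFTER = timedelta(days=90)

def audit_tokens(tokens: list[dict], today: date) -> list[str]:
    """Return human-readable findings for tokens that warrant review."""
    findings = []
    for t in tokens:
        risky = RISKY_SCOPES & set(t["scopes"])
        if risky:
            findings.append(f"{t['vendor']}: over-privileged scopes {sorted(risky)}")
        if today - t["last_used"] > STALE_AFTER:
            findings.append(f"{t['vendor']}: unused for 90+ days, consider revoking")
    return findings

# Illustrative inventory: one over-privileged stale vendor, one healthy one.
inventory = [
    {"vendor": "Context.ai", "scopes": ["model:execute", "read"], "last_used": date(2025, 1, 5)},
    {"vendor": "analytics-svc", "scopes": ["read"], "last_used": date(2025, 6, 1)},
]
for finding in audit_tokens(inventory, today=date(2025, 6, 15)):
    print(finding)
```

The point is not the script itself but the two questions it encodes: does this token hold a scope its owner never uses, and has it gone quiet long enough that nobody would notice it being abused?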


Check out the full breakdown of how the Mythos breach happened and what it means for your AI strategy.