Kevin Mandia doesn’t do hyperbole. So when he stood at RSAC 2026 and said the next two years would be “insane,” people listened.

His argument: AI finds vulnerabilities exponentially faster than defenders can remediate them. Exploits are trailing AI bug-finding by six to twelve months. He called it a perfect storm for offense. And then he sharpened it: when open-source models catch up to the leading US labs, elite vulnerability research won’t require a nation-state budget. It’ll need a laptop and curiosity.

Alex Stamos and Morgan Adamski were on the same panel. Different backgrounds, same general conclusion.

The warning didn’t age well for even a week. Vulnerabilities dropped the same week in LangChain and LangGraph, two AI frameworks embedded in production systems across thousands of enterprises right now. The flaw class: file disclosure and path traversal. API keys, credentials, conversation history. Not theoretical. The kind of data attackers are actually after.

This followed Langflow hitting CISA’s Known Exploited Vulnerabilities list the previous week. The pattern is consistent. AI tooling ships fast. Security review doesn’t keep pace.

OpenAI also launched a bug bounty that covers prompt injection, safety bypasses, and jailbreaks, treating model safety failures as a security research problem for the first time. That’s a meaningful shift in posture. The model itself is part of the attack surface, and they’re now officially inviting researchers to prove it.

Three separate things. One coherent signal.

The window Mandia described isn’t coming. It’s open.


What Mandia, Stamos, and the LangChain disclosures are telling us about the AI threat window