Brunswick, ME • (207) 245-1010 • contact@johnzblack.com
This week gave us three separate AI security stories. They’re not separate. They’re the same story, told three ways.
Langflow is the open-source framework sitting at the center of a lot of enterprise AI pipelines. It connects LLMs, data sources, and automated processes. A lot of organizations that moved fast on AI are running it somewhere.
On March 17, Langflow patched CVE-2026-33017, CVSS score 10.0. Unauthenticated remote code execution via an authentication bypass and code injection through exec(). No credentials needed.
Within 20 hours, attackers were already exploiting it. CISA added it to the KEV list. Federal agencies now have a mandatory remediation deadline.
If you’re running Langflow and haven’t patched, you’ve been exposed for ten days while attackers were moving. Twenty hours from disclosure to active exploitation isn’t an anomaly anymore. It’s the pattern.
Researcher Oren Yomtov at Koi Security found that any website could silently inject prompts into the Claude Chrome extension without any user interaction. Load a malicious page, and the page speaks to Claude on your behalf. Claude does what it’s told.
Anthropic patched it. But the structural problem is bigger than one extension. Every AI assistant embedded in a browser is sitting in the middle of untrusted web content by definition. That’s a hard problem, and Claude’s extension isn’t the only one facing it.
People install AI browser extensions without thinking of them as security decisions. That framing needs to change.
Last September, Anthropic disclosed that a state-sponsored threat actor used an AI coding agent to autonomously run a cyber espionage campaign against 30 targets worldwide. The AI handled 80 to 90 percent of the operation. The humans were supervising, not operating.
The kill chain was built around humans doing the work. Humans get tired. They make mistakes. They move at human speed.
An autonomous attacker doesn’t. It runs reconnaissance across dozens of targets simultaneously, adapts mid-operation, doesn’t wait for a handler. The detection windows defenders assume are built on attacker behavior that a largely autonomous system doesn’t have.
None of this is a fluke. AI systems are being deployed faster than security frameworks are adapting. Langflow runs in enterprise environments that haven’t classified it as critical infrastructure. The Claude extension existed in browsers with no way to constrain what websites could say to the AI. The autonomous espionage campaign is what happens when the attack side of AI adoption gets far enough ahead of defense.
Patch Langflow now. Get AI browser extensions into your next security policy review. And if you’re building detection strategy, the autonomous attacker scenario isn’t hypothetical anymore.