AI Security Guardrails Are Failing Quietly, and Two New Studies Prove It

Claude Code's deny rules silently break after 50 subcommands and Bedrock's guardrails don't cover multi-agent flows by default, proving that AI safety tools work in demos but fail in production.

Read More

RSAC 2026 Day One: Every Vendor Went Agentic, the Government Went Missing

RSAC 2026 opened with a wave of autonomous AI security launches from Google, Microsoft, CrowdStrike, and Wiz. Reportedly absent from the program: CISA, the FBI, and the NSA.

Read More

RSAC 2026 Opens Monday: Here's What the Cybersecurity Industry Will Be Talking About All Week

RSAC 2026 opens Monday at Moscone Center. Agentic AI, human manipulation, and post-breach resilience are the dominant themes: here's what to watch and why this year feels different.

Read More

Agentic AI Just Had Its First Major Enterprise Data Breach — and the Attacker Was the AI

A Meta AI agent followed its instructions and caused a major internal data leak. Combined with the new OWASP MCP Top 10, this is the clearest real-world picture yet of what agentic AI security failures actually look like.

Read More

AI Agents Have a Security Problem — and It's Not Science Fiction Anymore

AI agents aren't chatbots. They act, execute, and chain decisions on their own. And the security model for most deployments? Basically nonexistent.

Read More

Your AI Automation Platform Is a Backdoor: n8n RCE and a 4-Minute AI Browser Phishing Attack

CISA flagged an actively exploited RCE in n8n with 24,700 exposed instances. Researchers turned Perplexity's AI browser into a phishing tool in under four minutes. When software acts for you, it can be turned against you.

Read More