Brunswick, ME • (207) 245-1010 • contact@johnzblack.com
AI-led scanning just found 38 critical flaws in OpenEMR in a single pass. That is months of human research, automated. If you are still relying on a 30-day patch window, your math is officially broken.
A Claude-powered agent deleted an entire production database in 9 seconds. Here's why it happened and what it means for anyone using AI coding tools.
New tools like MCPwned and Sable are giving red teamers (and attackers) the ability to inject prompts, audit MCP handshakes, and evade AI SOCs. The attack surface for AI systems is wide open.
The White House has officially flagged 'adversarial distillation' as a major threat. China is using tens of thousands of fake accounts to clone U.S. AI capabilities by strip-mining model outputs. This is model theft through the front door.
High-end AI is quietly becoming a national security asset for the Pentagon while scammers use the same tech to automate the social engineering cycle for ordinary users.
Unauthorized access to Anthropic's Mythos model via a compromised OAuth app exposes the real security threat in the agentic AI era: third-party integrations that inherit trust they haven't earned.
Central banks are panicking over unreleased AI models while hackers are already using them to backdoor Hugging Face models and pull off $100k crypto heists. The weaponized AI era is officially here.
Attackers deepfaked a CFO on a live Zoom call and walked away with $25.6M. Detection tools get it wrong half the time. Here's what actually works.
OpenAI and Anthropic have shipped purpose-built cybersecurity AI with reduced safety restrictions. The era of licensed digital weapons isn't coming. It arrived.
A threat actor used Claude Code and GPT-4.1 to automate a government-scale data breach in Mexico, exfiltrating 415 million records through 5,317 AI-generated commands. This is the first documented case of AI coding tools used as a nation-state espionage engine.
Anthropic launched Project Glasswing. Stanford showed AI agents solve security problems 93% of the time. A separate analysis of 216 million findings showed critical risk is up 400%. And 67% of CISOs can't see where AI is running in their own environments. All today.
Treasury Secretary Bessent and Fed Chair Powell held an emergency summit with bank CEOs over Anthropic's Mythos AI. Then major banks quietly got private access to it through Project Glasswing. The government's response is the story.
IBM's chief commercial officer argues AI at infrastructure scale must be open and inspectable. With the EU AI Act going into full enforcement in August and Anthropic's Mythos still behind a private access program, this governance debate has a hard date.
CVE-2026-34197 sat undetected in Apache ActiveMQ for 13 years. Claude found it in 10 minutes by tracing a cross-subsystem exploit chain no human auditor had connected.
Flowise has a perfect 10.0 CVSS under active exploitation. GrafanaGhost injects prompts through metric names. The attack surface isn't the AI model. It's everything around it.
Researchers find 63 MCP servers with hidden Unicode characters in tool descriptions, and GPT-5.4 follows the invisible instructions with 100% compliance.
Microsoft telemetry shows AI-assisted phishing lures hit a 54% click-through rate versus 12% for traditional campaigns, a 4.5x jump that breaks conventional security awareness training.
Claude Code's deny rules silently break after 50 subcommands and Bedrock's guardrails don't cover multi-agent flows by default, proving that AI safety tools work in demos but fail in production.
Threat actors turned Anthropic's leaked source into a Vidar infostealer campaign within 24 hours. Then Anthropic's DMCA response nuked 8,100 innocent repos.
A researcher used Claude to find file-open RCEs in both Vim and Emacs. Vim patched immediately. Emacs says it's Git's problem. Meanwhile, leaked details of Anthropic's 'Mythos' model suggest AI offensive capabilities are approaching nation-state level.
The supply-chain group that poisoned Trivy last week just hit LiteLLM and the Telnyx SDK, hid their payload in WAV audio files, and announced a ransomware affiliate partnership.
CrowdStrike, Wiz, Proofpoint, Arctic Wolf, and GreyNoise all launched agentic AI products at RSAC 2026 -- here's an honest scorecard of what's shipping versus what's still a roadmap.
The UK's NCSC called AI-generated code an 'intolerable risk,' researchers found all seven major MCP clients vulnerable to attack, and 35 CVEs in March alone traced directly back to AI-written code.
AI now solves every major CAPTCHA type faster and more reliably than humans, commercial solving services sell API access for fractions of a cent, and the two-decade era of 'click the fire hydrant' is over.
Kevin Mandia called the next two years a 'perfect storm for offense' at RSAC 2026, and the evidence landed the same week.
A CVSS 10.0 flaw in Langflow was exploited within 20 hours. The Claude Chrome extension let any website hijack your AI assistant. And a state-sponsored actor used autonomous AI to run 80-90% of a cyber espionage campaign. Three stories, one picture.
Team Mirai won 11 seats in Japan's House of Representatives using AI for constituent engagement at scale. Bruce Schneier calls it a reason for optimism. The harder question is what happens when less idealistic actors use the same playbook.
NCSC CEO Dr. Richard Horne told RSAC 2026 that vibe coding is moving fast enough to reshape the SaaS industry, and the security community has a narrow window to shape how it lands instead of cleaning up after it.
Michael Smith pleaded guilty to generating hundreds of thousands of AI songs and faking $8 million in streaming royalties via bot accounts -- the first major criminal case for AI content fraud, and almost certainly not the last.
A Meta AI agent followed its instructions and caused a major internal data leak. Combined with the new OWASP MCP Top 10, this is the clearest real-world picture yet of what agentic AI security failures actually look like.
Interpol says AI-powered criminals are 4.5x more profitable. iProov says consumers can no longer trust what they see online. These aren't two separate problems -- they're the same story told from opposite ends.
Court depositions describe DOGE staffers using ChatGPT to flag humanities grants as DEI and terminate them -- no domain experts, no review, just a chatbot and a spreadsheet deciding $100 million in funding.
A North Carolina musician pleaded guilty to collecting millions in fraudulent streaming royalties using AI-generated music and bot accounts. The scam worked for years. That's the part worth understanding.
Rapid exploitation plus cross-platform AI exposure means next-sprint patching is no longer a safe operating model.
Enterprise AI security now requires two disciplines at once: policy-level governance for agents and hard application security work in the toolchain beneath them.
Unit 42 on agent risk, Cloudflare on data-locality controls, and the ICML enforcement controversy all point to the same thing: governance only counts when it's technically enforceable and organizationally defended.
AI agents aren't chatbots. They act, execute, and chain decisions on their own. And the security model for most deployments? Basically nonexistent.
The EU Council wants to ban AI nudification tools outright, not regulate them. Criminal-tier penalties, extraterritorial reach, and a standard that global platforms can't ignore.
Slopoly is AI-generated malware used in a live ransomware attack. Microsoft Copilot can be hijacked through emails you just receive. AI security isn't future-tense anymore.
MCP protocol flaws, a 38-researcher red team exercise, and LLM-powered deanonymization all landed the same week. AI agent security isn't a future problem. It's a right-now problem.