Brunswick, ME • (207) 245-1010 • contact@johnzblack.com
The AI security conversation has mostly been a future-tense argument. What AI malware might look like. What AI assistant vulnerabilities could enable. Interesting to think about. Hard to act on.
This week moved it to present tense.
IBM X-Force found a new malware strain called Slopoly being deployed by the Hive0163 threat group as part of an active Interlock ransomware campaign. The kicker: code analysis points to AI generation as the authorship method.
Researchers found structural patterns, code quality markers, and stylistic characteristics consistent with LLM output. This isn’t guesswork. It’s code forensics.
Why it matters: most endpoint detection tools are tuned to catch known malware patterns. Human-authored malware families accumulate signatures over time. AI-generated malware can produce functionally equivalent code with totally different structural characteristics. Same capability, different fingerprint. Detection tools that haven’t seen the specific variant might miss it entirely.
This is what everyone was worried about. Attackers using LLMs to crank out novel malware variants, producing them at volume, undermining the signature-matching model that most of the traditional detection stack relies on. Slopoly is proof it’s happening now. Not a hypothetical.
Permiso researchers published something equally alarming: Microsoft Copilot, the AI assistant baked into Microsoft 365, can be hijacked through indirect prompt injection delivered via email.
An attacker sends a specially crafted email. The target doesn’t have to click anything. Doesn’t have to interact with it at all. They just have to receive it. When Copilot processes the email, it follows injected instructions hidden in the content. Potentially exfiltrating email content, triggering credential resets, or taking other actions within whatever permissions the AI assistant holds.
Permiso calls it “Co-Pilot Disengage.” Microsoft hasn’t confirmed a patch. The proof-of-concept is documented and published. If your org has Microsoft 365 Copilot deployed, this is a live exposure right now.
Slopoly is AI as an attacker tool. Lowering the cost of novel malware that defenders haven’t learned to recognize.
The Copilot vulnerability is AI as a defender liability. Deploying AI tools creates attack surface that didn’t exist before. The assistant’s access to email, files, and systems is also accessible to anyone who can manipulate what the AI reads.
Both are real right now. For enterprise security: audit what access your AI assistants have. Apply least-privilege to AI tools the same way you would for human users. And if your defenses rely heavily on signature matching, AI-generated variants are an explicit gap in your coverage.