Chatbots talk. Agents act. That’s not a marketing distinction. It’s a security one.

A chatbot gives you bad info? Annoying. An agent gets prompt-injected? It can exfiltrate your files, call APIs you didn’t authorize, and modify data with the permissions you handed it on a silver platter. Every document it reads, every web page it visits, every API response it processes is a potential attack vector.

OWASP put prompt injection at the top of their LLM Top 10 for exactly this reason. And the gap between “theoretical risk” and “stuff that’s happening in production” is shrinking fast.

This week, China’s CNCERT flagged security weaknesses in the OpenClaw AI agent platform. Weak defaults, prompt injection exposure, potential data exfiltration. The technical findings aren’t surprising. Secure defaults for agents are genuinely hard to define when the whole point is broad, flexible permissions. But CNCERT is a Chinese government entity, so treat the specifics with appropriate skepticism and verify independently.

The category of risk they’re describing, though? That’s real regardless of the source.

If you’re deploying agents, here’s what actually matters. Minimize permissions. An agent that reads files in one directory doesn’t need write access everywhere. Treat every external input as hostile. Log everything the agent does, immutably, so you can trace what went wrong. And before you go live, map out the worst case. What happens if this agent gets fully compromised? If you can’t stomach the answer, fix the architecture.

The enterprises treating agent deployment as a productivity decision instead of a security architecture decision are going to learn the difference the hard way. The attack surface is real. It’s growing. And the defenses haven’t caught up yet.


Read the full post on gNerdSEC