When the head of a national cyber agency tells security professionals to get ahead of something instead of reacting to it, pay attention. That doesn’t happen often.

Dr. Richard Horne, CEO of the UK’s NCSC, took the RSAC 2026 stage and made that argument about vibe coding. Not a warning. A challenge. The security community has “both the opportunity and responsibility to help shape that future,” he said, and the window is right now.

Vibe coding is writing software through natural language prompts instead of code. You describe what you want; the AI writes it. Cursor and GitHub Copilot Agent are the main tools. It's fast, it's accessible to non-developers, and it ships working software. That's why it's spreading.

The security problem is real: code that “just works” isn’t the same as code that’s secure. Traditional development builds a mental model. Developers understand what they wrote. Reviewers can follow the logic. That shared understanding is where security intuition lives. Vibe coding skips it. The AI generates hundreds of lines optimized for “does it run,” not “is it safe.” Nobody fully owns the code.
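A classic illustration of the "does it run" versus "is it safe" gap (a hypothetical sketch, not an example from the NCSC talk): two versions of a user lookup that behave identically on normal input, where the obvious one-liner an AI might generate is vulnerable to SQL injection.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # "Just works": returns the right row for normal input,
    # but builds SQL by string interpolation -- an injection hole.
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Same behavior for normal input, but the value is bound as a
    # parameter, so hostile text can't change the query structure.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Both pass the obvious functional test...
assert find_user_unsafe(conn, "alice") == find_user_safe(conn, "alice")

# ...but only one survives hostile input.
payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no match
```

Both functions clear a "does it run" check; only a reviewer who understands the query construction catches the difference. That mental model is exactly what vibe coding skips.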

The NCSC’s position is that AI-generated code currently poses “intolerable risks for many organisations” but shows “glimpses of a new paradigm.” Horne’s ask: engage with AI tool vendors early, at the design and training stage. Push for security-aware model objectives. Don’t just audit the output; shape what the tools are optimizing for.

The security community is used to arriving after the fact. Horne is saying that’s a choice, not a requirement.
The full argument, including the NCSC’s companion blog on AI replacing SaaS and what “shaping the paradigm” actually requires, is in the complete post.