UK NCSC Chief Calls for Secure Vibe Coding at RSAC

Summary
– The head of the UK’s NCSC advocates using AI-assisted software development (vibe coding) to reduce collective vulnerability to cyber-attacks.
– He states that safeguards for AI code-generation tools must be developed as rapidly as the tools themselves for the technology to be a net positive for security.
– The NCSC’s CTO published core security principles, or “commandments,” for securing vibe coding, such as integrating secure-by-default practices.
– These principles include a “trust but verify” approach for model provenance and using AI to perform automated code reviews and audits.
– The CTO emphasized the need to implement these guardrails now and highlighted AI’s potential to help secure legacy applications and automate security tasks.

Speaking at the RSA Conference in San Francisco, the head of the UK’s National Cyber Security Centre outlined a critical challenge and opportunity for the industry. Richard Horne argued that the widespread adoption of AI-assisted software development, often called vibe coding, must be harnessed to fundamentally improve software security. He cautioned, however, that this potential can only be realized if the rapid development of code-generation tools is matched by equally swift progress in building vibe coding safeguards. Without these controls, the technology could inadvertently amplify risks instead of reducing them.
Horne urged security professionals to actively “seize the disruptive vibe coding opportunity.” He acknowledged the clear attraction of tools that can disrupt a status quo of manually produced, consistently vulnerable software. The goal, he explained, is to move toward a future where well-trained AI tooling writes secure-by-design code, transforming cybersecurity outcomes. “The AI tools we use to develop code must be designed and trained from the outset so that they do not introduce or propagate unintended vulnerabilities,” Horne stated, framing this as a prerequisite for the technology to become a net positive for security.
In a related blog post published the same day, the NCSC’s Chief Technology Officer for architecture, David C, expanded on this vision. He noted that while AI-generated code currently carries intolerable risks for many organizations, it offers glimpses of a new paradigm. This approach allows experienced developers to massively increase their productivity, and the compelling business benefits will inevitably drive adoption higher. The CTO’s central argument is that security teams must engage with the associated risks immediately, embedding core security principles to make resulting software less vulnerable.
He proposed a set of foundational commandments for securing this new development model. First, secure-by-default coding practices must be integrated directly into the AI models themselves, ensuring they generate safe, hardened code from the start. Organizations should adopt a ‘trust but verify’ approach, demanding provable model provenance to guard against malicious backdoors. The review process itself should be augmented by AI-powered code reviews that audit all code, whether human-written or machine-generated, scanning for vulnerabilities.
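The audit-everything principle can be illustrated with a minimal, deterministic review pass over source code; the deny-list and the `audit_source` helper below are illustrative assumptions for this sketch, not an NCSC-published tool:

```python
# Minimal sketch of an automated code audit of the kind a review
# pipeline might run over every change, human-written or AI-generated.
# The rule set is an assumption: here we only flag dynamic-evaluation
# calls, a common source of injection vulnerabilities.
import ast

RISKY_CALLS = {"eval", "exec", "compile"}  # assumed deny-list for the sketch

def audit_source(source: str) -> list[str]:
    """Return findings for obviously dangerous call patterns in `source`."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare names (eval) and attribute calls (mod.eval).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}() needs review")
    return findings

# Example: flag a generated snippet that evaluates untrusted input.
print(audit_source("result = eval(user_input)\n"))
```

A real pipeline would combine many such deterministic rules with model-driven review; the point of the sketch is that the gate runs identically regardless of who, or what, wrote the code.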
Further safeguards include implementing deterministic guardrails, which are strict, rule-based controls to limit what any piece of code can do, even if it becomes compromised. Secure hosting platforms must be built to sandbox and protect against bad code regardless of its origin. Finally, organizations should automate security hygiene, allowing AI to handle documentation, testing, fuzzing, and threat modeling for every software component.
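A deterministic guardrail of the kind described above can be sketched as a fixed, rule-based policy enforced outside the code it constrains; the `Guardrail` class and its policy format are assumptions for illustration:

```python
# Sketch of a "deterministic guardrail": a strict allow-list policy
# applied from outside, so even compromised or AI-generated logic
# cannot exceed it. The action names and class are illustrative.
class GuardrailViolation(Exception):
    pass

class Guardrail:
    """Permit only actions named in a fixed policy; deny everything else."""
    def __init__(self, allowed_actions: frozenset):
        self.allowed = allowed_actions

    def check(self, action: str) -> None:
        if action not in self.allowed:
            raise GuardrailViolation(f"action {action!r} denied by policy")

# Policy for a component that may only read config and write logs.
policy = Guardrail(frozenset({"read_config", "write_log"}))
policy.check("write_log")           # permitted by the policy
try:
    policy.check("open_socket")     # denied, regardless of who wrote the caller
except GuardrailViolation as exc:
    print(exc)
```

Because the policy is a static rule set rather than a judgment made by the code itself, its behavior is the same whether the calling code is trustworthy, buggy, or actively malicious.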
The NCSC’s CTO stressed the urgency of acting on these principles now, “without waiting five years for the vibe future.” He provided concrete examples, such as using AI to harden the hosting or code of a legacy, even end-of-life, critical application. This would pay down significant technical and security debt. AI could also assist with securing coding practices at every scale, from maintaining an application’s allow-list of permitted URLs to rewriting critical components in a memory-safe language or a more secure framework.
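The allow-list example can be made concrete with a small check of outbound URLs against a list of permitted hosts, the sort of artifact the article suggests AI could help maintain; the `is_permitted` helper and host list are illustrative assumptions:

```python
# Sketch of an application's URL allow-list: only exact, pre-approved
# hosts over HTTPS are permitted. The host list is an assumption.
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"api.example.com", "cdn.example.com"}  # assumed allow-list

def is_permitted(url: str) -> bool:
    """Permit only https URLs whose exact host appears on the allow-list."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

print(is_permitted("https://api.example.com/v1/data"))   # True
print(is_permitted("http://api.example.com/v1/data"))    # False: not https
print(is_permitted("https://evil.example.net/steal"))    # False: host not listed
```

Keeping such a list current as an application evolves is exactly the kind of repetitive security hygiene the CTO suggests delegating to AI tooling, with the deterministic check itself remaining rule-based.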
Looking ahead, he envisaged a possible future where AI code ends up far more restricted and locked down by default than the best on-premises or SaaS product. “Ironically, it may even present a solution to organizations still worried about the old concerns with cloud services, who have avoided migrating in all these years,” he added, suggesting that properly governed AI development could offer a new path to robust, secure software infrastructure.
(Source: Infosecurity Magazine)