Vibe Coding: The Dark Side of the New Open Source

▼ Summary
– To save time and reduce security risks, developers commonly build on existing libraries and open source components rather than writing all code from scratch.
– The rise of “vibe coding” with AI allows for rapid code generation but introduces new security complications and dangers in the software supply chain.
– AI-generated code can be insecure because it may be trained on old, vulnerable, or low-quality software, potentially reintroducing known vulnerabilities and creating new issues.
– Vibe coding produces rough drafts that require human reviewers to spot flaws, and even models trained on specific source code can generate inconsistent outputs, adding complexity.
– A Checkmarx survey found that over a third of organizations generate more than 60% of their code with AI, but few have approved tools, and AI development complicates code ownership tracing.
Vibe coding represents a new frontier in software development, allowing programmers to generate functional code rapidly using artificial intelligence. Much like buying pre-made bread instead of milling flour and baking from scratch, this approach saves considerable time and effort. Developers increasingly rely on AI-generated code snippets to build applications rather than writing every component manually. While this method accelerates development cycles, it introduces significant security challenges that complicate software supply chain integrity.
Security specialists point out that this emerging practice creates unseen vulnerabilities. By drawing on vast repositories of existing code, including outdated or flawed software, AI models can inadvertently reproduce historical security weaknesses. Alex Zenla, Edera’s chief technology officer, observes that the grace period during which AI was given a pass on security is ending. He notes that when AI trains on vulnerable codebases, it regenerates those same flaws alongside novel security issues, creating a cyclical problem.
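As a hypothetical illustration (not drawn from the article), the sketch below shows the kind of long-known flaw a model can regenerate when its training data includes vulnerable code, next to the parameterized form a reviewer would need to insist on; the function names and schema are invented for the example.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Pattern common in older codebases a model may have trained on:
    # string formatting builds the query, so a crafted username such as
    # "' OR '1'='1" turns into SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the username strictly as data,
    # which is the fix reviewers typically have to reapply by hand.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```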
The fundamental issue with vibe coding lies in its draft-like output. Generated code often fails to reflect the nuances of a project’s specific requirements and security context. Even when companies fine-tune models on proprietary codebases, human reviewers must identify every potential flaw in the AI-generated material, a task that becomes increasingly difficult as output variability grows.
Eran Kinsbruner from Checkmarx highlights another dimension to this challenge. Identical prompts given to the same language model can yield different code on each run, creating inconsistency across development teams. This variability introduces complications beyond those seen in traditional open-source dependencies, where code remains relatively static between uses.
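To make that review burden concrete, here is a hypothetical pair of outputs the same prompt (say, “hash a user’s password”) might yield on different runs; both are plausible and runnable, but only one follows current practice, so approving one generation says nothing about the next. The variant names are invented for the example.

```python
import hashlib
import os

# Variant A: a pattern still common in older tutorials and codebases.
# Unsalted MD5 is fast to brute-force and unsuitable for passwords.
def hash_password_variant_a(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Variant B: salted PBKDF2 from the standard library, a far stronger default.
def hash_password_variant_b(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()
```

The point is not the specific hashing choice but that both variants execute without error, so the weaker one only surfaces if a reviewer scrutinizes each generation independently.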
Recent research underscores how pervasive this practice has become. A Checkmarx survey, conducted across thousands of technical professionals, found that more than a third of organizations now generate over sixty percent of their code with AI. Despite this widespread adoption, only eighteen percent of organizations maintain approved tooling lists for AI-assisted development. The study further highlighted how AI development obscures code ownership and complicates vulnerability tracking. As organizations race to adopt these efficient coding methods, the underlying security implications demand careful consideration.
(Source: Wired)