AI-Generated Code Security Risks Exposed

Summary
– The Vibe Security Radar project, launched in May 2025 by Georgia Tech, scans public vulnerability databases to track flaws directly introduced by AI coding tools.
– In March 2026, at least 35 new CVEs were disclosed as a direct result of AI-generated code, a significant increase from previous months.
– The project’s methodology involves analyzing vulnerability fixes and tracing commits back to their origin, flagging those with AI tool signatures.
– Researchers track about 50 AI-assisted coding tools and have confirmed 74 CVEs directly linked to their use.
– Claude Code appears most frequently in the data largely because it leaves a detectable signature, unlike tools with untraceable inline suggestions.
As software development increasingly relies on AI-assisted coding tools, a new tracking initiative reveals a sharp rise in documented security flaws originating from this technology. Researchers from Georgia Tech’s Systems Software & Security Lab launched the Vibe Security Radar in May 2025 to systematically monitor this trend. Their data shows a concerning acceleration: while six vulnerabilities were linked to AI-generated code in January 2026, that number jumped to 15 in February and then to at least 35 new Common Vulnerabilities and Exposures (CVE) entries in March.
The project scans public security advisories from major databases, including the U.S. National Vulnerability Database and GitHub's advisory database, to identify flaws that directly result from AI coding tools. Hanqing Zhao, who founded the radar, stresses the need for concrete evidence: while many claim AI-produced code is insecure, he notes, no one had actually been tracking it. The goal is to move beyond hypothetical risks and quantify real vulnerabilities affecting end users. This effort is especially critical as some developers now push entire vibe-coded projects straight to production, a practice that introduces significant risk.
Zhao’s team investigates approximately 50 different tools, from well-known assistants like GitHub Copilot and Claude Code to newer entrants such as Devin and Amazon Q. Their methodology is meticulous. They first locate the commit that fixed a vulnerability in a public database, then work backward to find the original bug introduction. If that initial commit bears an AI tool’s signature, such as a co-author tag or a bot email address, it is flagged for further analysis. Finally, AI agents perform a deep dive, accessing full Git repositories to understand the root cause and confirm the tool’s contribution.
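The signature check in that pipeline can be sketched roughly as follows. This is a minimal illustration, not the radar's actual implementation: the specific trailer names, tool names, and email domains matched below are assumptions chosen for the example.

```python
import re

# Illustrative signature patterns. The co-author trailers and bot email
# domains an AI tool leaves behind vary by tool and version; the ones
# listed here are assumptions for demonstration, not the project's rules.
AI_COAUTHOR_TRAILER = re.compile(
    r"^Co-Authored-By:\s*(Claude|GitHub Copilot|Devin)\b",
    re.IGNORECASE | re.MULTILINE,
)
AI_BOT_EMAIL = re.compile(r"@(anthropic\.com|devin\.ai)$", re.IGNORECASE)

def looks_ai_authored(commit_message: str, author_email: str) -> bool:
    """Flag a commit for deeper analysis if its message carries an AI
    co-author trailer or its author email matches a bot domain."""
    if AI_COAUTHOR_TRAILER.search(commit_message):
        return True
    return bool(AI_BOT_EMAIL.search(author_email))

# A commit message carrying a Claude-style co-author trailer is flagged;
# a plain human-authored commit is not.
msg = "Fix input parsing\n\nCo-Authored-By: Claude <noreply@anthropic.com>"
print(looks_ai_authored(msg, "dev@example.com"))        # prints True
print(looks_ai_authored("Fix bug", "dev@example.com"))  # prints False
```

In a real pipeline, the commit message and author email would come from the repository history (e.g. `git log` output) for the commit that originally introduced the vulnerable code, identified by working backward from the published fix.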
So far, the radar has confirmed 74 CVEs directly attributable to AI-generated code vulnerabilities. Claude Code from Anthropic appears most frequently in the data, but Zhao clarifies this is partly because the tool consistently leaves an identifiable signature. Other popular assistants, like GitHub Copilot, offer inline suggestions that leave no trace in commit histories, making their contributions harder to detect and attribute. The prevalence of Claude Code-related flaws may also simply reflect its broad adoption within the developer community.
The findings underscore a fundamental challenge in modern software security. Even teams conducting thorough code reviews may struggle to catch every issue when a substantial portion of the codebase is machine-generated. The Vibe Security Radar provides the first clear snapshot of this emerging threat landscape, offering vital data for organizations to assess the real-world security implications of their development tools.
(Source: Infosecurity Magazine)