Hackers Use AI to Create First Zero-Day Exploit

▼ Summary
– Google Threat Intelligence Group reported the first known case of cybercriminals using AI to identify and weaponize a zero-day vulnerability to bypass 2FA on a web-based system admin tool.
– The campaign was disrupted before deployment, with GTIG working with the vendor to close the vulnerability.
– Analysis of the AI-generated Python code showed hallmarks like structured docstrings and a hallucinated CVSS score.
– Nation-state groups from China and North Korea have shown significant interest in using AI for vulnerability discovery.
– The most common use of AI by threat actors is for research and troubleshooting, automating intelligence gathering to support complex operations.

For the first time, cybercriminals have successfully used AI to identify and exploit a zero-day vulnerability, according to a new warning from the Google Threat Intelligence Group (GTIG). The finding marks a significant escalation in the role of artificial intelligence in cyberattacks.
Published on May 11, the GTIG AI Threat Tracker report reveals that “prominent” cybercrime threat actors joined forces to orchestrate a large-scale vulnerability exploitation campaign. Investigators believe an AI model was deployed to pinpoint a zero-day flaw and then weaponize it, specifically to bypass two-factor authentication (2FA) protections on a widely used open-source, web-based system administration tool.
GTIG collaborated with the tool’s vendor to patch the vulnerability and disrupt the operation before the exploit could be deployed in the wild. Google emphasized that this represents the first concrete evidence of a threat actor leveraging AI to facilitate both the discovery and weaponization of a zero-day.
Neither Google’s Gemini nor Anthropic’s Claude was involved in the attack, the report clarifies. However, analysis of the exploit code, written in Python, revealed distinct signs of AI generation. The script featured highly structured, educational docstrings and a Pythonic format characteristic of the training data used by large language models (LLMs). A hallucinated CVSS score within the code further pointed to its AI origin.
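The report does not publish the exploit itself, but the hallmarks GTIG describes are easy to picture. The harmless sketch below is entirely hypothetical (the function, the CVE ID, and the CVSS score are all invented for illustration, not taken from the campaign); it mimics the tells analysts describe: a tutorial-style docstring, an overly tidy "Pythonic" structure, and a fabricated CVSS reference of the kind LLMs are known to hallucinate.

```python
# Hypothetical illustration of LLM-style code hallmarks -- NOT the exploit
# analyzed by GTIG. The CVE ID and CVSS score below are deliberately fake,
# mimicking the "hallucinated CVSS score" the report describes.

def check_totp_window(client_code: str, server_codes: list[str]) -> bool:
    """Validate a TOTP code against the server's accepted window.

    Args:
        client_code: The 6-digit code submitted by the user.
        server_codes: Codes considered valid for the current time window.

    Returns:
        True if the submitted code is accepted, False otherwise.

    Note:
        Related to CVE-2025-99999 (CVSS 9.8) -- an invented reference,
        typical of LLM hallucination.
    """
    # Highly structured, "educational" style characteristic of LLM
    # training data: full type hints, Google-style docstring, one-liner body.
    return client_code in server_codes


if __name__ == "__main__":
    print(check_totp_window("123456", ["123456", "654321"]))  # True
```

None of these traits proves machine authorship on its own; it is the combination, especially a confidently stated but nonexistent CVSS/CVE pairing, that analysts treat as a fingerprint.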
Though this specific campaign was neutralized before it could cause harm, the emergence of an AI-crafted zero-day signals how rapidly the threat landscape is evolving. “There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun,” said John Hultquist, chief analyst at GTIG. “For every zero-day we can trace back to AI, there are probably many more out there.”
The report also highlights how AI is acting as a force multiplier for hackers. Both nation-state actors and cybercriminal groups are increasingly adopting the technology. Google noted that the People’s Republic of China (PRC) and the Democratic People’s Republic of Korea (DPRK) have shown “significant interest” in using AI for vulnerability discovery. Meanwhile, criminal groups are deploying AI to develop malware and create operational support tools that are harder for antivirus software and other defenses to detect.
Despite these advanced uses, the most common application of AI among threat actors mirrors that of regular users: employing LLMs for research and troubleshooting. By automating intelligence gathering and task support, cybercriminals free up time and resources to manage more complex, multi-stage operations. “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks,” said Hultquist. “It enables them to test their operations, persist against targets, build better malware, and make many other improvements. State actors are taking advantage of this technology, but the criminal threat shouldn’t be underestimated, especially given their history of broad, aggressive attacks.”
(Source: Infosecurity Magazine)