Nation-State Hackers Now Using Gemini AI in Attacks

▼ Summary
– A new Google study finds government-backed cyber threat actors now widely use AI, especially for reconnaissance and social engineering.
– Specific groups from Iran, North Korea, and China used AI models like Gemini for tasks such as target profiling and gathering intelligence on vulnerabilities.
– Chinese-backed group APT31 employed AI “expert personas” to automate vulnerability analysis and generate attack plans against U.S. targets.
– In contrast, financially motivated criminal groups are increasingly attempting model extraction attacks to steal and replicate AI capabilities.
– Despite this misuse, Google researchers have not observed direct attacks on the core AI models or generative AI products themselves by these state actors.
A recent investigation reveals that state-sponsored hacking groups are increasingly integrating artificial intelligence into their operations, with a particular focus on reconnaissance and social engineering. The findings, detailed in a new report, show that nation-state actors from Iran, China, and North Korea are leveraging AI tools, including Google’s own Gemini model, to enhance the efficiency and sophistication of their cyber campaigns. This marks a significant shift in the threat landscape, where AI is no longer just a defensive tool but a powerful offensive asset for advanced persistent threats.
The study observed these groups using generative AI for a variety of preparatory tasks. These include writing and debugging malicious code, gathering detailed intelligence on specific organizations and individuals, and researching known software vulnerabilities to exploit. One Iranian group, known as APT42, was seen using AI models to find official corporate email addresses and investigate potential business partners, allowing them to craft highly convincing pretexts for their attacks.
In a notable development, a North Korean state-backed hacking team, tracked as UNC2970, used Google’s Gemini large language model to synthesize open-source intelligence and build detailed profiles of high-value targets, primarily individuals at defense companies. The group often poses as corporate recruiters to initiate contact and gather sensitive information, making its social engineering attempts more credible and harder to detect.
Similarly, a Chinese-nexus threat actor, tracked under names such as Mustang Panda and TEMP.Hex, employed Gemini and other AI platforms. Its activities focused on compiling exhaustive dossiers on specific individuals in Pakistan and collecting operational data on various separatist movements around the world. While the AI-assisted research was not tied directly to targeting, the group later incorporated similar Pakistani targets into its active campaigns. Google has since disrupted this activity by disabling the associated assets.
The report also highlights experimentation with more autonomous AI systems. Another China-linked group, APT31, has been observed using AI agents configured with “expert cybersecurity personas.” These automated tools can analyze software vulnerabilities and generate testing plans for attacks against U.S.-based targets, significantly speeding up the offensive development cycle. For these government-backed operatives, large language models have become indispensable for technical research, precise targeting, and the rapid creation of nuanced phishing lures.
Despite this widespread misuse for planning and intelligence gathering, researchers note that direct attacks on the core AI models themselves have not yet been observed from these nation-state actors. The primary malicious use remains within the supporting phases of an attack chain rather than attempts to corrupt or hijack the foundational AI systems.
A separate but growing trend involves financially motivated cybercriminal groups, which are actively attempting to steal AI models themselves. There has been a marked increase in what are known as model extraction attacks, in which attackers with legitimate access to a mature machine learning model systematically probe it to extract its underlying information and functional capabilities.
This technique, sometimes called a distillation attack, relies on a process known as knowledge distillation, which allows an attacker to transfer the learned knowledge of one model into a new, functional replica. The major incentive for criminals is the ability to accelerate their own AI development at a fraction of the typical cost and time, potentially producing powerful tools for fraud, spam generation, or automated hacking. This represents a different vector of AI-related risk, in which the technology itself becomes the target of theft rather than merely a tool for attack.
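For readers unfamiliar with the underlying mechanism, the sketch below shows knowledge distillation in its ordinary, well-documented form (as used for model compression), not any specific attacker's tooling. The models, data, and hyperparameters are hypothetical toy stand-ins: a small "student" network is trained to reproduce the softened output distribution of a larger "teacher" network, which is the same basic mechanism a distillation-style extraction abuses when the teacher is someone else's model accessed through legitimate queries.

```python
# Minimal sketch of knowledge distillation (hypothetical toy models and data).
# A small student network learns to mimic a larger teacher network from the
# teacher's outputs alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))  # stand-in "large" model
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))    # much smaller replica

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 4.0  # softens the teacher's probability distribution

for step in range(1000):
    # Stand-in inputs; in practice these would be whatever queries the
    # teacher model accepts.
    x = torch.randn(64, 32)

    with torch.no_grad():
        teacher_logits = teacher(x)   # "query" the teacher
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions
    # (the classic distillation loss, scaled by temperature squared).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point the example illustrates is that the student never needs the teacher's weights or training data; a sufficiently large set of input-output pairs is enough to approximate its behavior, which is why access controls and query monitoring matter for deployed models.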
(Source: InfoSecurity Magazine)
