Google Detects AI-Powered Malware That Morphs During Attacks

▼ Summary
– Google has detected novel adaptive malware in the wild that uses AI to dynamically alter its behavior during execution.
– This malware employs large language models (LLMs) to generate malicious scripts, obfuscate code, and evade detection, as seen in strains like FRUITSHELL and PROMPTFLUX.
– Threat actors are increasingly using social engineering pretexts in prompts to bypass AI safety guardrails and access restricted data from models like Gemini.
– State-sponsored groups from countries including North Korea, Iran, and China are using AI to enhance reconnaissance, phishing, and command-and-control infrastructure.
– AI-enabled tools and services, such as deepfake generators and phishing kits, are emerging in cybercriminal underground forums, making attacks more adaptive and scalable.
Google’s Threat Intelligence Group has identified a new generation of malware that can rewrite its own code mid-attack, signaling a dangerous evolution in how cybercriminals weaponize artificial intelligence. These adaptive threats use large language models to alter their behavior dynamically, evade security systems, and generate malicious scripts in real time, moving beyond simple phishing lures or code refinement toward more resilient and unpredictable attacks.
Several novel malware families have been discovered putting these capabilities to work. FRUITSHELL is a publicly available reverse shell with hard-coded prompts designed to slip past LLM-powered security analysis. PROMPTFLUX, an experimental VBScript dropper, abuses the Google Gemini API to continuously rewrite its own source code. Another experimental strain, PROMPTLOCK, is a Go-based ransomware that employs an LLM to generate and execute harmful scripts on the fly. Meanwhile, PROMPTSTEAL is an active Python data miner that prompts an LLM to generate data-theft commands, and QUIETVAULT, a JavaScript credential stealer targeting GitHub and NPM tokens, uses AI tools to hunt for additional secrets on compromised systems.
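The common thread in these strains, scripts that embed hard-coded prompts or call generative-AI endpoints, also gives defenders something concrete to hunt for. The sketch below is a minimal illustration only: the indicator strings and file extensions are hypothetical examples chosen for this article, not detection signatures from Google's report.

    # hunt_llm_indicators.py -- illustrative sketch, not a production detection rule.
    # Flags script files that combine prompt-like text with a generative-AI API
    # hostname, the pairing described for strains such as FRUITSHELL and PROMPTFLUX.
    import sys
    from pathlib import Path

    # Hypothetical indicator lists; a real hunt would rely on vetted, curated IOCs.
    AI_ENDPOINTS = [
        "generativelanguage.googleapis.com",  # Gemini API host
        "api-inference.huggingface.co",
    ]
    PROMPT_MARKERS = [
        "rewrite the following code",
        "obfuscate this script",
        "you are an expert",
    ]
    SCRIPT_EXTENSIONS = {".vbs", ".js", ".py", ".ps1"}

    def scan(path: Path) -> list[str]:
        """Return the indicators found in one file, or an empty list."""
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            return []
        endpoint_hits = [s for s in AI_ENDPOINTS if s in text]
        prompt_hits = [s for s in PROMPT_MARKERS if s in text]
        # Only suspicious when an AI endpoint AND prompt-like text co-occur.
        return endpoint_hits + prompt_hits if endpoint_hits and prompt_hits else []

    if __name__ == "__main__":
        root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        for f in root.rglob("*"):
            if f.suffix.lower() in SCRIPT_EXTENSIONS:
                hits = scan(f)
                if hits:
                    print(f"{f}: {', '.join(hits)}")

Requiring an API endpoint and prompt-like text to appear together keeps a rule like this from flagging ordinary scripts that merely mention AI services.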
Google researchers emphasize that this represents a new operational phase in AI abuse. Instead of just refining phishing emails or generating basic code, threat actors are now building tools that change their functions mid-execution, making detection and mitigation far more challenging. While some projects appear experimental, they clearly demonstrate a shift toward AI-driven malware that can adapt in real time to countermeasures.
The report also highlights other significant trends in AI-powered cyber threats. There is a noticeable increase in the use of social engineering tactics within prompts to bypass AI safety protocols. For instance, attackers pose as cybersecurity researchers or capture-the-flag participants to trick models like Gemini into disclosing restricted information. State-sponsored groups from North Korea, Iran, and China are also leveraging AI to improve reconnaissance, phishing campaigns, and command-and-control infrastructure.
Underground criminal markets are evolving too, with a growing availability of AI-enabled tools. These include deepfake and malware generators, phishing kits, reconnaissance utilities, vulnerability exploits, and even technical support services. This commercialization lowers the barrier to entry, allowing less skilled attackers to launch sophisticated campaigns.
Cory Michal, CSO at AppOmni, noted that AI is making modern malware significantly more effective. Attackers are using it to produce smarter code for data extraction, session hijacking, and credential theft, enabling quicker access to identity providers and SaaS platforms where vital data resides. He explained that AI doesn’t just enhance phishing; it makes intrusion, privilege abuse, and session theft more adaptive and scalable. The outcome is a rising wave of AI-augmented attacks that directly challenge enterprise SaaS security, data integrity, and resilience against extortion.
(Source: ZDNET)