Google: AI-Powered Malware Is Now in Active Use

▼ Summary
– Google has discovered AI-powered malware that uses large language models to dynamically generate malicious scripts and evade detection.
– Two identified malware families are PromptFlux, which rewrites its own code via the Google Gemini API, and PromptSteal, which generates data-theft commands with Qwen2.5-Coder-32B-Instruct.
– The report also highlights other AI-enabled malware, such as FruitShell, PromptLock, and QuietVault, that use LLMs for various malicious functions.
– Google warns that the cybercrime market for AI tools is developing rapidly, with nation-state actors misusing chatbots across all attack stages.
– Security experts recommend behavioral detection methods over signature-based approaches to counter AI malware’s adaptive capabilities.
Google has identified a new category of AI-driven malware that actively uses large language models to dynamically create malicious scripts and bypass security measures. A recent report from the Google Threat Intelligence Group (GTIG) details two malware families, PromptFlux and PromptSteal, which employ what the company calls “just-in-time AI”: they generate harmful scripts on the fly, obscure their own code to avoid detection, and call AI models to produce malicious functions as needed rather than embedding them permanently. While still in its early stages, this development marks a substantial move toward more self-sufficient and adaptable malware.
PromptFlux, a dropper written in VBScript, regenerates itself by accessing the Google Gemini API. It instructs the large language model to rewrite its source code in real time, then saves the disguised version to the Startup folder to maintain persistence. According to GTIG, this malware also attempts to propagate by duplicating itself onto removable drives and mapped network shares.
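The persistence location GTIG describes gives defenders a concrete place to hunt. Below is a minimal defensive sketch in Python, not anything taken from the report, that lists script files recently written to the per-user Windows Startup folder; the 24-hour window and the extension list are illustrative assumptions.

```python
# Defensive sketch: surface script files recently dropped into the
# per-user Windows Startup folder, the persistence path GTIG says a
# PromptFlux-style dropper writes its rewritten copy to.
# The 24-hour window and extension list are illustrative assumptions.
import os
import time
from pathlib import Path

STARTUP = (Path(os.environ.get("APPDATA", ""))
           / "Microsoft" / "Windows" / "Start Menu" / "Programs" / "Startup")
SCRIPT_EXTS = {".vbs", ".vbe", ".js", ".jse", ".ps1", ".bat", ".cmd"}
WINDOW_SECONDS = 24 * 3600  # flag anything modified in the last day

def recent_startup_scripts() -> list[Path]:
    """Return script files written to the Startup folder recently."""
    if not STARTUP.is_dir():  # non-Windows host or missing profile
        return []
    cutoff = time.time() - WINDOW_SECONDS
    return [p for p in STARTUP.iterdir()
            if p.suffix.lower() in SCRIPT_EXTS
            and p.stat().st_mtime >= cutoff]

if __name__ == "__main__":
    for path in recent_startup_scripts():
        print(f"Recently written startup script: {path}")
```

A hit here is not proof of compromise, but a fresh VBScript in Startup paired with outbound traffic to an LLM API is exactly the kind of behavioral correlation discussed later in this article.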
PromptSteal, a Python-based data miner, queries the LLM Qwen2.5-Coder-32B-Instruct to produce one-line Windows commands. These commands gather information and documents from specific directories and transmit the collected data to a command-and-control server. GTIG has observed PromptSteal in use by the Russian group APT28 in attacks in Ukraine, while PromptFlux remains under development.
The report also highlights several other AI-enabled malware families. FruitShell is a PowerShell reverse shell that establishes remote command-and-control connections and allows execution of commands on targeted systems, using hard-coded prompts to evade LLM-based security. PromptLock, written in Go, is ransomware that uses an LLM to dynamically generate malicious Lua scripts during runtime for reconnaissance, data encryption, and exfiltration. QuietVault, a JavaScript credential stealer, employs an AI prompt and locally installed AI command-line tools to search for and extract secrets.
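The secret hunting that QuietVault automates is the same class of search defenders run against their own machines and repositories. The following hypothetical Python sketch shows such a sweep; the regex patterns are illustrative assumptions, far from exhaustive, and dedicated secret-scanning tools should be preferred in practice.

```python
# Defensive sketch: a minimal regex-based sweep for exposed credentials,
# the class of search a QuietVault-style stealer automates with AI tooling.
# Patterns are illustrative assumptions, not a complete rule set.
import re
from pathlib import Path

SECRET_PATTERNS = {
    "github_token": re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}
MAX_FILE_BYTES = 1_000_000  # skip large files and binaries

def scan_for_secrets(root: str) -> list[tuple[Path, str]]:
    """Return (path, pattern name) pairs for files matching a pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > MAX_FILE_BYTES:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((path, label))
    return hits

if __name__ == "__main__":
    for path, label in scan_for_secrets("."):
        print(f"{label}: {path}")
```

Running such a sweep on developer workstations shows what a stealer would find first, and which credentials to rotate or move into a managed vault.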
Google cautioned that the cybercrime market for AI tools is advancing quickly, pointing to multiple offerings of versatile tools designed to aid phishing, malware development, and vulnerability research. This trend could further democratize cybercrime. The report also noted ongoing attempts to circumvent guardrails in Gemini by using prompts that resemble social engineering tactics. Additionally, GTIG warned that nation-state actors are misusing chatbots to assist in every phase of their attacks, from reconnaissance and crafting phishing lures to developing command-and-control infrastructure and exfiltrating data.
Cory Michal, Chief Security Officer at AppOmni, stated that the GTIG findings align with what his company observes in the SaaS threat environment. He emphasized that AI-enabled malware alters its code, rendering traditional signature-based detection useless. Defenders require behavioral endpoint detection and response that focuses on the actions of malware rather than its appearance. Detection should center on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs like Gemini, Hugging Face, or OpenAI. By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can identify when attackers are exploiting AI and intervene before data is stolen.
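As a concrete version of the outbound-traffic signal Michal describes, here is a minimal Python sketch (an illustration under stated assumptions, not an AppOmni or Google tool) that uses psutil to flag processes holding TCP connections to well-known AI API endpoints. The hostname list is an assumption, and production detection would rely on curated intelligence plus proxy or DNS logs rather than point-in-time polling.

```python
# Defensive sketch: flag running processes with TCP connections to
# well-known AI API endpoints. Hostnames are illustrative assumptions;
# real deployments would use curated threat-intelligence lists.
import socket
import psutil  # pip install psutil

AI_API_HOSTS = [
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api-inference.huggingface.co",
]

def resolve(host: str) -> set[str]:
    """Resolve a hostname to its current set of IP addresses."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(host, 443)}
    except socket.gaierror:
        return set()

def find_ai_api_callers() -> list[tuple[int, str, str]]:
    """Return (pid, process name, remote IP) for connections to AI APIs."""
    watched = set().union(*(resolve(h) for h in AI_API_HOSTS))
    hits = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.raddr and conn.raddr.ip in watched and conn.pid:
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                continue
            # A browser reaching these APIs is expected; a script host
            # (wscript.exe, powershell.exe, python.exe) merits triage.
            hits.append((conn.pid, name, conn.raddr.ip))
    return hits

if __name__ == "__main__":
    for pid, name, ip in find_ai_api_callers():
        print(f"pid={pid} process={name} remote={ip}")
```

The point is the correlation Michal calls for: which process opened the connection matters more than the destination alone.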
Max Gannon, cyber intelligence team manager at Cofense, expressed concern over the use of AI at every stage of the attack lifecycle. He noted that this represents a major shift from the previous year, when AI was used sparingly, mainly for phishing emails and kits. Gannon anticipates that innovative threat actors will soon market all-inclusive AI-based kits capable of generating every component of an attack chain with no technical knowledge required, leaving the subscription fee as the only barrier to entry.
(Source: Info Security)