
Google Finds Malware Using AI to Evade Detection

Summary

– AI-powered malware like PromptLock is no longer an isolated experiment, with Google reporting that attackers now deploy LLM-based malware to evade security systems.
– Google has observed active AI-powered malware including QuietVault (credential stealer), PromptSteal (data miner), and FruitShell (reverse shell) in the wild.
– Experimental malware such as PromptLock and PromptFlux use LLMs to dynamically generate malicious scripts and rewrite their own code to avoid detection.
– Underground marketplaces offer various illicit AI tools that cybercriminals use to enhance their skills and operations across all attack stages.
– State-sponsored threat actors from China and Iran have successfully misused Google’s Gemini AI for malicious purposes by bypassing its safeguards through deception.

Cybersecurity professionals are confronting a new breed of digital threat, as malicious software now actively employs artificial intelligence to bypass security measures and operate with greater autonomy. Google’s latest threat intelligence report reveals that attackers are moving beyond experimental AI-powered malware and are deploying operational tools in the wild, marking a significant escalation in how adversaries weaponize this technology.

This development represents a clear step toward more independent and adaptive malware. Analysts have identified several active threats leveraging large language models. One example, QuietVault, functions as a credential stealer targeting GitHub and NPM tokens. It goes a step further, using an AI prompt together with command-line tools already installed on the host machine to hunt for additional secrets.

Another tool, PromptSteal, has been linked to the Russian group APT28, also known as Fancy Bear. This data miner utilizes the Hugging Face API to communicate with the Qwen2.5-Coder-32B-Instruct model. It prompts the AI to generate concise, single-line Windows commands, which it then executes to gather and exfiltrate sensitive information from compromised systems.
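The mechanism described above is, at its core, an ordinary programmatic call to a hosted inference endpoint. As a benign illustration, here is a minimal sketch of how a request to the public Hugging Face Inference API for the reported model might be constructed; the helper name and the (deliberately harmless) prompt are illustrative assumptions, not taken from the malware itself:

```python
import json
from urllib import request

# Public Hugging Face Inference API endpoint pattern; the model name is the
# one named in the report. Everything else here is an illustrative assumption.
HF_MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"
HF_URL = f"https://api-inference.huggingface.co/models/{HF_MODEL}"

def build_generation_request(prompt: str, token: str) -> request.Request:
    """Build (but do not send) a text-generation request to the endpoint."""
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return request.Request(
        HF_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # hypothetical API token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Harmless example prompt; PromptSteal reportedly asks the model for
# single-line Windows commands and then executes whatever text comes back.
req = build_generation_request("Write a one-line command that prints the date.", "hf_xxx")
print(req.full_url)
```

The notable design point is that the tool's "logic" lives in the model's response rather than in the binary, which is why defenders increasingly treat outbound traffic to public inference endpoints as a telemetry signal worth monitoring.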

A third piece of malware, FruitShell, is a reverse shell that comes with pre-written prompts specifically designed to trick and evade security systems that are themselves powered by LLMs. Alongside these, the previously identified PromptLock and a dropper called PromptFlux are considered more experimental. The former uses an LLM to create and run malicious Lua scripts dynamically during operation, while the latter uses the Google Gemini API to rewrite its own source code every hour as a method to avoid detection.

Google’s analysts emphasized the seriousness of this shift, stating that adversaries have moved past using AI merely for productivity. They are now deploying novel, AI-enabled malware in active campaigns. This signals a new operational phase in the abuse of artificial intelligence, involving tools that can change their behavior in the middle of an attack.

The underground cybercrime economy has quickly adapted to this trend. Marketplaces catering to threat actors now feature a robust selection of illicit AI services advertising various capabilities. These tools are being marketed as a way for criminals to enhance their skills and operational efficiency.

Similar to previous reports from other AI firms, Google detailed how different threat actors have misused its own LLM, Gemini, to boost their productivity across all stages of an attack. A China-nexus actor misused the model to craft convincing phishing lures, build technical infrastructure, and develop tools for stealing data. An Iranian state-sponsored group used it to research methods for creating custom malware.

In both cases, the threat actors managed to circumvent Gemini’s built-in safety protocols designed to refuse such requests. The China-linked actor pretended to be a participant in a capture-the-flag cybersecurity exercise, while the Iranian actor posed as a student working on an academic project. Google has since stated it has reinforced its protections against these specific social engineering techniques.

Security experts note that threat actors are persistently adapting generative AI tools to augment their ongoing operations. They aim to refine their tactics, techniques, and procedures to act more quickly and at a larger scale. For highly skilled attackers, these AI tools provide a valuable framework, much like established penetration testing frameworks such as Metasploit or Cobalt Strike do in real-world intrusions.

Perhaps more concerning is that these tools also empower less technically proficient threat actors. They can now develop sophisticated tooling, rapidly integrate established attack methods, and improve the overall effectiveness of their campaigns, regardless of their technical expertise or language skills.

(Source: HelpNet Security)

Topics

AI malware, threat intelligence, LLM abuse, cyber threat trends, evasion techniques, ransomware development, AI tools misuse, data exfiltration, credential theft, AI productivity