
Google Warns of New AI-Powered Malware Threat

Summary

– Google has identified adversaries using AI to deploy new malware families that integrate large language models during execution, enabling dynamic self-modification.
– The experimental PromptFlux malware dropper uses Google’s Gemini LLM to generate obfuscated code for evading antivirus software and spreading via removable drives and network shares.
– Other AI-powered malware identified includes FruitShell, a PowerShell tool for remote command-and-control, and QuietVault, a credential stealer, each leveraging AI to enhance its capabilities.
– Threat actors from China, Iran, and North Korea have abused Gemini for tasks like vulnerability discovery, malware development, phishing, and data analysis, leading Google to disable accounts and reinforce safeguards.
– Underground forums show growing interest in AI-powered cybercrime tools that lower the technical bar for attacks, with Google emphasizing the need for responsible AI design and strong safety guardrails.

Google’s security experts are raising the alarm about a dangerous new breed of malware that uses artificial intelligence to rewrite its own code on the fly, creating a moving target for traditional cybersecurity defenses. This development marks a significant escalation in the capabilities available to cybercriminals, allowing for unprecedented levels of adaptation and evasion.

The technique, which Google calls “just-in-time” self-modification, is exemplified by two specific threats: the experimental PromptFlux malware dropper and the PromptSteal data miner. These programs demonstrate a capacity for dynamic script generation, sophisticated code obfuscation, and the creation of functions only when needed. The most novel component of PromptFlux is its “Thinking Robot” module, designed to periodically query Gemini to obtain new code for evading antivirus software. This module essentially turns the malware into an ever-evolving “metamorphic script” that can continuously alter its own signature to avoid detection.

PromptFlux operates as a VBScript dropper, attempting to maintain persistence on a system by adding itself to the Startup folder. It then seeks to spread laterally to any connected removable drives and mapped network shares. While Google could not definitively link PromptFlux to a specific group, the tactics used suggest a financially motivated actor was behind it. Although the malware was in an early stage of development and not yet capable of causing significant harm, Google proactively disabled its access to the Gemini API and deleted all related assets.

Other AI-powered threats identified this year include FruitShell, a publicly available PowerShell tool that establishes remote command-and-control access. This malware contains hard-coded prompts specifically designed to slip past security analysis powered by large language models. Another tool, QuietVault, is a JavaScript credential stealer that targets tokens from platforms like GitHub and NPM, exfiltrating the stolen data to dynamically created public repositories. It also uses on-host AI command-line tools to search for and steal additional secrets. The list also includes PromptLock, an experimental ransomware that uses Lua scripts to target and encrypt data on Windows, macOS, and Linux systems.

Beyond these tools, Google’s report documents numerous instances where threat actors directly abused the Gemini AI model throughout their attack cycles. A China-linked actor posed as a participant in a cybersecurity game to bypass safety filters, using the model to find software vulnerabilities, craft convincing phishing lures, and build data exfiltration tools. Iranian hackers from the MuddyCoast group pretended to be students to get help with malware development and debugging, a ploy that accidentally exposed their own command-and-control domains and keys.

Other state-sponsored groups also leveraged Gemini. Iran’s APT42 used it for phishing, data analysis, and building a tool that converts natural language into database queries for mining personal information. China’s APT41 sought coding assistance to enhance its command-and-control framework and to use obfuscation libraries, increasing the sophistication of its malware. North Korean groups were also active: Masan used Gemini for cryptocurrency theft and multilingual phishing campaigns, while Pukchong employed it to develop code targeting edge devices and web browsers. In every identified case, Google disabled the associated accounts and used the intelligence to reinforce its model safeguards.

Interest in malicious AI tools is also booming on underground forums in both English- and Russian-speaking communities. Many of the advertisements mirror the marketing language used for legitimate AI models, promising to improve the efficiency of attack workflows. These tools are lowering the technical barrier to launching complex attacks, making advanced cybercrime more accessible. The offerings are diverse, covering everything from deepfake generation and image creation to malware development, phishing, reconnaissance, and vulnerability exploitation.

As this illicit market matures, AI-powered services are beginning to replace the conventional tools used in malicious operations. Google has observed multiple actors advertising multifunctional toolkits that can handle various stages of an attack. The push toward AI-based services is aggressive, with developers often promoting new features in free versions and charging higher prices for API access and dedicated support.

Google emphasizes that the development of AI must be approached with both boldness and responsibility. The company states that AI systems must be built with strong safety guardrails from the ground up to prevent and disrupt misuse. It actively investigates any signs of abuse linked to its services, including activities connected to government-backed threat actors. Beyond collaborating with law enforcement, Google is applying the lessons learned from these adversarial encounters to continuously improve the safety and security of its own AI models.

(Source: Bleeping Computer)
