Microsoft Uncovers AI-Powered Phishing Scam

Summary
– Cybercriminals are increasingly using AI-powered tools and large language models to create sophisticated phishing emails, deepfakes, and malware.
– A recent attack campaign used an SVG file disguised as a PDF to redirect users to a fake CAPTCHA page and a credential-harvesting site.
– The attackers hid the malicious payload by padding the file with invisible, business-related terms like “revenue” and “operations” instead of using traditional encryption.
– Microsoft’s Security Copilot identified the code as likely AI-generated due to artifacts like overly descriptive variable names and an over-engineered structure.
– The use of LLMs for obfuscation can introduce detectable synthetic artifacts, which may paradoxically make some attacks easier to identify.
The digital threat landscape is undergoing a dangerous evolution as cybercriminals harness artificial intelligence to craft highly sophisticated phishing scams. Attackers now employ large language models to generate polished, convincing emails, create deepfakes, and build deceptive online personas and websites. A particularly alarming development is the use of AI-powered coding assistants to automate nearly every stage of a data extortion attack, signaling a significant shift in attacker capabilities.
Microsoft’s Threat Intelligence team recently identified and neutralized one such campaign that relied on an LLM to disguise a malicious payload. The attack began with messages sent from a compromised small business email account, prompting recipients to view a supposedly shared file. While presented as a standard PDF, the file was actually a Scalable Vector Graphics (SVG) file, a format frequently exploited by attackers due to its text-based nature and ability to embed dynamic content like JavaScript.
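That mismatch between a file's claimed and actual type is itself detectable. As a minimal defender-side sketch (in TypeScript, with an invented filename; this is not Microsoft's detection logic): a genuine PDF begins with the "%PDF-" magic bytes, while an SVG is plain XML markup.

```typescript
// Minimal sketch: flag a file whose claimed type does not match its
// content. A genuine PDF starts with the "%PDF-" magic bytes; an SVG
// is plain XML markup. The filename below is invented.
import { readFileSync } from "node:fs";

function looksLikeDisguisedSvg(path: string): boolean {
  const head = readFileSync(path).subarray(0, 256).toString("utf8").trimStart();
  const claimsPdf = path.toLowerCase().endsWith(".pdf");
  const isRealPdf = head.startsWith("%PDF-");
  const isSvgMarkup = head.startsWith("<?xml") || head.startsWith("<svg");
  return claimsPdf && !isRealPdf && isSvgMarkup;
}

// looksLikeDisguisedSvg("quarterly_report.pdf") -> true for an SVG renamed to .pdf
```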
What made this attack particularly novel was its obfuscation method. Instead of traditional encryption, the attackers hid the malicious code within a long list of common business terminology. The SVG file was padded with invisible elements for a fake “Business Performance Dashboard,” complete with chart bars and month labels that were rendered transparent. Within a hidden attribute, the attackers concatenated words like “revenue,” “operations,” and “shares.” Embedded JavaScript then processed these terms, systematically converting sequences of business words into specific characters or instructions to reconstruct the malicious functionality.
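Microsoft has not published the exact encoding, but a hypothetical sketch of this dictionary-style obfuscation might look like the following. The word list and word-to-digit mapping are invented for illustration.

```typescript
// Hypothetical illustration of the dictionary-style obfuscation described
// above: the payload is stored as a run of innocuous business words, and
// the embedded script rebuilds it by mapping each word to a digit and
// reading digit triplets as character codes. The word list and mapping
// are invented; the real encoding has not been disclosed.
const wordToDigit: Record<string, string> = {
  revenue: "0", operations: "1", shares: "2", growth: "3", margin: "4",
  forecast: "5", quarterly: "6", assets: "7", equity: "8", dividend: "9",
};

function decode(padding: string): string {
  const digits = padding
    .toLowerCase()
    .split(/\s+/)
    .map((word) => wordToDigit[word] ?? "")
    .join("");
  let out = "";
  for (let i = 0; i + 3 <= digits.length; i += 3) {
    out += String.fromCharCode(parseInt(digits.slice(i, i + 3), 10));
  }
  return out;
}

// Six innocuous padding words decode to two characters:
console.log(decode("revenue assets shares operations revenue forecast")); // "Hi"
```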
The final payload was designed to fingerprint the victim’s browser and system. If the conditions were deemed suitable, it would redirect the user to a credential-harvesting phishing page disguised as a CAPTCHA prompt. This multi-layered deception made the file appear harmless to both users and some automated security scans.
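A defanged, hypothetical sketch of such a fingerprinting gate follows; the specific checks are illustrative assumptions rather than Microsoft's findings, and the redirect is replaced with a log statement.

```typescript
// Defanged, hypothetical sketch of a fingerprinting gate (browser
// environment assumed). The checks are illustrative guesses, not
// Microsoft's reported findings.
function looksLikeRealVictim(): boolean {
  if (navigator.webdriver) return false;   // automated browser (e.g. Selenium)
  if (screen.width < 800) return false;    // tiny headless-style viewport
  if (!navigator.language) return false;   // stripped-down sandbox
  return true;
}

if (looksLikeRealVictim()) {
  // The real payload would redirect to the fake-CAPTCHA phishing page here.
  console.log("environment passed fingerprint checks");
}
```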
Microsoft used its own AI cybersecurity tool, Security Copilot, to analyze the file’s composition. The analysis revealed several telltale signs of LLM-generated code, including overly descriptive variable names, an unnecessarily complex code structure, and formulaic obfuscation techniques. These synthetic artifacts led analysts to conclude the code was likely machine-generated. Ironically, while AI empowers attackers with new tools, the very nature of LLM-generated code can leave behind distinctive fingerprints. These unique signatures can, in turn, become valuable new detection signals for defenders.
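As a rough illustration of that idea (not Microsoft's actual tooling), a toy heuristic could score a script by the average length of its identifiers, since LLM-generated code tends toward long, descriptive names. The regex, sample string, and threshold below are invented.

```typescript
// Toy heuristic in the spirit of the artifacts described above: LLM-
// generated scripts often favor unusually long, descriptive identifiers.
// The regex, sample, and threshold are invented for illustration.
function averageIdentifierLength(source: string): number {
  const ids = source.match(/\b[A-Za-z_$][A-Za-z0-9_$]{2,}\b/g) ?? [];
  if (ids.length === 0) return 0;
  return ids.reduce((sum, id) => sum + id.length, 0) / ids.length;
}

const sample =
  "const reconstructedBusinessTermPayloadBuffer = decodeObfuscatedDashboardTerms(hiddenAttributeValue);";
console.log(averageIdentifierLength(sample) > 12 ? "suspiciously descriptive" : "typical");
```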
This incident underscores a critical duality in modern cybersecurity: the same advanced technologies that fuel more potent threats can also provide the keys to identifying and stopping them. The ongoing battle between attack and defense is increasingly being shaped by artificial intelligence.
(Source: Help Net Security)