Defeat Malware Evasion with New Framework

Summary

– Attackers evade machine learning malware detectors by making small, functionality-preserving code changes like swapping API calls or adding junk instructions.
– Researchers developed ERDALT, a framework that trains on real adversarial examples and focuses on stable, hard-to-manipulate malware features.
– ERDALT outperforms existing defenses by filtering out fragile features and combining robust ones, reducing evasion without severe performance penalties.
– Real-world malware like TrickBot and Ryuk already use these evasion techniques, such as API hashing and polymorphic code insertion.
– ERDALT is seen as a valuable additional layer in defense strategies but not a complete solution, shifting focus toward assuming attacker manipulation.

Malware creators have perfected the art of deceiving machine learning detection systems through subtle yet effective code modifications, but a new research framework offers a promising countermeasure. Developed by a team from Inria and the CISPA Helmholtz Center for Information Security, this approach specifically targets adversarial evasion techniques, where harmful software is altered just enough to appear harmless to AI models while retaining its destructive functionality.

Traditional antivirus programs rely heavily on known signatures, making them vulnerable to new or modified threats. Machine learning promised a more adaptive solution by recognizing broader behavioral patterns across malware families. However, attackers quickly adapted, using functionality-preserving changes like switching API calls or inserting irrelevant code to trick detectors. These adjustments don’t affect how the malware operates but can completely fool conventional AI models.
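To make this concrete, here is a hypothetical toy sketch (the API names and bag-of-calls feature representation are illustrative assumptions, not taken from the research) showing how two functionality-preserving edits change a detector's feature vector while the program's behavior stays the same:

```python
# Toy sketch: functionality-preserving transformations that change a static
# feature vector without changing what the program does.

def extract_features(api_calls, vocab):
    """Binary bag-of-API-calls features, a common static representation."""
    return [1 if api in api_calls else 0 for api in vocab]

VOCAB = ["CreateFileA", "CreateFileW", "WriteFile", "Sleep", "GetTickCount"]

original = ["CreateFileA", "WriteFile"]

# Edit 1: swap an API for a functional equivalent (ANSI -> wide-char variant).
swapped = ["CreateFileW" if c == "CreateFileA" else c for c in original]

# Edit 2: insert benign "junk" calls that contribute nothing to the payload.
padded = swapped + ["Sleep", "GetTickCount"]

print(extract_features(original, VOCAB))  # [1, 0, 1, 0, 0]
print(extract_features(padded, VOCAB))    # [0, 1, 1, 1, 1] -- same behavior, new fingerprint
```

A model trained only on the original fingerprint can miss the padded variant entirely, even though the malicious logic is untouched.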

Defensive strategies borrowed from fields like image recognition often fall short in this context. In visual domains, adversarial changes are typically small and imperceptible to humans. With malware, attackers aren't limited by visibility; they can make arbitrarily large alterations as long as the program still executes. This reality renders many standard robustness techniques ineffective.

Enter ERDALT (Empirically Robust by Design with Adversarial Linear Transformation), a framework built to withstand realistic adversarial manipulation. Rather than assuming minor perturbations, ERDALT trains on actual adversarial examples and prioritizes stable, hard-to-alter features. It identifies which characteristics of malware remain consistent even under common transformations, creating a more resilient detection model.

For instance, an attacker might substitute one system API call for another that performs the same function. While this can make malicious code appear benign to some detectors, ERDALT is designed to recognize such swaps by ignoring fragile features and reinforcing robust ones. This method makes it considerably harder for attackers to evade detection through simple code adjustments.
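A simplified way to picture the robust-feature idea (this is an illustration of the principle, not ERDALT's actual algorithm; the equivalence classes below are made up) is to collapse groups of interchangeable APIs into one canonical feature, so a substitution no longer changes the representation:

```python
# Illustration of the robust-feature principle (not ERDALT's actual method):
# map interchangeable APIs to a stable semantic class before featurization.

EQUIVALENCE_CLASSES = {
    "CreateFileA": "file_create",
    "CreateFileW": "file_create",
    "Sleep": "delay",
    "SleepEx": "delay",
}

def robust_features(api_calls):
    # The fragile detail (exact API name) is discarded; the stable semantic
    # class survives the attacker's swap.
    return sorted({EQUIVALENCE_CLASSES.get(c, c) for c in api_calls})

print(robust_features(["CreateFileA", "WriteFile"]))  # ['WriteFile', 'file_create']
print(robust_features(["CreateFileW", "WriteFile"]))  # identical output: the swap is invisible
```

ERDALT's contribution is learning which features behave this way, using a linear transformation fit against real adversarial examples, rather than relying on hand-built equivalence tables like the one above.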

In testing, ERDALT demonstrated superior performance compared to existing methods like adversarial training or manual feature selection. Although it didn’t completely resolve the tension between accuracy and robustness, it achieved stronger protection without the severe performance costs associated with other approaches.

These evasion tactics aren’t just theoretical. According to Aditya Sood, VP of security engineering and AI strategy at Aryaka, real-world malware families like TrickBot and PlugX have already adopted API hashing and function obfuscation to avoid detection. By storing function names as hashes rather than plain text, these programs complicate reverse engineering and slip past signature-based tools.
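API hashing works by replacing plain-text import names with numeric hashes that a loader resolves at run time. As a minimal sketch, here is a rotate-right-13 hash, a classic variant long seen in shellcode (the exact constants and details vary across malware families):

```python
# Sketch of API hashing: the import name never appears as a string in the
# binary; a loader walks the export table comparing hashes at run time.
# ROR-13 is a classic shellcode variant, shown here for illustration only.

def ror13_hash(name: str) -> int:
    h = 0
    for ch in name:
        h = ((h >> 13) | (h << (32 - 13))) & 0xFFFFFFFF  # rotate right by 13
        h = (h + ord(ch)) & 0xFFFFFFFF                   # accumulate the character
    return h

print(hex(ror13_hash("LoadLibraryA")))
print(hex(ror13_hash("GetProcAddress")))
```

Because "LoadLibraryA" never appears as a string, both signature scanners and analysts lose an easy handle on what the sample imports.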

Other families, including Mirai variants and Ryuk ransomware, have used polymorphic techniques to insert junk code or benign sections, confusing static analysis without changing the core malicious behavior. Sood views ERDALT as a valuable advancement but emphasizes that it should be part of a layered defense strategy rather than a standalone solution.
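The effect of padding a sample with benign content can be sketched with a byte histogram, one of the simplest static features (the payload bytes below are stand-ins, not real malware):

```python
# Sketch: appending benign bytes shifts a simple static feature (a byte
# histogram) while leaving the executed code untouched.
from collections import Counter

payload = bytes([0x90] * 8 + [0xCC] * 4)           # stand-in "malicious" bytes
benign_pad = b"This section is never executed. " * 4

def byte_histogram(blob):
    counts = Counter(blob)
    total = len(blob)
    return {b: counts[b] / total for b in counts}

h1 = byte_histogram(payload)
h2 = byte_histogram(payload + benign_pad)
print(h1[0x90], h2[0x90])  # the 0x90 frequency drops sharply after padding
```

Any detector leaning on such distributional features will see a different sample, which is exactly why robust approaches try to anchor on characteristics the attacker cannot cheaply dilute.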

This research signals an important shift in how we approach malware detection, anticipating adversarial manipulation rather than hoping it won’t occur. If frameworks like ERDALT can be successfully integrated into practical systems, they may help rebalance the ongoing battle between attackers and defenders, at least for the foreseeable future.

(Source: HelpNet Security)
