
Infostealers Fuel Rise of Agentic Attack Chains

Summary

– Cybercriminals in 2025 heavily automated their operations, creating systems for entire intrusion cycles with minimal human input, as detailed in Flashpoint’s 2026 report.
– Criminal interest in AI surged, with discussions focusing on weaponizing it for deepfakes, phishing, and malware, though widespread operational deployment faces integration challenges.
– Stolen credentials from infostealers are now the primary attack entry point, with 3.3 billion credentials traded and session cookies used to bypass traditional defenses.
– Vulnerability exploitation windows are shrinking, with mass exploitation occurring within hours, while the potential expiration of the CVE program contract adds systemic risk.
– Ransomware groups increasingly target people through social engineering and insider recruitment, shifting toward identity-driven extortion models that begin with legitimate access.

The digital threat landscape underwent a significant transformation in 2025, marked by a decisive shift toward automation and interconnected criminal operations. Cybercriminals are increasingly building systems capable of running entire attack cycles with minimal human intervention, creating a dangerous environment where stolen identities, unpatched software flaws, and extortion schemes feed off one another. This analysis, drawn from direct monitoring of criminal forums and underground services, reveals a security challenge defined by speed, scale, and a relentless focus on human identity.

Artificial intelligence has evolved from a novel tool into a core component of criminal infrastructure. Over the course of the year, discussions about weaponizing AI on illicit platforms skyrocketed, culminating in a staggering 1,500% increase in activity by December. Threat actors are actively experimenting with AI to automate target research, craft convincing phishing messages, and test vast troves of stolen login data. While building fully autonomous attack chains presents technical hurdles, the direction is clear. As one intelligence expert notes, AI is dramatically accelerating the execution of existing criminal tactics.

This rush to adopt new technology also creates fresh risks for defenders. Organizations are integrating AI tools and APIs into their environments faster than they can evaluate the security implications. Novel attacks now target these AI workflows directly, including “slopsquatting,” where attackers register the plausible-but-nonexistent package names that AI coding assistants hallucinate so that developers who follow the suggestion install malware, and “steganographic prompting,” which hides malicious instructions inside seemingly benign content fed to a model. The exploitation of a vulnerability in the Langflow platform to build the Flodrix botnet demonstrates how quickly criminals can weaponize new AI-centric tools.
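One practical mitigation against slopsquatting is to vet AI-suggested package names before installing them. The sketch below is illustrative only: the allowlist and similarity threshold are hypothetical stand-ins for an organization's own curated dependency list, not a vetted security control.

```python
# Illustrative sketch: flag AI-suggested package names that are near-miss
# misspellings of known, trusted packages -- the classic squatting pattern.
# TRUSTED_PACKAGES and the 0.85 threshold are hypothetical examples.
from difflib import SequenceMatcher

TRUSTED_PACKAGES = {"requests", "numpy", "pandas", "flask", "cryptography"}

def check_package(name: str, threshold: float = 0.85) -> str:
    """Classify a suggested package name before it reaches `pip install`."""
    if name in TRUSTED_PACKAGES:
        return "trusted"
    for known in TRUSTED_PACKAGES:
        # Ratio near 1.0 means the name closely resembles a trusted package.
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return f"suspect (resembles '{known}')"
    return "unknown"

print(check_package("requests"))  # trusted
print(check_package("reqeusts"))  # suspect (resembles 'requests')
```

Anything classified `unknown` would still need manual review against the real registry; the edit-distance check only catches typo-style squats, not wholly invented names.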

The primary gateway for attackers is no longer a cleverly crafted exploit, but a stolen username and password. Infostealer malware infected over 11 million devices last year, harvesting 3.3 billion credentials, cookies, and personal records that are freely traded online. Attackers use these stolen session cookies to simply log in as legitimate users, bypassing traditional security perimeters entirely. The most affected countries include India, Brazil, Indonesia, and the United States. While law enforcement actions disrupted major infostealer operations like Lumma, the market quickly adapted, with Vidar 2.0 emerging as the new top threat by early 2026. The potential combination of these credential stockpiles with automated AI systems poses a severe risk, enabling attackers to test logins against thousands of corporate services simultaneously.
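Session-cookie replay works because a valid token is normally accepted from any client. A common countermeasure, sketched minimally below under assumed field choices, is binding each session to a fingerprint of client attributes captured at login, so a cookie replayed from the attacker's machine fails validation.

```python
# Minimal sketch of session binding. The fingerprint fields (IP address and
# User-Agent) are illustrative; real deployments weight several signals.
import hashlib
import secrets

SESSIONS: dict[str, str] = {}  # session token -> fingerprint hash

def fingerprint(ip: str, user_agent: str) -> str:
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def create_session(ip: str, user_agent: str) -> str:
    """Issue a token at login, bound to the client's fingerprint."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = fingerprint(ip, user_agent)
    return token

def validate(token: str, ip: str, user_agent: str) -> bool:
    # A stolen cookie presented from a different client mismatches the
    # fingerprint recorded at login and is rejected.
    return SESSIONS.get(token) == fingerprint(ip, user_agent)
```

Strict IP binding can break legitimate sessions for mobile users behind carrier NAT, which is why production systems typically treat a fingerprint mismatch as a signal for step-up authentication rather than an outright block.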

The window of opportunity to patch critical software flaws is collapsing. Last year saw over 44,500 new vulnerabilities disclosed, with nearly 15,000 having exploit code publicly available. High-severity flaws are now being mass-exploited within hours of discovery, forcing extremely short remediation deadlines. Compounding this problem is systemic risk within the public vulnerability ecosystem: the potential expiration of the CVE program's contract could destabilize the reference databases defenders depend on, leaving organizations blind to new threats, unable to prioritize fixes effectively, and exposed for longer periods precisely when attackers are moving fastest.

Ransomware activity surged by 53% in 2025, with groups increasingly targeting people rather than just technological systems. Over 87% of attacks were linked to Ransomware-as-a-Service groups, with the United States being the most victimized country due to the high perceived value of its data. The manufacturing, technology, and healthcare sectors were hit hardest. A clear trend is the move toward extortion models based on social engineering and insider access. Criminal forums are rife with recruitment posts that function like job listings, with actors seeking individuals with specific administrative access to corporate VPNs, help desks, or cloud panels. Documented cases include bribed military contractors and compromised employees at cybersecurity firms. This shift underscores a simple criminal calculus: it is often faster to recruit or trick a person with legitimate access than to hack through a robust security system.

The collective data across these threat categories points to common defensive weaknesses: an overreliance on outdated intelligence, poor visibility into criminal markets, and security architectures that cannot keep pace with automated attacks. To defend against this evolving landscape, organizations must broaden their focus. This includes monitoring for compromised employee credentials, tracking dark web discussions about their partners, and moving beyond basic vulnerability lists to incorporate data on real-world exploitation. Defending against infostealers requires enriching raw log data with context, and managing AI risks means using automation to support, not replace, human expertise. The organizations that will succeed are those that recognize modern attacks are automated, identity-centric, and ruthlessly efficient.
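The recommendation to enrich raw infostealer log data with context can be made concrete with a small triage sketch. The domain and account lists below are hypothetical placeholders for an organization's own asset inventory and identity directory, and the priority rules are an assumed example, not a prescribed scheme.

```python
# Hedged sketch: tag a raw infostealer log entry with organizational context
# so triage can prioritize. CORPORATE_DOMAINS and PRIVILEGED_USERS stand in
# for a real asset inventory and identity directory.
CORPORATE_DOMAINS = {"vpn.example.com", "mail.example.com"}
PRIVILEGED_USERS = {"admin", "svc-backup"}

def enrich(entry: dict) -> dict:
    """Add context fields to a raw {domain, username} stealer-log record."""
    enriched = dict(entry)
    enriched["corporate_asset"] = entry["domain"] in CORPORATE_DOMAINS
    enriched["privileged"] = entry["username"].split("@")[0] in PRIVILEGED_USERS
    if enriched["corporate_asset"] and enriched["privileged"]:
        enriched["priority"] = "high"      # admin creds for a corporate system
    elif enriched["corporate_asset"]:
        enriched["priority"] = "medium"    # corporate system, ordinary account
    else:
        enriched["priority"] = "low"       # personal or third-party service
    return enriched

record = enrich({"domain": "vpn.example.com", "username": "admin@example.com"})
print(record["priority"])  # high
```

The point of the exercise is that a raw credential dump line is uninteresting on its own; the same line becomes an urgent incident once it is matched against what the organization actually owns and who holds elevated access.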

(Source: HelpNet Security)
