
AI Fuels Cybercrime’s ‘Fifth Wave’ of Attacks

Summary

– Group-IB’s report identifies a “fifth wave” of cybercrime, characterized by the weaponization of AI to make attacks cheaper, faster, and more scalable.
– Dark web forums have seen a massive spike in discussions about AI-powered criminal tools, with deepfake and synthetic identity services available for very low monthly subscriptions.
– AI is transforming phishing through “agentized” kits that use AI models to automate campaign creation, victim targeting, and personalized lure distribution.
– Cybercriminals are developing sophisticated proprietary “dark LLMs,” which are fine-tuned for malicious tasks like generating scams or malware and have no ethical restrictions.
– These readily available AI tools lower the barrier to entry for cybercrime, enabling less skilled attackers to execute convincing and scalable malicious campaigns.

The landscape of digital crime is undergoing a seismic shift, driven by the widespread availability of artificial intelligence. AI is powering a “fifth wave” in the evolution of cybercrime, offering inexpensive, ready-made malicious tools that enable sophisticated attacks. This new era turns human criminal skills into scalable services, making illegal activities cheaper, faster, and more accessible than ever before. The evidence for this transformation is stark: discussions of AI-powered criminal tools on dark web forums have skyrocketed from under 50,000 messages annually to roughly 300,000 each year since 2023.

This surge in activity reflects a booming underground market. Security analysts now routinely find “synthetic identity kits” for sale, bundling AI-generated video actors, cloned voices, and biometric datasets for as little as five dollars. Similarly, deepfake-as-a-service platforms advertise subscriptions starting at ten dollars monthly. This content can be used to lure victims into performing tasks, or to bypass authentication processes and know-your-customer (KYC) checks in order to access devices, steal money, or steal data. One of the most alarming applications is synthetic content that convincingly impersonates real individuals. While these fakes may not fool everyone, criminals find them lucrative enough if they succeed in just five to ten percent of attempts.
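The economics behind that five-to-ten-percent figure can be sketched with a back-of-envelope calculation. The ten-dollar subscription price and the low-end success rate come from the article; the monthly attempt volume and average payout per successful scam are purely illustrative assumptions, not figures from the report:

```python
# Back-of-envelope economics of low-success-rate, AI-assisted fraud.
# Subscription cost and success rate are from the article; attempt
# volume and payout are illustrative assumptions only.
tool_cost_per_month = 10.0   # deepfake-as-a-service subscription (article)
attempts_per_month = 200     # assumed campaign volume
success_rate = 0.05          # low end of the 5-10% cited in the article
avg_payout = 500.0           # hypothetical gain per successful scam

expected_gain = attempts_per_month * success_rate * avg_payout
roi_multiple = (expected_gain - tool_cost_per_month) / tool_cost_per_month

print(f"Expected monthly gain: ${expected_gain:,.0f}")
print(f"Return on the tool cost: {roi_multiple:,.0f}x")
```

Even under these modest assumptions the expected return dwarfs the tool's cost by orders of magnitude, which is why a 95% failure rate is no deterrent.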

The phishing industry has been revolutionized, entering what experts call the agentic AI era. Phishing kits are now listed at prices ranging “from as little as a Netflix subscription to $200 per month, making them accessible and affordable to groups big and small.” The innovation goes beyond simply crafting more believable emails. Modern AI capabilities are automating the entire attack chain. Criminals are embedding open-weight models into their tools to handle tasks that once required manual configuration, like managing victim lists and crafting personalized narratives for lures.

A particularly advanced service demonstrates this automation by using AI agents to develop lures, send phishing emails, and return feedback to the criminals, allowing campaigns to adapt in real-time. From the victim’s perspective, each malicious email feels personal and unique, as the phishing kit’s agent continuously generates new content. This represents a move from static toolkits to dynamic, self-optimizing attack systems.

Beyond using public chatbots for mischief, threat actors are developing their own proprietary systems. Analysts have tracked the rise of “dark large language models” (LLMs), which are more stable, capable, and entirely free of ethical restrictions. These tools have evolved from early, rudimentary experiments into custom-built, self-hosted models fine-tuned on scam linguistics and malicious code. They serve as powerful assistants for a range of criminal activities, from generating fraud content for romance or investment scams to crafting phishing kits and even aiding in malware development and vulnerability discovery.

The market for these dark LLMs is already established, with at least three active vendors identified. They offer subscriptions costing between thirty and two hundred dollars per month and boast a collective customer base exceeding one thousand users. This commercialization of weaponized AI signifies a profound change, turning advanced cyber capabilities into a cheap commodity available to anyone with malicious intent.

(Source: InfoSecurity Magazine)
