2026: HackGPT and Vibe Hacking Emerge as Top AI Threats

▼ Summary
– Cybercriminals on the dark web view AI primarily as a shortcut to make money, lowering the barrier to entry by reducing the need for deep technical skills or experience.
– They have adopted a “vibe hacking” philosophy, where following AI-generated intuition and trusting confident-sounding outputs is prioritized over mastering tools or systems.
– A market for “AI jailbreaking” services has emerged, where techniques to bypass AI safety controls are openly traded and sold as a commodity.
– Underground tools branded as “Hacking GPTs” are marketed to novice criminals, promising to automate attacks like phishing and guide users, thereby selling confidence more than sophisticated technology.
– AI is enabling more polished and convincing fraud at scale, such as fluent phishing emails, which expands the pool of potential victims by making scams harder to recognize.
Across underground forums and encrypted messaging apps, a quiet revolution is reshaping cybercrime. Hackers are no longer debating the philosophical implications of artificial intelligence. Instead, they are embracing it as the ultimate shortcut, a tool that promises to democratize digital crime by removing the need for deep technical skill. This shift is creating a new generation of threats centered on accessibility and psychological manipulation, fundamentally altering the risk landscape for organizations worldwide.
The conversation in these shadowy spaces is strikingly pragmatic. AI is not viewed as a revolutionary marvel but as a form of reassurance. It serves as proof that you no longer require years of experience or intricate knowledge to launch effective attacks. You just need the right tool and the confidence to trust it. This sentiment is perfectly captured in messages aimed at novices, framing cybercrime as an accessible venture rather than an elite craft.
This mindset has given rise to a concept known as “vibe hacking.” Borrowing from the legitimate tech world’s “vibe coding,” where developers use AI to generate code from simple descriptions, vibe hacking represents a philosophy. It’s the belief that successful intrusion is more about intuition guided by AI than about mastering complex systems. If an AI model sounds confident in its output, the prevailing attitude is that the result must be good enough to use. This philosophy reframes hacking from a skilled trade into a simple, iterative process, dramatically lowering the perceived barrier to entry.
Naturally, AI service providers implement safeguards to block the generation of malicious content. However, in the criminal underground, these restrictions are seen as minor hurdles. Bypassing these guardrails, often called AI jailbreaking, has itself become a lucrative commodity. Techniques for evading safety filters are openly packaged, traded, and sold. Dedicated channels on platforms like Telegram exist solely to market step-by-step jailbreak methods, turning the circumvention of safety controls into yet another service for sale.
Accompanying this mentality is a wave of malicious tools branded as AI copilots for crime. Names like FraudGPT and WormGPT, along with countless variants, circulate freely. These tools are advertised with bold promises: the ability to craft convincing phishing emails in seconds, generate scam scripts, explain vulnerabilities in plain language, and guide users through attacks step-by-step. The core marketing message is irresistible to a novice: you don't need to know how it works; the AI will tell you what to do. While many of these tools are simply language models wrapped around pre-written prompts, their technical simplicity is irrelevant. Their power lies in how they make users feel: capable, confident, and ready to act.
Interestingly, the crimes being sold haven’t fundamentally changed. Underground marketplaces still traffic in the familiar staples of email hacking, social media takeovers, and credential theft. What has transformed is the marketing language. Sellers now emphasize ease and automation over technical expertise. Terms like “AI-powered” or “AI-assisted” act as a modern seal of approval, often attached to services identical to those offered years before the AI boom. This rebranding is less about technological innovation and more about psychological appeal, making criminal services feel safer and more accessible to a broader, less skilled audience.
This shift reveals the true target demographic for these AI-branded services: first-time fraudsters, low-skill actors, and individuals intimidated by traditional hacking. The ads are filled with phrases like “no experience needed” and “AI handles everything.” This model mirrors the growth strategy of phishing-as-a-service platforms, aiming to expand the criminal ecosystem by systematically removing fear and friction. The most significant change driven by AI may not be technical but psychological, scaling crime by boosting confidence.
This evolution also expands the pool of potential victims. Historically, poorly written phishing emails acted as a crude filter, selecting only the most gullible targets. Generative AI has removed that filter. Scammers can now produce polished, fluent, and culturally tailored messages at an unprecedented scale. The crowded "red ocean" of crude, easily spotted scams has given way to a "blue ocean" of convincing fraud, making malicious communications far harder for both individuals and automated systems to detect.
The central concern is not that AI has created a wave of criminal masterminds. There are no mythical, AI-only super attacks. The danger is that AI is making cybercrime feel easy, normalizing reckless behavior. It encourages acting on AI output without understanding it, prioritizing speed over caution. This mentality of confidence without comprehension doesn’t just empower criminals; it mirrors risky trends in legitimate business, such as over-automation and reduced human oversight. The underground is not waiting for perfect AI; it is already acting on imperfect results, and that is more than enough to scale threats.
Effectively countering this trend requires a shift from reactive to proactive defense. Organizations need visibility into how these techniques are developed and sold before they are deployed at scale. Continuous monitoring of dark web sources, including forums, marketplaces, and messaging channels, is critical to exposing early signals. This includes tracking discussions around jailbreak techniques, malicious AI workflows, and the commercialization of tools like FraudGPT. Understanding the attacker’s mindset and emerging abuse patterns provides defenders with the crucial lead time needed to harden defenses.
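As a minimal illustration of what this kind of monitoring can look like at its simplest, the sketch below flags collected forum posts that mention AI-abuse terms. The watchlist and sample posts are invented for the example; a real pipeline would draw on curated, continuously updated threat-intelligence feeds and far richer matching than a static keyword list.

```python
# Minimal sketch: flag collected underground-forum posts that mention
# AI-abuse terms. The watchlist and sample posts are illustrative only;
# a production pipeline would use curated, continuously updated feeds.

# Hypothetical watchlist of terms tied to AI-enabled crime chatter.
WATCHLIST = ["jailbreak", "fraudgpt", "wormgpt", "vibe hacking"]

def flag_posts(posts: list[str]) -> list[tuple[str, list[str]]]:
    """Return (post, matched_terms) pairs for posts hitting the watchlist."""
    flagged = []
    for post in posts:
        lowered = post.lower()
        hits = [term for term in WATCHLIST if term in lowered]
        if hits:
            flagged.append((post, hits))
    return flagged

if __name__ == "__main__":
    sample = [
        "Selling step-by-step jailbreak prompts, DM me",
        "Anyone tried FraudGPT for invoice scams?",
        "Legit question about server hardening",
    ]
    for post, hits in flag_posts(sample):
        print(f"ALERT: matched {hits} in: {post}")
```

Even a crude filter like this surfaces early signals; the value lies less in the matching logic than in systematically collecting the underground chatter to run it against.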
In essence, AI has not reinvented cybercrime. It has, however, fundamentally changed how cybercriminals view their own capabilities. In these spaces, AI is less a tool and more a form of permission, a way to bypass the need for deep knowledge. Vibe hacking is the embodiment of this shift: it’s about confidence without understanding. As this confidence spreads, the threat landscape becomes more crowded and more dangerous, not because the attacks are smarter, but because they are now within reach of virtually anyone.
(Source: Bleeping Computer)