Darknet AI: The Uncensored Assistant for Cybercriminals

▼ Summary
– Resecurity identified the rise of uncensored darknet AI assistants like DIG AI, which are popular among cybercriminals for malicious activities such as generating harmful content and attack scripts.
– These “criminal” or “jailbroken” AI tools, including FraudGPT and WormGPT, lower the barrier to cybercrime by automating illegal operations and bypassing the safety restrictions of legitimate AI models.
– A key concern is DIG AI’s ability to facilitate the creation of AI-generated child sexual abuse material (CSAM), posing a major new challenge for law enforcement and global child safety.
– The service is hosted on the Tor network, making it hard for law enforcement to access, and is actively promoted on dark web marketplaces involved in activities like drug trafficking.
– Resecurity forecasts that 2026 will bring significant new security challenges as criminals weaponize AI to scale operations, with upcoming major events like the Olympics and FIFA World Cup presenting fresh targets.
The emergence of uncensored darknet AI assistants presents a formidable new security challenge, enabling threat actors to leverage advanced data processing for malicious purposes. One such tool, known as DIG AI, surfaced in late September and has rapidly gained traction within cybercriminal and organized crime networks. Analysts observed a significant uptick in its use during the final quarter of the year, with activity peaking around the winter holidays as global illegal operations reached unprecedented levels. With major international events such as the Winter Olympics and the FIFA World Cup on the horizon, these criminal AI tools threaten to scale malicious operations, undermine content protection systems, and introduce novel risks.
This phenomenon is often labeled “Not Good” AI, referring to the application of artificial intelligence to clearly illegal, unethical, or harmful activities, including cybercrime, extremism, privacy breaches, and disinformation campaigns. The ethical and legal standing of such tools hinges entirely on their application and the intent of their users. Recent data shows a staggering increase of more than 200% in discussions and use of malicious AI on underground forums, signaling rapid evolution in this dark niche. While FraudGPT and WormGPT are among the most notorious names, the ecosystem continuously expands with new jailbroken or custom-built large language models (LLMs). These dark LLMs effectively lower the barrier to entry for cybercrime by automating and enhancing complex malicious tasks.
DIG AI exemplifies this trend. Accessible via the Tor network without any account registration, it allows users to generate instructions for activities ranging from constructing explosive devices to creating illegal and abusive content. Its presence on the dark web makes it difficult for law enforcement to track and disrupt, fostering a thriving underground market. Beyond mere access, the tool can automate the generation of fraudulent content and malicious scripts, such as code for backdooring vulnerable web applications. While some processing tasks on the platform are slow, indicating limited computational resources, this very limitation opens a business opportunity for criminals to offer premium, higher-capacity services.
A primary concern is how tools like DIG AI can empower extremist and terrorist organizations. Tests using taxonomy dictionaries related to explosives, drugs, and fraud confirmed the model’s ability to provide detailed, actionable information in these restricted areas. The service is actively promoted across dark web marketplaces involved in drug trafficking and stolen data monetization, clearly identifying its target audience. According to its creator, who operates under the alias Pitch, DIG AI is based on a modified version of ChatGPT Turbo, deliberately stripped of the safety protocols that govern mainstream AI systems.
This represents a deliberate criminal repurposing of AI, designed to bypass the content policies and filtering mechanisms that serve as standard ethical safeguards. Major platforms such as ChatGPT, Claude, and Gemini implement moderation to restrict hate speech, violence, illegal activities, and misinformation, driven by legal compliance, user protection, and ethical standards. Legislative efforts, such as the TAKE IT DOWN Act targeting non-consensual AI-generated imagery, aim to establish human accountability for harmful AI output and the criminal intent behind it. However, these regulations largely fail to reach the dark web, where services like DIG AI operate with impunity.
A particularly alarming capability is the generation of AI-created child sexual abuse material (CSAM). Generative AI technologies, including diffusion models and text-to-image systems, are being exploited to produce highly realistic synthetic abusive imagery. DIG AI can facilitate this, enabling the creation of hyper-realistic explicit content, which poses immense challenges for detection and child safety agencies worldwide. Law enforcement has already documented cases, such as a 2024 conviction of a U.S. child psychiatrist for distributing AI-generated CSAM that met federal prosecution thresholds. New laws in the EU, UK, and Australia now specifically criminalize such synthetic material, regardless of whether real children are depicted.
Looking ahead, security professionals forecast that bad actors will increasingly manipulate training datasets and fine-tune open-source models to systematically produce illegal outputs. By hosting these jailbroken models on private infrastructure or the dark web, criminals can generate unlimited undetectable content and even offer this as a service to others. The internet community faces ominous security challenges, where weaponized AI could transform traditional threats and create risks at an unprecedented scale. Cybersecurity and law enforcement must prepare to confront this new frontier in the digital domain, where the fight extends beyond human adversaries to include the malicious capabilities of the machine itself.
(Source: HelpNet Security)
