Cybercriminals Complain AI Slop Is Flooding Their Forums

▼ Summary
– Cybercrime forum users are complaining about the introduction of generative AI, viewing it as low-quality “garbage” that they did not ask for.
– A study analyzing 97,895 AI-related conversations on hacking forums found growing skepticism about AI’s role in hacking, shifting from initial positivity.
– Forum members are annoyed by AI-generated posts, such as bullet-pointed explainers, which they see as low-effort and damaging to community interaction.
– AI undermines the social dynamics of these forums, where users build reputations for skill, and AI-generated content is seen as a threat to that credibility.
– Despite interest in AI for hacking tasks like writing malicious code, many users resist AI posts because they value human interaction and friendship over automated content.

The same tired complaint, but from an unexpected source. “I’m disappointed that you are working to incorporate AI garbage into the site,” an anonymous user posted online. “No-one is asking for this, we want you to improve the site, stop charging for new features.”
This isn’t a frustrated app user lamenting the latest forced AI update. The person complaining is a member of a cybercrime forum, voicing displeasure over plans to introduce more generative AI into the platform. Scammers, grifters, and low-level hackers are now joining the ranks of millions who are fed up with AI slop cluttering their digital spaces.
“People don’t like it,” says Ben Collier, a security researcher and senior lecturer at the University of Edinburgh. In a recent study examining how low-level cybercriminals use AI, Collier and his colleagues observed growing resistance to generative AI within underground hacking communities and cybercrime forums.
Since the launch of ChatGPT in 2022, the initial excitement about AI’s potential for hacking has given way to deep skepticism. The study, conducted with researchers from the University of Cambridge and the University of Strathclyde, analyzed 97,895 AI-related conversations on cybercrime forums through the end of last year. Complaints ranged from users posting “bullet-pointed explainers” of basic cybersecurity concepts to a general flood of low-quality contributions. Some also blamed Google’s AI search overviews for reducing forum traffic.
For decades, cybercrime message boards and marketplaces, many of Russian origin, have served as hubs for illicit collaboration. These platforms facilitate the trading of stolen data, advertising of hacking gigs, and even casual trolling among rivals. Despite the constant risk of being scammed by fellow criminals, these forums foster a sense of community. Users build reputations for reliability, and forum owners host writing competitions.
“These are essentially social spaces. They really hate other people using AI on the forums,” Collier explains. He notes that the social dynamics can be disrupted when aspiring cybercriminals use AI to generate hacking guides in an attempt to boost their standing. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”
Posts reviewed by WIRED on Hack Forums, a self-described space for hacking enthusiasts, reveal clear irritation with AI-generated content. “I see a lot of members using AI for making their threads/posts and it pisses me off since they don’t even take the time to write a simple sentence or two,” one user wrote. Another was more direct: “Stop posting AI shit.”
Collier points out that in many cases, users are annoyed because they value genuine human interaction. “If I wanted to talk to an AI chatbot, there are many websites for me to do so … I come here for human interaction,” one post cited in the research states.
Since ChatGPT’s debut in late 2022, interest in AI-powered hacking has surged. Both sophisticated and novice hackers have explored how the technology can enhance online crime. Some organized fraudsters have already adopted realistic AI face-swapping tools and AI-translated social engineering messages. However, much of the focus remains on generative AI’s ability to write malicious code and find vulnerabilities.
(Source: Wired)