OpenAI’s GPT-4o-mini Exploited by Spammers to Evade Filters Across 80,000 Websites

Spammers harnessed OpenAI’s technology to craft highly individualized messages that slipped past spam-detection mechanisms, blasting unsolicited content to more than 80,000 websites over a four-month span, researchers reported on Wednesday.
This revelation, detailed in a report by SentinelLabs, a research division of the cybersecurity firm SentinelOne, underscores the double-edged nature of advanced language models: the same capacity to draw on vast training data and generate content at scale for legitimate purposes lends itself equally to abuse. OpenAI terminated the spammers’ account following SentinelLabs’ disclosure, but the four months during which the activity went undetected illustrate a reactive rather than proactive approach to enforcement.
“You are a helpful assistant”
The spam campaign, attributed to a framework known as AkiraBot, automated the distribution of messages promoting dubious search engine optimization (SEO) services to small- and medium-sized websites. Python scripts rotated the domain names advertised in the spam messages, and OpenAI’s GPT-4o-mini chat API was used to generate a unique message tailored to each recipient site, a strategy that likely allowed the campaign to slip past spam filters designed to block identical content sent en masse. The messages were delivered through contact forms and live-chat widgets embedded in the targeted websites.
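To illustrate the general pattern described above, the following minimal Python sketch shows how a per-site prompt to the gpt-4o-mini model through OpenAI’s chat completions API yields different wording for each recipient, which is what defeats filters keyed to identical bulk content. The prompt text, the generate_message helper, and the example domains are assumptions for illustration; this is not AkiraBot’s actual code.

    # Minimal sketch (illustrative assumptions, not AkiraBot's code) of the pattern
    # described above: asking gpt-4o-mini for text tailored to each target site,
    # so each message is worded differently from the last.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_message(site_name: str) -> str:
        """Return a short note customized to one site (hypothetical helper)."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful assistant"},
                {"role": "user",
                 "content": f"Write a short, friendly note tailored to the website {site_name}."},
            ],
        )
        return response.choices[0].message.content

    # Two calls with different site names produce substantively different text,
    # which is why signature-based "identical message" filters fail to match them.
    print(generate_message("example-bakery.com"))
    print(generate_message("example-florist.net"))

Because every response is generated fresh per recipient, no two messages share the fixed wording that duplicate-content filters typically rely on.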
(Source: Ars Technica)