
OpenAI: AI Supercharges Cybercriminal Operations

Summary

– OpenAI’s report reveals that cybercriminals and state-sponsored groups are using AI tools such as ChatGPT for surveillance, malware creation, and phishing campaigns.
– The company has disrupted more than 40 malicious networks since February 2024 for violating its usage policies, including networks operating from Cambodia, Russia, and China.
– AI is being integrated into existing criminal workflows to improve efficiency and reduce errors, for example to generate Remote Access Trojans and obfuscate malicious code.
– Threat actors are adapting their AI use to avoid detection, for instance by removing identifiable markers such as em-dashes from AI-generated content.
– Current AI models are not enabling novel cyberattacks but are being used to speed up traditional methods; OpenAI emphasizes building tools that benefit defenders rather than enhance offensive capabilities.

A new report from OpenAI reveals how cybercriminals and state-sponsored groups are increasingly leveraging artificial intelligence to enhance their malicious activities. The research highlights that while AI offers significant benefits for cybersecurity defense, it has also become a powerful tool for threat actors seeking to streamline operations, conduct surveillance, and spread disinformation. Since early 2024, OpenAI has disrupted more than forty malicious networks violating its usage policies, providing critical insights into emerging AI-driven threats.

The investigation identifies four major trends in how AI is reshaping the tactics, techniques, and procedures of cybercriminals. One prominent finding is the integration of AI into existing criminal workflows to boost efficiency and reduce errors. For instance, an organized crime network based in Cambodia attempted to use ChatGPT to refine its processes; OpenAI terminated multiple accounts for trying to generate harmful tools such as Remote Access Trojans, credential stealers, and code-obfuscation software.

Another concerning trend involves threat groups using multiple AI models for distinct malicious purposes. A likely Russian entity employed various AI tools to produce fraudulent video prompts, social media content, and propaganda materials. Separately, Chinese-language accounts, possibly linked to the threat group UTA0388, were banned for using ChatGPT to create phishing content and debug code. This group is known for targeting Taiwan’s semiconductor sector, academic institutions, and think tanks.

Cybercriminals are also using AI for adaptation and obfuscation to avoid detection. Networks from Cambodia, Myanmar, and Nigeria have specifically requested AI models to remove identifiable markers, such as em-dashes, from generated content. This indicates that threat actors are closely following public discussions about AI detection methods and adjusting their approaches accordingly.
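The report does not publish the actors' actual prompts or tooling, but the behavior it describes amounts to trivial post-processing of model output. A minimal, purely illustrative sketch (the function name and marker list are assumptions for the example, not anything from the report) might look like this:

```python
# Illustrative sketch only: removing stylistic "tells" commonly associated
# with AI-generated text, such as the em-dash, before publishing content.
# The marker-to-replacement mapping here is hypothetical.
AI_MARKERS = {
    "\u2014": ", ",   # em-dash replaced with a plain clause break
    "\u2013": "-",    # en-dash replaced with a hyphen
}

def strip_ai_markers(text: str) -> str:
    """Return text with characters often flagged as AI markers replaced."""
    for marker, replacement in AI_MARKERS.items():
        text = text.replace(marker, replacement)
    return text

print(strip_ai_markers("The deal closed\u2014ahead of schedule."))
```

The point is not the code's sophistication but its simplicity: evading surface-level detection heuristics requires no special capability, which is why OpenAI's report treats detection markers as a moving target rather than a reliable signal.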

State-sponsored groups are similarly exploiting AI capabilities. OpenAI recently disrupted networks associated with People’s Republic of China government entities, where accounts sought ChatGPT’s help in drafting proposals for large-scale social media monitoring systems. Some requests involved developing tools to analyze transportation bookings alongside police records, enabling surveillance of the Uyghur minority. Another case involved using AI to identify funding sources for an X account critical of the Chinese government.

Despite these developments, OpenAI emphasizes that current AI models have not been used to create entirely new forms of cyberattacks. The company notes that its systems generally refuse requests that could lead to novel offensive capabilities unknown to cybersecurity professionals. Instead, threat actors are mainly using AI to accelerate and refine existing malicious strategies. OpenAI remains committed to developing tools that support defenders and help society counter these evolving threats.

(Source: ZDNET)
