
1 in 4 UK and US Firms Hit by Data Poisoning Attacks

Summary

– British and American cybersecurity leaders are concerned about their expanding AI attack surface, particularly unsanctioned AI tool use and data poisoning attacks.
– A poll of 3,000 IT security leaders found that 26% have suffered a data poisoning attack, which alters AI model behavior by corrupting training data.
– The report revealed that 37% of enterprises face employees using generative AI tools without permission, introducing risks like data leakage and compliance issues.
– Respondents cited AI-generated phishing (38%), misinformation (42%), and shadow AI (34%) as top emerging threats, though deepfake attacks decreased from 33% to 20%.
– Despite concerns, most leaders feel prepared to defend against AI threats, and 75% are implementing acceptable use policies for AI to mitigate risks.

A significant number of businesses in the United Kingdom and the United States are facing a growing threat from data poisoning attacks, with new research revealing that one in four organizations has already been impacted. This alarming trend highlights the expanding vulnerabilities associated with artificial intelligence systems, particularly as companies increasingly integrate AI into critical operations.

According to a comprehensive survey of 3,000 IT security leaders across both nations, 26% of firms reported experiencing data poisoning incidents, where malicious actors deliberately manipulate training data to corrupt AI model behavior. These attacks can sabotage organizational functions or enable threat actors to bypass security measures, such as causing malware detection tools to fail. Previously considered a largely theoretical risk, data poisoning has now emerged as a tangible and widespread concern.
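To make the mechanism concrete, here is a minimal sketch of the simplest poisoning technique, label flipping, run against a toy scikit-learn classifier standing in for a malware detector. The synthetic dataset, the logistic regression model, and the 30% flip rate are illustrative assumptions, not details drawn from the survey.

```python
# Minimal label-flipping sketch: a toy stand-in for a malware detector,
# not a reconstruction of the attacks described in the survey.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "benign (0) vs. malicious (1)" feature data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Baseline detector trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker relabels 30% of the malicious training samples as benign,
# mimicking deliberately corrupted training data.
y_poisoned = y_train.copy()
malicious = np.where(y_train == 1)[0]
flipped = rng.choice(malicious, size=int(0.3 * len(malicious)), replace=False)
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The telltale symptom: recall on the malicious class drops, i.e. more
# malware slips past the detector, the failure mode cited above.
print("clean recall:   ", recall_score(y_test, clean.predict(X_test)))
print("poisoned recall:", recall_score(y_test, poisoned.predict(X_test)))
```

Even this crude attack measurably lowers detection of the malicious class, illustrating how poisoned training data translates directly into missed threats.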

The study also uncovered that 37% of enterprises are grappling with unauthorized use of generative AI tools by employees. This “shadow AI” phenomenon introduces serious risks, including potential data leaks, compliance violations, and security weaknesses if unsanctioned applications are exploited. Earlier this year, for instance, vulnerabilities were discovered in DeepSeek’s R1 language model, and the company inadvertently exposed sensitive user chat histories, underscoring the dangers of poorly managed AI adoption.

Security leaders expressed mixed sentiments regarding AI's role in their organizations. On one hand, they identified several pressing threats for the coming year, including AI-generated phishing (38%), misinformation (42%), shadow AI (34%), and deepfake impersonation during virtual meetings (28%). Interestingly, reported deepfake attacks decreased from 33% to 20% year-over-year.

Despite these concerns, an overwhelming majority of respondents expressed confidence in their ability to counter AI-driven threats. Nearly 90% felt prepared to defend against phishing, deepfakes, AI malware, misinformation, shadow AI, and data poisoning. Additionally, 75% of organizations are implementing acceptable use policies for AI to curb unsanctioned tool usage and strengthen governance.

Chris Newton-Smith, CEO of the firm behind the research, described AI as a double-edged sword. He emphasized that while the technology holds enormous promise, risks are evolving just as rapidly. Many organizations, he noted, rushed into AI adoption and are now confronting the consequences. Data poisoning attacks not only compromise technical systems but also threaten the integrity of essential services. Combined with the proliferation of shadow AI, these challenges underscore the urgent need for robust governance frameworks to safeguard both businesses and the public.

(Source: Info Security)
