50% of Security Experts Call for GenAI Deployment Halt

▼ Summary
– 48% of security professionals advocate for a “strategic pause” in generative AI deployment to strengthen defenses, per a Cobalt report.
– 94% of surveyed security leaders noted a significant rise in genAI adoption over the past year, with 36% admitting deployment outpaces their team’s capacity.
– 72% of practitioners rank genAI as their top IT risk, citing concerns like sensitive data disclosure (46%) and model poisoning (42%).
– 32% of genAI vulnerabilities are high or critical risk, the highest among all asset types, yet only 21% are resolved, the lowest resolution rate.
– Common genAI vulnerabilities include SQL injection (19.4%) and prompt injection attacks, which can expose sensitive data or disrupt services.
Nearly half of cybersecurity professionals are urging organizations to slow down generative AI adoption until security measures catch up with emerging risks. A recent industry report reveals that 48% of security experts advocate for a temporary halt in deploying these advanced AI systems, citing growing concerns about unaddressed vulnerabilities.
The study found overwhelming evidence of rapid genAI adoption, with 94% of security leaders confirming a dramatic surge in implementation across their sectors. However, 36% admit their teams struggle to keep pace with the speed of deployment, leaving critical gaps in protection. While some argue for a strategic pause, industry voices caution that delaying innovation isn’t practical when malicious actors continuously refine their tactics.
AI-related risks now dominate security discussions, with 72% of practitioners ranking genAI as their top IT concern. The most pressing threats include sensitive data exposure (46%), model poisoning (42%), and inaccurate outputs (40%). Alarmingly, one-third of organizations neglect regular security assessments for their large language models (LLMs), despite the high stakes.
Testing data paints a troubling picture, 32% of vulnerabilities in genAI tools are classified as high or critical severity, the highest rate across all technology categories. Even worse, only 21% of these critical flaws get resolved, far below the resolution rates for traditional systems. Common weaknesses mirror classic web vulnerabilities, with SQL injection (19.4%) and cross-site scripting (9.7%) topping the list, proving that basic security hygiene remains essential even in cutting-edge AI deployments.
The report also uncovered unique LLM vulnerabilities requiring specialized expertise to detect. Prompt injection attacks bypassing content filters to generate harmful or biased responses. Experts emphasize that identifying these sophisticated threats demands human-led testing strategies rather than automated scans.
As AI capabilities expand, security frameworks must evolve just as quickly, otherwise, organizations risk building transformative technologies on shaky defensive foundations.
The findings serve as a wake-up call for businesses racing to integrate genAI: innovation without adequate safeguards could backfire spectacularly. Proactive risk management, continuous testing, and adaptive security practices are no longer optional, they’re the price of staying competitive in an AI-driven landscape.
(Source: Info Security)