
AI Security Risks: Even Experts Skip Oversight

Summary

– Security teams are increasingly using unapproved AI tools, creating a major blind spot known as “shadow AI,” which bypasses standard security checks.
– 86% of cybersecurity professionals use AI tools, with nearly a quarter doing so through personal accounts or unapproved extensions, often for tasks like writing detection rules or reviewing code.
– Around 30% of respondents admit inputting internal documents, emails, or customer data into AI systems, raising risks of leaks, privacy issues, and compliance violations.
– Only 32% of organizations actively monitor AI use, with 14% having no monitoring at all, leaving many companies unaware of AI-related risks.
– Responsibility for AI risk is unclear, with 39% of respondents stating no one is officially in charge, highlighting the need for better coordination and clear ownership.

Cybersecurity teams are inadvertently contributing to AI security risks by using unapproved tools in their daily workflows. A recent survey conducted by AI security firm Mindgard reveals that despite being responsible for safeguarding organizational data, many professionals bypass official channels when leveraging artificial intelligence solutions. The study, which gathered insights from over 500 attendees at major 2025 security conferences, exposes a troubling gap in oversight.

Shadow AI, the unauthorized adoption of AI tools, has emerged as a critical vulnerability within security departments. Much like shadow IT, this practice circumvents standard protocols, but the stakes are significantly higher. AI systems often handle sensitive materials, including proprietary code, internal communications, and customer information, amplifying risks like data breaches, privacy violations, and regulatory noncompliance.

The numbers paint a concerning picture: 86% of cybersecurity experts admit to using AI tools, with nearly 25% accessing them through personal accounts or unvetted browser extensions. Three-quarters suspect their colleagues do the same, primarily for tasks like drafting detection rules, developing training materials, or analyzing code.

What makes this trend particularly alarming is the nature of the data being processed. Some 30% of respondents confirmed uploading internal documents and emails to AI platforms, while a similar share acknowledged using customer or other confidential business data. A further 20% admitted submitting sensitive information themselves, and 12% were unaware of what data their teams might be feeding into these systems.

Steve Wilson, co-chair of the OWASP GenAI Security Project, emphasizes that while the risks are real, they can be managed. “Uploading sensitive data to third-party SaaS tools, whether AI assistants or file-sharing platforms, always carries risk. The answer isn’t to ban AI but to implement robust governance,” he explains. In his view, effective policies, employee training, and standardized controls could make generative AI as safe as trusted enterprise cloud services.
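As a concrete illustration of the “standardized controls” Wilson describes, the sketch below shows a minimal pre-submission filter that redacts obvious sensitive patterns before a prompt leaves the organization. This is a hypothetical example, not a Mindgard or OWASP tool; the pattern list and function names are invented for illustration, and a production deployment would rely on a dedicated DLP engine rather than a few regexes.

```python
import re

# Illustrative patterns an organization might flag before text leaves
# its boundary; a real setup would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api-key": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Email jane.doe@example.com a summary; auth token sk-abc123def456ghi789."
print(redact(prompt))
# Email [REDACTED-EMAIL] a summary; auth token [REDACTED-API-KEY].
```

Even a filter this crude changes the failure mode: instead of sensitive data silently leaving with every prompt, the obvious cases are stripped and the rest can be routed to review.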

Wilson also shifts focus to a broader threat landscape: “The real security perimeter today revolves around identity. Compromised credentials, insider threats, and AI-driven attacks like deepfake phishing demand immediate attention.”

Despite these challenges, oversight remains lax. Only 32% of organizations actively monitor AI usage, while 24% rely on informal methods such as surveys, which are often inadequate for revealing actual practice. A concerning 14% have no monitoring whatsoever, leaving them vulnerable to unchecked risks.
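For organizations moving toward active monitoring, one lightweight starting point is scanning egress or proxy logs for connections to known generative-AI endpoints. The sketch below assumes a CSV proxy export with user and host columns; the column names, file name, and domain list are illustrative assumptions, not a vendor-specific schema or recommended denylist.

```python
import csv
from collections import Counter

# Illustrative denylist; a real program would source this from
# threat-intel or CASB feeds and keep it current.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count requests per user to known AI domains in a proxy log CSV."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Example: surface the five heaviest users of unsanctioned AI endpoints.
for user, count in shadow_ai_hits("proxy_export.csv").most_common(5):
    print(f"{user}: {count} requests")
```

Even a simple tally like this turns “no monitoring whatsoever” into a measurable baseline that policy and training efforts can be checked against.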

Accountability is another gray area: 39% of respondents said their company lacks clear ownership of AI-related risks, while 38% believe security teams shoulder the responsibility. Smaller groups pointed to data science, leadership, or legal departments, underscoring the need for cross-functional collaboration and clearly defined roles.

The findings highlight an urgent need for organizations to address shadow AI with structured policies and proactive monitoring, before vulnerabilities escalate into full-blown crises.

(Source: Help Net Security)

