
AI Boom Revives Old Security Mistakes, Mandiant VP Warns

Summary

– AI adoption in enterprises is reviving old security failures due to neglect of basic controls, such as unencrypted communication streams.
– Mandiant’s red team found attackers could change data classifications in AI environments to bypass protections like data loss prevention.
– In simulated attacks, red teamers used social engineering for initial access, then leveraged authorized AI to steal data and change policies.
– Organizations should establish AI security governance processes early to prevent uncontrolled AI usage and related risks.
– CISOs must not assume AI adoption replaces basic cybersecurity responsibilities, as basic controls are often missing in AI workflow deployments.

The rapid acceleration of enterprise AI adoption is not just introducing novel cyber threats; it is also breathing new life into old, avoidable security failures. That warning comes from a top executive at Mandiant, who says many organizations are so focused on futuristic risks that they are forgetting the fundamentals.

During Google Cloud Next 26, Jurgen Kutscher, VP of Mandiant Consulting at Google Cloud, told Infosecurity that the current AI boom is creating a dangerous blind spot. “A lot of the old problems are new again,” he said. “We’ve seen enterprises really worried about new AI threats like large language model poisoning while forgetting the most basic security controls.”

Mandiant’s red team has uncovered these lapses firsthand. In simulated attacks, testers exploited poorly managed AI environments to change data classifications, bypassing protections like data loss prevention (DLP) solutions. Kutscher described being “surprised” to find elementary mistakes, such as unencrypted communication streams between AI tools and browsers. “For instance, we observed an unencrypted communication stream between the AI and the browser when working with a financial company,” he noted, calling attention to how basic hygiene was being ignored.
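To illustrate the kind of basic hygiene check Kutscher is describing, the sketch below flags AI integration endpoints that communicate over plain HTTP rather than TLS. It is a minimal example only, not Mandiant tooling; the endpoint inventory and names are hypothetical stand-ins for whatever configuration an organization keeps for its approved AI tools.

```python
# Minimal sketch (not Mandiant tooling): flag AI integration endpoints that
# use plain HTTP instead of TLS. The endpoint list below is a hypothetical
# example of an inventory an organization might keep for approved AI tools.
from urllib.parse import urlparse

# Hypothetical inventory of AI tool endpoints that browsers or agents talk to.
AI_ENDPOINTS = [
    "https://ai-gateway.example.com/v1/chat",
    "http://internal-llm.example.local:8080/generate",  # unencrypted: would be flagged
]

def find_unencrypted(endpoints):
    """Return endpoints whose URL scheme is not HTTPS."""
    flagged = []
    for url in endpoints:
        scheme = urlparse(url).scheme.lower()
        if scheme != "https":
            flagged.append(url)
    return flagged

if __name__ == "__main__":
    for url in find_unencrypted(AI_ENDPOINTS):
        print(f"WARNING: unencrypted AI communication stream: {url}")
```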

In several engagements, Mandiant red teamers used social engineering to gain initial access, then relied on the AI to complete the attack. “Once we’re inside, we’ve had the AI do the rest for us, including data theft and everything. And I’m talking about authorized AI deployments, not even shadow AI cases, where employees have deployed AI workflows without the company’s oversight,” Kutscher explained.

To counter this, he stressed that organizations must build AI security governance processes as quickly as possible. Creating clear policies and governance frameworks is far easier than cleaning up uncontrolled AI usage after the fact. He recommended revisiting secure architecture and conducting red-team validation to ensure critical assets are properly segmented.
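As a rough illustration of what such a governance process can catch early, the following sketch compares observed AI workflow deployments against an approved registry and surfaces unapproved "shadow" usage. It is a simplified assumption-laden example, not a prescribed framework; every workflow name and field here is hypothetical.

```python
# Minimal governance sketch (hypothetical data throughout): compare observed AI
# workflow deployments against an approved registry to surface shadow AI usage
# and approved workflows that have not been through a DLP review.

# Hypothetical registry maintained by the AI governance process.
APPROVED_WORKFLOWS = {
    "support-chat-assistant": {"owner": "it-security", "dlp_reviewed": True},
    "code-review-copilot": {"owner": "platform-eng", "dlp_reviewed": False},
}

# Hypothetical inventory of what is actually running (e.g. from asset discovery).
OBSERVED_WORKFLOWS = [
    {"name": "support-chat-assistant"},
    {"name": "finance-report-summarizer"},  # not in the registry: flagged as shadow AI
]

def audit(observed, approved):
    """Split findings into unapproved deployments and approved-but-unreviewed ones."""
    unapproved = [w["name"] for w in observed if w["name"] not in approved]
    not_reviewed = [name for name, meta in approved.items() if not meta["dlp_reviewed"]]
    return unapproved, not_reviewed

if __name__ == "__main__":
    shadow, pending = audit(OBSERVED_WORKFLOWS, APPROVED_WORKFLOWS)
    for name in shadow:
        print(f"Unapproved AI workflow detected: {name}")
    for name in pending:
        print(f"Approved workflow missing DLP review: {name}")
```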

While acknowledging AI’s defensive potential, Kutscher urged CISOs not to treat AI adoption as a replacement for basic cybersecurity duties. “It’s possible that these mistakes partly come from the fact that CISOs aren’t always involved in the deployment of AI workflows, among many other reasons, I don’t want to speculate, but the lack of basic security controls around AI workflow deployments is there and it’s a significant risk,” he concluded.

(Source: Infosecurity Magazine)
