
AI’s New Playbook for Cybersecurity Defense

Summary

Enterprise security teams lack confidence in their ability to detect new, adaptive AI-powered threats from both external and internal sources.
– External AI threats are growing, with cybercriminals using AI to create adaptive, hard-to-detect attacks like polymorphic malware and deepfake social engineering.
– Internal AI risks are a serious concern, including employee misuse of public AI tools and the potential for AI agents to act as insider threats.
– Current security tools and processes are insufficient, with leaders reporting low confidence in data protection, vulnerability analysis, and incident detection against AI-era attacks.
– Organizations face barriers to improving defenses, including complex IT environments, a shortage of skilled staff, and limited budgets for upgrading outdated tools.

A new study reveals that enterprise security teams are largely unprepared for the sophisticated threats posed by artificial intelligence. Research from Lenovo, which gathered insights from 600 IT leaders globally, paints a picture of widespread anxiety regarding both external and internal AI-related dangers. Confidence in the ability of current security infrastructures to manage these emerging risks remains alarmingly low.

When it comes to dangers originating from outside the organization, the findings are stark. More than 60% of IT leaders view cybercriminals’ use of AI as a significant and growing risk. These AI-enhanced attacks possess a dangerous agility, allowing them to adapt to defenses in real time, mimic legitimate user activity, and operate seamlessly across cloud platforms, devices, and applications. Defending against techniques like polymorphic malware, deepfake-driven social engineering, and AI-powered brute-force attacks is a challenge that many respondents feel ill-equipped to handle. The report emphasizes that AI is dramatically accelerating the speed of attacks, enabling adversaries to generate malicious code and exploit vulnerabilities at a pace that traditional human-led security teams struggle to match.

The threat landscape within an organization’s own walls is equally concerning. A substantial 70% of leaders identified the misuse of public AI tools by employees as a serious concern, while over six in ten believe AI agents themselves represent an insider threat they are not ready to address. Fewer than 40% expressed confidence in their ability to manage these internal risks. Additional worries focus on the security of proprietary AI models, their training data, and the prompts used to operate them. As companies integrate AI more deeply into their operations, the potential for data poisoning or model tampering poses a direct threat to business integrity, reputation, and data confidentiality.
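To make the model-tampering risk concrete, one common baseline safeguard is to verify the integrity of model artifacts before they are loaded or deployed. The sketch below is illustrative only and not drawn from the report; the file names and manifest layout are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare each artifact's current hash against a manifest.

    The manifest (hypothetical format) is assumed to map file names to
    expected SHA-256 digests recorded at training time. Any mismatch
    suggests model weights or training data were altered after the fact.
    """
    manifest = json.loads(manifest_path.read_text())
    base = manifest_path.parent
    ok = True
    for name, expected in manifest.items():
        if sha256_of(base / name) != expected:
            print(f"TAMPER WARNING: {name} hash mismatch")
            ok = False
    return ok

# Example: refuse to deploy if any artifact fails verification.
# if not verify_artifacts(Path("model/manifest.json")):
#     raise SystemExit("Deployment blocked: integrity check failed")
```

Hash checking alone does not stop data poisoning at training time, but it gives defenders a tamper-evident baseline for everything downstream of training.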

This heightened sense of vulnerability is compounded by significant gaps in defensive capabilities. More than half of the surveyed leaders admitted their current data protection measures are insufficient for the AI era. Critical areas such as vulnerability analysis, incident detection and response, and identity management were also flagged as inadequately prepared, with between 60% and 70% of respondents expressing doubt. These shortcomings highlight the limitations of conventional security tools, which often rely on static rules and signatures. Such approaches fail both against AI systems that can analyze vast datasets to find weaknesses and against malware that continuously mutates to evade detection.
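The limitation of exact signatures is easy to demonstrate: a detector that matches on a file hash is defeated by changing a single byte, which is precisely what polymorphic malware automates. A minimal sketch (the sample bytes are obviously synthetic):

```python
import hashlib

# A toy "signature database" of known-bad SHA-256 hashes.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_match(sample: bytes) -> bool:
    """Static detection: flag the sample only if its exact hash is known."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

original = b"malicious payload v1"
mutated  = b"malicious payload v2"  # one byte changed, same behavior

print(signature_match(original))  # True  -- the known variant is caught
print(signature_match(mutated))   # False -- a trivial mutation evades it
```

This is why the report's respondents point toward behavioral and AI-based analysis rather than signature databases alone.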

Progress in building effective AI-powered defenses is hampered by several formidable barriers. Organizations cite complex IT environments, a pronounced shortage of skilled professionals with expertise in both AI and cybersecurity, and constrained budgets as the primary obstacles. The typical enterprise infrastructure is a complicated mix of legacy and modern systems, which makes integrating new, intelligent security solutions difficult. Furthermore, budget limitations often force teams to continue relying on outdated tools despite their known inadequacies.

The path forward requires a strategic and multi-faceted approach. The report recommends consolidating security telemetry from endpoints, applications, and cloud environments to eliminate blind spots and enhance overall visibility. Establishing clear AI usage policies for staff is essential, as is securing the entire AI development lifecycle against manipulation. Training employees to recognize AI-enabled threats, such as voice and video impersonation attacks, is another critical priority. From a technological standpoint, unifying monitoring systems and adopting AI-based analytical tools can help defenders operate at the machine speed necessary to counter modern threats. As one industry leader noted, the balance of power in cybersecurity has shifted, necessitating intelligent, adaptive defenses that leverage AI to protect vital assets and data.
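As a rough illustration of what telemetry consolidation can look like in practice, the sketch below normalizes events from different sources into one common record so a single analytics layer can reason over all of them. The field names and source formats here are assumptions for illustration, not a reference schema from the report.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedEvent:
    """One common shape for events from endpoints, apps, and cloud logs."""
    timestamp: datetime
    source: str   # e.g. "endpoint", "cloud", "application"
    actor: str    # user or service identity
    action: str   # normalized verb, e.g. "login", "file_write"
    raw: dict     # original payload, retained for forensics

def from_endpoint(evt: dict) -> UnifiedEvent:
    # Hypothetical endpoint-agent format: {"ts": ..., "user": ..., "op": ...}
    return UnifiedEvent(
        timestamp=datetime.fromtimestamp(evt["ts"], tz=timezone.utc),
        source="endpoint",
        actor=evt["user"],
        action=evt["op"],
        raw=evt,
    )

def from_cloud(evt: dict) -> UnifiedEvent:
    # Hypothetical cloud audit-log format:
    # {"eventTime": ..., "principal": ..., "eventName": ...}
    return UnifiedEvent(
        timestamp=datetime.fromisoformat(evt["eventTime"]),
        source="cloud",
        actor=evt["principal"],
        action=evt["eventName"],
        raw=evt,
    )

# With every feed mapped into UnifiedEvent, one detection pipeline can
# correlate identity and activity across formerly siloed blind spots.
```

The design point is the one the report makes: once disparate feeds share a schema, visibility gaps between endpoints, applications, and cloud environments stop being separate problems.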

(Source: Help Net Security)
