
Agentic AI: A CISO’s Identity Crisis and Accountability

Originally published on: January 6, 2026
Summary

– Agentic AI presents a familiar security challenge for CISOs, where business pressure for rapid deployment clashes with the need for safety, mirroring past shifts like cloud and DevOps.
– The core security problem with AI agents is not AI governance but identity: they represent a new, complex class of identity that behaves with human-like intent but operates at machine scale and with machine persistence.
– Traditional identity and access management tools are inadequate because they assume human or predictable machine users, while AI agents are decentralized, dynamic, and cross-platform by default.
– To secure AI agents, CISOs must apply lifecycle management principles, ensuring clear ownership, purpose-aligned access, continuous visibility, and automatic deprovisioning, treating it as a data correlation problem across systems.
– Without proper identity governance, AI adoption risks breaches and compliance failures, but with lifecycle management and visibility, it can scale safely and sustainably.

For Chief Information Security Officers, the rise of agentic AI presents a familiar yet uniquely challenging dilemma. Business units are pushing for rapid deployment of these autonomous systems, placing security teams in the familiar position of having to enable innovation while managing unprecedented risk. The central challenge isn’t merely about AI governance; it’s fundamentally an identity crisis. Just as with cloud and SaaS adoption, identity sits at the heart of both the vulnerability and the necessary solution, and CISOs will be held accountable for the outcomes.

Security frameworks have long been built around human identities. Processes for onboarding, role definition, access review, and offboarding were designed for people. The introduction of machine identities (servers, applications, and service accounts) complicated this model, but core assumptions about centralized control and predictable behavior largely held. AI agents shatter those assumptions entirely. They represent a new class of identity, combining human-like intent and decision-making with machine-scale operation and persistence. These agents are decentralized by default, easy to create, and can act autonomously across multiple systems.

This creates a perfect storm from an identity perspective. Identity remains the most common root cause of security breaches, involving abused credentials, accumulated privileges, and unclear ownership. Agentic AI multiplies these risks. Agents are often provisioned with overly broad access to function quickly, rarely reviewed, and even more rarely decommissioned. They can persist long after projects end, becoming always-on, overprivileged targets for attackers, a risk profile highlighted in frameworks like the OWASP Top 10 for LLM Applications.

Traditional Identity and Access Management (IAM) and Privileged Access Management (PAM) tools are ill-equipped for this reality. They assume users are people or predictable, static workloads. AI agents do not reside in a single directory, adhere to static roles, or operate within a single platform’s boundaries. Attempting to secure them with legacy, human-centric controls creates dangerous blind spots. Relying solely on AI platform vendors for security is equally risky; just as cloud providers did not solve cloud security, AI platform providers will not solve enterprise identity risk.

The path forward requires applying a discipline CISOs already understand: lifecycle management. Scalable workforce identity security became possible only when organizations treated identity as a continuous lifecycle from provisioning to decommissioning. AI agents demand the same rigorous approach, adapted for their speed and scale. Every agent requires clear ownership tied to an identity provider, an explicit and measurable purpose, and access rights aligned with its actual functions, not what was convenient during creation. Activity must be continuously monitored to detect privilege drift, and access must be automatically revoked when agents become idle or projects conclude.
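As an illustration, the sketch below models those lifecycle requirements in plain Python: an agent record that carries a named owner, an explicit purpose, and least-privilege scopes, plus a routine that automatically revokes access once an agent has been idle past a defined window. The class and function names, fields, and the 30-day idle threshold are hypothetical, not part of any specific product or standard.

```python
# Hypothetical sketch of an AI-agent identity lifecycle record.
# Names, fields, and thresholds are illustrative, not a real API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str          # human owner registered in the identity provider
    purpose: str        # explicit, reviewable statement of intent
    scopes: set[str] = field(default_factory=set)  # least-privilege grants
    last_active: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    active: bool = True

    def record_activity(self) -> None:
        """Update the activity timestamp whenever the agent acts."""
        self.last_active = datetime.now(timezone.utc)

def deprovision_idle_agents(agents: list[AgentIdentity],
                            max_idle: timedelta = timedelta(days=30)) -> list[str]:
    """Revoke access for agents idle longer than the allowed window."""
    now = datetime.now(timezone.utc)
    revoked = []
    for agent in agents:
        if agent.active and now - agent.last_active > max_idle:
            agent.scopes.clear()   # remove all access rights
            agent.active = False   # mark the identity as decommissioned
            revoked.append(agent.agent_id)
    return revoked
```

In practice the same checks would run against the organization's actual identity provider and agent platform rather than an in-memory list, but the lifecycle logic (ownership, purpose, scoped access, and automatic expiry) stays the same.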

A critical shift for security leaders is recognizing that agent identity security is a data correlation challenge. You cannot assess an agent’s risk by examining the agent in isolation. The true risk is defined by what the agent can reach: the cloud roles it can assume, the SaaS applications it accesses, the data it can read or modify, and the downstream identities it employs. Effective security requires correlating identity signals across agent platforms, identity providers, infrastructure, applications, and data layers. This correlation is essential for answering critical questions during audits, board reviews, and incident response: Who had access? Why did they have it? Was it appropriate? Should it still exist?
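A minimal sketch of that correlation, again with hypothetical record shapes, is to join per-system exports on a shared agent identifier and build a single view of what each agent can reach:

```python
# Hypothetical correlation of identity signals across systems.
# Each input list stands in for an export from one layer (agent platform,
# cloud IAM, SaaS audit logs, data-access controls); field names are illustrative.
from collections import defaultdict

def effective_reach(agent_platform, cloud_roles, saas_grants, data_acls):
    """Merge per-system records into one view of what each agent can reach."""
    reach = defaultdict(lambda: {"owner": None, "cloud_roles": set(),
                                 "saas_apps": set(), "datasets": set()})
    for rec in agent_platform:            # who owns the agent
        reach[rec["agent_id"]]["owner"] = rec["owner"]
    for rec in cloud_roles:               # cloud roles it can assume
        reach[rec["agent_id"]]["cloud_roles"].add(rec["role"])
    for rec in saas_grants:               # SaaS applications it accesses
        reach[rec["agent_id"]]["saas_apps"].add(rec["app"])
    for rec in data_acls:                 # data it can read or modify
        reach[rec["agent_id"]]["datasets"].add(rec["dataset"])
    return dict(reach)
```

With that joined view, the audit questions above become lookups against one dataset rather than separate investigations in each system.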

Many organizations are currently in a reactive phase, discovering agent sprawl only after deployment. The next imperative stage is prevention. Identity discipline must be integrated earlier in the development lifecycle, at the moment of agent creation. Developers and builders need guardrails that enforce clarity around intent and scope, so that teams do not default to broad privileges simply to make a prototype work. Without this embedded discipline, CISOs inevitably inherit the risk and the eventual consequences.
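One way to embed that discipline, sketched here with a hypothetical manifest schema and policy rules, is a check that runs at agent creation time (for example, in a CI pipeline) and rejects agents without an owner, a purpose, an expiry, or appropriately narrow scopes:

```python
# Hypothetical creation-time guardrail; the manifest fields and policy rules
# are illustrative, not a standard schema.
def validate_agent_manifest(manifest: dict) -> list[str]:
    """Return policy violations; an empty list means the agent may be created."""
    violations = []
    if not manifest.get("owner"):
        violations.append("agent must name a human owner")
    if not manifest.get("purpose"):
        violations.append("agent must declare an explicit purpose")
    if any("*" in scope for scope in manifest.get("scopes", [])):
        violations.append("wildcard scopes are not allowed; request specific permissions")
    if "expires" not in manifest:
        violations.append("agent access must carry an expiry date")
    return violations

# A prototype that asks for broad access is caught before deployment:
print(validate_agent_manifest({"owner": "jdoe", "scopes": ["crm:*"]}))
# ['agent must declare an explicit purpose',
#  'wildcard scopes are not allowed; request specific permissions',
#  'agent access must carry an expiry date']
```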

Agentic AI is becoming a permanent fixture in enterprise operations. The pivotal question is not if it will scale, but whether it will scale safely, a determination that rests squarely with security leaders. If agent identities remain unmanaged, AI will lead to breaches, compliance failures, and executive backlash that stifles innovation. Conversely, if agent identities are governed through proactive lifecycle management and comprehensive visibility, AI can become a sustainable, agile, and secure asset. The organizations that will succeed are not those that simply approve or block agentic AI. They are the ones that can adopt it with confidence, having recognized from the outset that securing this powerful technology is fundamentally an identity problem.

(Source: Bleeping Computer)

Topics

Agentic AI, identity security, CISO role, lifecycle management, access control, privilege management, machine identities, security risk, security governance, innovation tension