
AI Intent Isn’t a Security Strategy

Summary

– 65% of agentic chatbots are unused but retain live access credentials, creating orphaned access risks similar to unmanaged service accounts.
– 51% of external agent actions rely on hard-coded credentials, often due to deployment speed and a lack of identity governance during setup.
– A single malicious prompt can cascade through multi-agent pipelines without triggering SOC alerts, as authorization failures occur in context handoffs between agents.
– 81% of cloud-deployed agents use self-managed open-source frameworks, driven by greater flexibility, control, and a head start over managed offerings.
– Securing agents requires modeling their intent as enforceable access-and-behavior policies that restrict actions, even when users reprompt them outside their original configuration.

A significant portion of AI agents deployed today represent a hidden and growing security liability. Recent research reveals that 65% of agentic chatbots have never been used, yet retain live access credentials to critical systems. This pattern mirrors the historical problem of orphaned service accounts, but with a crucial difference: the risk is often concealed behind a conversational interface, making it far less visible to security teams. The rapid, experimental deployment of these systems by business units, outside traditional governance, creates a dangerous accumulation of standing privileges with unclear ownership.
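An audit for this pattern can be straightforward. The sketch below, a minimal illustration with hypothetical record fields (`last_invoked`, `has_live_credentials`), flags agents that hold live credentials but have never been invoked or have sat idle past a threshold:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AgentRecord:
    name: str
    last_invoked: Optional[datetime]  # None means never used
    has_live_credentials: bool

def find_orphaned_agents(agents, max_idle_days=30, now=None):
    """Flag agents holding live credentials that were never invoked,
    or have been idle longer than the allowed threshold."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_idle_days)
    return [
        a.name for a in agents
        if a.has_live_credentials
        and (a.last_invoked is None or a.last_invoked < cutoff)
    ]
```

The key design point is that the query keys on credential state, not deployment state: an agent that was never used but still authenticates is exactly the orphaned-service-account pattern the research describes.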

The data indicates a troubling regression in security practices. Over half (51%) of external agent actions still rely on hard-coded credentials instead of modern, delegated authentication such as OAuth. This repeats a mistake the industry worked to correct a decade ago in traditional software development. The root cause is often convenience and fragmented ownership: teams under pressure to demonstrate functionality quickly default to the simplest path, using static secrets to connect agents to tools and data sources. Breaking this pattern requires making secure identity management the default, easy option during agent configuration, and treating each setup as a formal governance event that mandates scrutiny of the identity being used and the scope of its permissions.
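One way to make the secure path the default is to reject static secrets at configuration time. The validator below is a hedged sketch, not a real tool's API: the field names (`api_key`, `credential_ref`) and the convention of referencing a managed identity instead of embedding a secret are illustrative assumptions.

```python
# Hypothetical config check: refuse agent tool configs that embed
# static secrets inline, and require a reference to a managed
# identity or short-lived delegated credential instead.
SECRET_FIELDS = {"api_key", "apikey", "password", "secret", "token"}

def validate_tool_config(config: dict) -> list:
    """Return a list of violations; an empty list means the config passes."""
    violations = [
        f"inline secret in field '{key}'"
        for key, value in config.items()
        if key.lower() in SECRET_FIELDS and isinstance(value, str)
    ]
    if "credential_ref" not in config:
        violations.append("missing 'credential_ref' pointing at a managed identity")
    return violations
```

Run as a gate in the agent-setup workflow, a check like this turns the governance event described above into something enforced by tooling rather than by policy documents alone.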

Perhaps the most insidious risk emerges in production pipelines where agents process untrusted external input. A single prompt injection attack can cascade through a multi-agent workflow without triggering conventional security alerts. Consider a customer support pipeline where an intake agent parses a ticket and delegates tasks to downstream agents with operational permissions. If malicious instructions are embedded in the initial request, they can be passed as natural-language context between agents. Each component performs its technically allowed function in isolation, but the collective action, such as resetting a password for an unauthorized account, violates business intent. SOC tooling is often blind to this threat because it sees only a series of valid, logged events, missing the malicious reasoning chain and the context of the handoffs between agents.
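The mitigation is to stop trusting natural-language context at the handoff boundary. A minimal sketch of such a guard, under the assumption that the pipeline carries an authenticated ticket owner alongside the free-text context (the function name and allow-list here are illustrative):

```python
# Hypothetical handoff guard for the support-pipeline scenario:
# a downstream agent verifies each delegated action against the
# authenticated ticket owner, not against the ticket's text.
ALLOWED_ACTIONS = {"reset_password", "update_email"}

def authorize_handoff(action: str, target_account: str, ticket_owner: str) -> bool:
    """Approve only allow-listed actions that target the account of the
    authenticated ticket owner. Instructions smuggled into the ticket
    body cannot widen this scope, however persuasive the phrasing."""
    return action in ALLOWED_ACTIONS and target_account == ticket_owner
```

An injected instruction asking to reset a different user's password fails the ownership check even though the action itself is on the allow-list, which is exactly the gap the article describes: each step is individually permitted, so only a check tied to the authenticated context catches the combination.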

The infrastructure choices for these systems further complicate security. A striking 81% of cloud-deployed agents run on self-managed, open-source frameworks rather than managed cloud offerings. This preference is driven by greater flexibility, community maturity, and the desire for deep control over orchestration and tooling. While managed services will improve, the open-source ecosystem is likely to remain dominant for sophisticated deployments. Security strategies must therefore account for a heterogeneous, self-managed environment rather than hoping a single platform will provide all necessary controls.

Central to mitigating these risks is the concept of operationalizing agent intent. This cannot remain a vague statement of purpose. It must be translated into enforceable policy that defines precise boundaries: which systems an agent can access, what actions it may perform, and under what conditions. This policy requires runtime enforcement that survives user reprompts. If a new instruction asks the agent to act outside its defined intent, the system must block the action or require human approval. Intent verification must be a continuous process, evaluating each requested action against context and permissions. Without this, a clever reprompt can easily bypass the original configuration, turning security controls into mere suggestions.
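Translated into code, intent-as-policy might look like the sketch below. The structure is an assumption for illustration (the class name, the three-way verdict, and the approval set are not from the source), but it shows the core property: the verdict depends only on the policy and the requested action, so a reprompt cannot change the outcome.

```python
from dataclasses import dataclass, field

@dataclass
class IntentPolicy:
    """Hypothetical enforceable intent: which systems the agent may
    touch, which actions it may take, and which actions need a human."""
    allowed_systems: set
    allowed_actions: set
    require_approval: set = field(default_factory=set)

def evaluate(policy: IntentPolicy, system: str, action: str) -> str:
    """Runtime check applied to every requested action, regardless of
    how the agent was prompted or reprompted."""
    if system not in policy.allowed_systems or action not in policy.allowed_actions:
        return "deny"
    if action in policy.require_approval:
        return "needs_approval"
    return "allow"
```

Because evaluation happens per action at runtime rather than once at configuration, a clever reprompt that steers the agent outside its defined intent still resolves to a deny or an approval gate instead of silently executing.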

(Source: Help Net Security)
