Stop AI Agent Threats: Why Okta’s New Security Standard is Essential

Summary
– IT managers currently lack visibility and control when employees grant external applications, including AI agents, access to company data using OAuth tokens.
– Okta has proposed an open standard called Identity Assertion Authorization Grant (IAAG) to give organizations, rather than individual end users alone, the final say over such delegated access.
– This standard integrates the organization’s Identity and Access Management (IAM) system into the OAuth workflow, allowing administrators to set and enforce access policies centrally.
– Centralized control is critical as AI agent use is expected to explode, preventing security risks from autonomous agents granting themselves unchecked permissions.
– Early adopters of the IAAG draft include Google, Amazon, Salesforce, Box, and Zoom, with Microsoft pledging support; the standard aims to give IT managers the visibility to easily view and revoke access tokens across all company resources.
The rapid rise of AI agents presents a profound security challenge for modern organizations. As employees increasingly deploy autonomous software to handle tasks, these agents require access to sensitive corporate data, often granted through existing but inadequate permission systems. This creates significant blind spots for IT managers, who lack visibility into when and how these powerful tools are connected to company resources. A new open standard, proposed by Okta and under development with the Internet Engineering Task Force, aims to close this dangerous gap by ensuring organizational identity systems have the final say over such access.
Currently, when a user connects an external application, like Slack or a new AI agent, to a work account, they typically grant permission using an OAuth token. This process, known as delegated access, often bypasses the company’s central identity and access management (IAM) system entirely. The user clicks “allow,” and the external app receives a credential that lets it act on the user’s behalf. While OAuth was a major improvement over sharing passwords, it places critical security decisions solely in the hands of individual employees. In an organizational context, the data and systems truly belong to the company, not the individual user, making this model inherently risky.
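To make the gap concrete, here is a minimal sketch of the consent step in a conventional OAuth authorization-code flow. The endpoint, client ID, and scopes are hypothetical, but the structural point is real: every parameter concerns the user, the app, and the resource, and nothing in the request involves the employer's IAM system.

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint for a SaaS resource app.
AUTHORIZE_URL = "https://resource-app.example.com/oauth/authorize"

def build_consent_url(client_id: str, scopes: list[str], redirect_uri: str) -> str:
    """Build the consent URL an external app (or AI agent) sends the user to.

    Note what is absent: the employer's IAM system appears nowhere.
    The only party asked to approve the grant is the individual employee.
    """
    params = {
        "response_type": "code",          # standard authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),        # e.g. read access to files and chat
        "state": "opaque-csrf-token",     # CSRF protection, per RFC 6749
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

url = build_consent_url(
    client_id="ai-agent-123",
    scopes=["files.read", "chat.write"],
    redirect_uri="https://agent.example.com/callback",
)
print(url)
```

Once the user clicks "allow" on the resulting page, the app exchanges the returned code for a token, and the organization never sees the transaction.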
Okta’s proposed specification, known within standards bodies as the Identity Assertion Authorization Grant (IAAG), seeks to fundamentally reshape this workflow. The core idea is straightforward: the ultimate authority for granting access to corporate resources should rest with the organization’s IAM system, not the end user. In this new model, a user might still initiate a connection request, but the final approval and issuance of the OAuth token would be controlled by pre-configured IT policies. This ensures that access aligns with organizational security rules and provides a central point of visibility and control.
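The shape of that reworked flow can be sketched as follows. This is a simplified, hypothetical illustration patterned on OAuth assertion grants (RFC 7521/7523), where the client presents an assertion issued by the organization's IAM system instead of relying on a one-off user click; the exact parameter names and token formats are defined by the evolving IAAG draft, not by this sketch.

```python
def build_iaag_style_token_request(identity_assertion_jwt: str, scopes: list[str]) -> dict:
    """Form body a client would send to the resource app's authorization server.

    The key difference from the classic flow: the request carries an
    assertion minted by the organization's IAM system. The authorization
    server validates that assertion, and the IT policy behind it, before
    issuing any access token, so the org holds the final approval.
    """
    return {
        # Assertion-based grant type, per RFC 7523.
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        # Signed JWT from the org's IAM asserting the user's identity
        # and that policy permits this client to receive this access.
        "assertion": identity_assertion_jwt,
        "scope": " ".join(scopes),
    }

# Placeholder JWT string for illustration; a real one is signed by the IAM.
req = build_iaag_style_token_request("eyJhbGciOi.example.payload", ["files.read"])
print(req["grant_type"])
```

The practical effect is that the authorization server can refuse the token outright if the IAM's policy does not cover the requesting client, regardless of what the individual user clicked.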
The timing for this standard is critical. The coming years will see an explosion in the number of AI agents operating within corporate environments. These agents could autonomously seek data access across dozens of applications, creating a sprawling, unmanageable web of permissions. Without a mechanism for centralized oversight, a single compromised or poorly designed agent could exfiltrate vast amounts of data before IT teams even realize it’s connected. The IAAG standard is designed to tame this potential chaos by bringing agent access under the same governance umbrella as human user access.
Technically, the standard enhances the traditional OAuth flow by embedding crucial identity information from the organization’s IAM system directly into the token request and issuance process. This includes the user’s official organizational identity and a record of the approving IAM system itself. These additions enable powerful new capabilities for IT administrators. For instance, if an employee with numerous active AI agents leaves the company, an IT manager could query the IAM system to see all tokens issued for that user across every connected service and revoke them instantly. Similarly, if a specific AI tool is found to be leaking data, administrators could deprovision it across the entire organization with a single action.
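The offboarding scenario above can be illustrated with a toy model. The registry class, field names, and data below are hypothetical; they simply show the kind of centralized view an IAM system could maintain once every token issuance is routed through it, as the standard proposes.

```python
from dataclasses import dataclass

@dataclass
class IssuedToken:
    token_id: str
    subject: str        # the user's official organizational identity
    issuing_iam: str    # which IAM system approved the grant
    service: str        # the SaaS app the token was issued for
    revoked: bool = False

class TokenRegistry:
    """Hypothetical central record of delegated-access tokens."""

    def __init__(self) -> None:
        self._tokens: list[IssuedToken] = []

    def record(self, token: IssuedToken) -> None:
        self._tokens.append(token)

    def revoke_for_user(self, subject: str) -> int:
        """Revoke every active token for a departing employee; return the count."""
        count = 0
        for t in self._tokens:
            if t.subject == subject and not t.revoked:
                t.revoked = True
                count += 1
        return count

registry = TokenRegistry()
registry.record(IssuedToken("t1", "alice@corp.example", "okta", "slack"))
registry.record(IssuedToken("t2", "alice@corp.example", "okta", "zoom"))
registry.record(IssuedToken("t3", "bob@corp.example", "okta", "box"))
print(registry.revoke_for_user("alice@corp.example"))  # prints 2
```

Because each token carries the approving IAM and the organizational subject, the same query-and-revoke pattern works per user, per service, or per agent across the whole estate.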
Major technology firms including Google, Amazon, Salesforce, Box, and Zoom are already listed as early adopters of the IAAG draft. Microsoft has also publicly stated its intention to support the standard in its Entra identity platform. This broad industry backing is essential for any open standard to gain traction and become effective. The goal is to create a universal framework that any SaaS provider can implement, finally giving organizations the tools they need to secure the coming wave of autonomous software.
The shift promises to rectify a long-standing security paradox. Today, a company can meticulously control which employees can access a financial system, yet have no oversight when that same employee grants an external AI agent full access to that system’s data. By inserting the organizational IAM into the permission chain, the new standard ensures that access policies are consistently enforced, whether the entity requesting data is a human or an automated agent. This layer of control is not about hindering productivity but about enabling the secure adoption of powerful new tools.
As the draft moves through the final stages of the IETF approval process, the focus will shift to widespread implementation across authorization servers. The transition will take time, but the foundation is being laid for a more secure operational landscape. For IT and security leaders, the message is clear: the existing model for application-to-application access is insufficient for an AI-driven future. Proactive engagement with this emerging standard will be key to maintaining visibility, control, and security as autonomous agents become embedded in everyday business processes.
(Source: ZDNET)