Secure Your AI Agents Like Human Users

Summary
– AI agents should be managed like human employees with controlled access and monitoring to ensure security.
– Unapproved “shadow AI” tools pose security risks by potentially leaking sensitive information without organizational oversight.
– Emerging markets in Asia are rapidly adopting AI technology, bypassing the hesitation previously seen with cloud migration.
– Many organizations lack adequate security strategies for AI, with only 10% in Australia having corresponding security plans.
– Okta promotes centralized identity security management and industry collaboration to address AI security challenges.
During a recent industry gathering in Las Vegas, conversations with Okta’s leadership highlighted a critical shift in enterprise security strategy. The rapid integration of AI agents into business operations demands a fundamental rethinking of access management, treating these digital workers with the same security rigor as human employees. This paradigm shift comes as organizations, particularly in the Asia-Pacific region, embrace artificial intelligence at unprecedented rates, often outpacing their adoption of cloud technologies in previous years.
The security implications of unmanaged AI systems present significant organizational risks. When AI agents operate without proper oversight, they can inadvertently expose sensitive data, disclose confidential discount details, or leak proprietary information to unauthorized parties. The challenge extends beyond approved AI tools to include “shadow AI” applications that employees implement without organizational knowledge, creating potential security vulnerabilities across enterprise systems.
Stephanie Barnett, Okta’s Vice President of Presales for APJ and Interim General Manager for the region, emphasized that AI agents require the same identity management protocols as human staff. This includes providing digital “ID badges” to control system access, maintaining comprehensive audit trails of their activities, and establishing clear boundaries for what information they can and cannot access. Without treating AI systems with the same security consideration as human users, organizations leave critical gaps in their security posture.
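In practice, the controls Barnett describes map onto familiar identity patterns: register the agent as a principal, issue it a scoped credential, and log every access decision. The sketch below is purely illustrative (it is not Okta's API; the class and scope names are invented for the example) of treating an AI agent like a human user with an ID badge, access boundaries, and an audit trail.

```python
import time
import uuid

# Illustrative sketch only (not a vendor API): an AI agent is registered
# as a first-class identity with a scoped "ID badge", and every access
# attempt is checked against its scopes and written to an audit trail.

class AgentIdentity:
    def __init__(self, name, scopes):
        self.agent_id = str(uuid.uuid4())   # the agent's "ID badge"
        self.name = name
        self.scopes = set(scopes)           # explicit access boundaries
        self.audit_log = []                 # comprehensive activity trail

    def access(self, resource, scope):
        """Allow the action only if the scope was granted; log either way."""
        allowed = scope in self.scopes
        self.audit_log.append({
            "agent": self.agent_id,
            "resource": resource,
            "scope": scope,
            "allowed": allowed,
            "ts": time.time(),
        })
        return allowed

# Usage: a support bot may read tickets but not pricing data.
bot = AgentIdentity("support-bot", scopes={"tickets:read"})
bot.access("ticket-1042", "tickets:read")    # permitted and logged
bot.access("price-list", "pricing:read")     # denied and logged
```

The point of the sketch is that a denied request is still recorded: the audit trail, not just the access check, is what closes the gap Barnett describes.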
A particularly concerning trend involves employees granting permissions between applications without organizational visibility. When workers connect services like Box and Grammarly without oversight, they create “shadow AI” risks similar to the shadow IT challenges organizations faced with unauthorized cloud applications. This permission sprawl becomes especially problematic when constant consent pop-ups lead to user fatigue, causing employees to approve access requests without proper consideration.
The solution lies in implementing centralized control over AI consent mechanisms. By establishing micro-level security controls that operate seamlessly in the background, organizations can reduce user friction while maintaining robust security standards. This approach acknowledges that while employees form the frontline of defense, organizations bear the ultimate responsibility for creating security frameworks that are both effective and user-friendly.
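One way to picture such a background control, sketched here under assumptions of our own (this is a hypothetical policy function, not any specific vendor's consent mechanism): app-to-app grants are decided centrally against an organization-reviewed allowlist, so employees never see the pop-up at all.

```python
# Hypothetical sketch: a central policy silently narrows app-to-app
# permission grants to what the organization has pre-approved,
# removing the consent pop-ups that cause user fatigue.

# Org-reviewed connections and the scopes each is allowed to use.
APPROVED_GRANTS = {
    ("Box", "Grammarly"): {"documents:read"},
}

def decide_grant(source_app, target_app, requested_scopes):
    """Return the subset of requested scopes policy allows; empty set = deny."""
    allowed = APPROVED_GRANTS.get((source_app, target_app), set())
    return set(requested_scopes) & allowed

# A write scope is quietly stripped; an unreviewed app pair gets nothing.
decide_grant("Box", "Grammarly", {"documents:read", "documents:write"})
decide_grant("Slack", "Grammarly", {"documents:read"})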
Market adoption patterns reveal fascinating regional differences. Emerging markets in Asia demonstrate an “explosion of excitement” around AI implementation, showing none of the hesitation that previously accompanied cloud migration. These markets appear to be leapfrogging intermediate technology paradigms to embrace AI directly. Meanwhile, in mature markets like Australia, while over 90% of organizations report AI adoption, only about 10% have developed corresponding security strategies, creating significant protection gaps.
The conversation around identity security has evolved to become a board-level discussion in many Australian organizations. Companies increasingly recognize their responsibility to secure both customer and employee identities, elevating security from a technical concern to a strategic business priority. This aligns with the perspective that AI security fundamentally represents identity security, leveraging established identity management expertise to address new technological challenges.
Customers express enthusiasm for building secure AI agents using simplified, low-code approaches. Financial institutions from Indonesia and other regions appreciate the ability to construct properly secured AI systems from inception using intuitive “clicks, not code” methodologies. The platform provides comprehensive visibility into AI agent activities while enabling direct remediation of identified issues through identity security posture management capabilities.
Addressing security complexity requires a standards-based approach that helps overwhelmed security leaders navigate complicated compliance landscapes. By developing industry standards and protocols, the burden of constant security evaluation shifts from individual organizations to established frameworks that incorporate best practices. This collaborative mindset extends beyond individual companies, recognizing that comprehensive security requires industry-wide cooperation among competitors and partners to drive better outcomes for all customers.
(Source: ITWire Australia)




