
AI Agents Outpacing Safety, Deloitte Warns

Originally published on: January 22, 2026
Summary

– Business adoption of AI agents is growing rapidly, with current usage at 23% of companies expected to jump to 74% within two years.
– This rapid adoption is outpacing the implementation of safety protocols, with only about 21% of companies having robust oversight mechanisms in place.
– AI agents, which can perform multistep tasks autonomously, introduce greater risks than simpler chatbots, including unexpected behaviors and vulnerability to attacks.
– Multiple studies confirm this safety gap, showing widespread AI agent use without corresponding policies, training, or organizational awareness of the risks.
– Deloitte recommends establishing clear governance, including boundaries for agent autonomy, real-time monitoring systems, and audit trails to manage risk.

A significant gap is emerging between the rapid adoption of AI agents in business and the implementation of essential safety protocols, a new industry report warns. Deloitte’s latest State of AI in the Enterprise study reveals that while nearly a quarter of companies are already using these autonomous tools, only a small fraction have established robust governance to manage the associated risks. This disparity highlights a critical vulnerability as organizations increasingly rely on artificial intelligence to handle complex, multi-step tasks with minimal human oversight.

The research, which surveyed more than 3,200 business leaders globally, projects a dramatic surge in adoption. The portion of companies using AI agents at least moderately is expected to leap from 23% today to 74% within the next two years. Meanwhile, the group not using them at all will likely shrink from 25% to a mere 5%. This explosive growth trajectory underscores the urgency of developing parallel safety frameworks. Currently, only about 21% of respondents confirmed their organizations have strong oversight mechanisms to prevent potential harms caused by these agents.

The core risk stems from the autonomous nature of AI agents. Unlike simpler chatbots that respond to direct prompts, these advanced systems are designed to take actions through various digital tools, such as signing documents or initiating purchases, on behalf of an organization. This greater capability introduces a wider scope for unexpected behavior, including errors with serious consequences and susceptibility to security threats like prompt injection attacks. Major technology firms market these agents as productivity boosters, promising to free human employees from repetitive tasks. However, this very autonomy demands a new level of operational vigilance.

This concern is not isolated. Other studies echo the finding that safety measures are struggling to keep pace with deployment. One report from May indicated that while 84% of IT professionals said their employers used AI agents, only 44% reported having policies to regulate their activity. Further research has shown that many employees use AI tools daily without any formal safety training from their employers, leaving them unaware of critical privacy risks. Another poll found that nearly a quarter of workers were uncertain if their company was using AI at an organizational level, pointing to a potential transparency issue.

The current situation presents a complex challenge for business leaders. Technology inherently evolves faster than our full understanding of its pitfalls, and policy naturally lags behind practical application. The pressure to adopt cutting-edge AI tools, driven by immense cultural hype and competitive economic forces, is arguably unprecedented. It is understandable that bulletproof guardrails are not yet universal. However, the data signals a potentially dangerous divide that could widen as usage scales.

The immediate imperative is for organizations to prioritize oversight and establish clear governance structures. Businesses must define explicit boundaries for agent autonomy, specifying which decisions can be made independently and which require human review. Implementing real-time monitoring systems to track agent behavior and flag anomalies is becoming essential. Furthermore, maintaining detailed audit trails of all agent actions ensures accountability and provides a foundation for continuous improvement. Proactively managing these risks is crucial for capturing the value of AI agents while safeguarding organizational integrity.
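To make the recommendations above concrete, here is a minimal illustrative sketch in Python of two of the controls the report describes: an autonomy boundary that escalates sensitive actions for human review, and an audit trail recording every decision. The action names and policy set are hypothetical, not taken from the Deloitte report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy: which agent actions require human sign-off.
# The action names here are illustrative, not from the report.
REQUIRES_HUMAN_REVIEW = {"sign_document", "initiate_purchase"}

@dataclass
class AgentGovernor:
    """Gates agent actions against an autonomy boundary and logs each decision."""
    audit_trail: list = field(default_factory=list)

    def request_action(self, action: str, details: str) -> str:
        # Escalate anything inside the human-review boundary; allow the rest.
        decision = "escalated" if action in REQUIRES_HUMAN_REVIEW else "allowed"
        # Append an audit record so every decision is traceable later.
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "details": details,
            "decision": decision,
        })
        return decision

governor = AgentGovernor()
print(governor.request_action("summarize_report", "Q3 sales"))    # allowed
print(governor.request_action("initiate_purchase", "$5,000 PO"))  # escalated
print(len(governor.audit_trail))                                  # 2
```

Real deployments would layer real-time anomaly monitoring on top of a log like this; the sketch only shows the boundary-plus-trail pattern itself.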

(Source: ZDNET)
