Build Digital Resilience for the Age of Agentic AI

Summary
– Global AI investment is projected to reach $1.5 trillion in 2025, yet fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events.
– Agentic AI requires deep insight into machine data—logs, metrics, and telemetry from devices and systems—to understand context, simulate outcomes, and adapt continuously for digital resilience.
– Organizations are adopting data fabric architectures to connect and govern information across business layers, enabling real-time access to data for sensing risks, preventing problems, and sustaining operations.
– Inadequate machine data integration limits agentic AI capabilities, risks data anomalies and errors, and narrows the scope of possible use cases, similar to issues seen with earlier NLP models.
– Technology leaders should pivot to a data fabric design that weaves together fragmented assets from security, IT, and operations to support agentic AI with real-time analysis and risk management.
Building digital resilience has become a critical priority for organizations navigating the complexities of agentic AI, especially as global AI investment is set to hit $1.5 trillion in 2025. Despite this massive spending, fewer than half of business leaders feel certain their companies can maintain service continuity, security, and cost control during unexpected disruptions. Agentic AI's autonomous decision-making and deep integration into vital systems introduce profound new challenges, demanding a fundamental rethinking of how organizations prepare for and respond to risk.
Many are now adopting the concept of a data fabric, an integrated architecture that links and governs information across every layer of the business. By breaking down data silos and providing real-time access to enterprise-wide information, a data fabric empowers both human teams and agentic AI systems to detect risks early, prevent problems before they escalate, recover quickly from incidents, and sustain ongoing operations.
Machine data forms the essential foundation for both agentic AI and digital resilience. Earlier AI models depended heavily on human-generated content like text, audio, and video. In contrast, agentic AI requires deep insight into machine data, the logs, metrics, and telemetry produced by devices, servers, applications, and systems. For agentic AI to effectively drive resilience, it must have seamless, real-time access to this continuous data flow. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, overlooking critical anomalies, or introducing costly errors.
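The article's point about machine data can be made concrete: to drive resilience, an agent needs to spot anomalies in continuous streams of metrics. A minimal sketch (the function name, window size, and the sample CPU series below are illustrative, not from the source) is a rolling z-score check over a telemetry stream:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag metric samples that deviate sharply from a rolling baseline.

    Returns indices of samples whose z-score against the trailing
    window of earlier values exceeds the threshold.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Hypothetical steady CPU readings with one spike at index 30
cpu = [40 + (i % 3) for i in range(30)] + [95, 41, 42, 40]
print(detect_anomalies(cpu))  # [30] -- only the spike is flagged
```

Production systems would use far more sophisticated models, but the principle is the same: without a comprehensive, real-time feed of this data, the baseline is incomplete and anomalies slip through.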
As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, points out, agentic AI systems rely on machine data to understand context, simulate outcomes, and adapt continuously, which makes machine data oversight a cornerstone of digital resilience. Hathi describes machine data as the heartbeat of the modern enterprise: agentic AI systems are powered by this vital pulse, and it is essential that these intelligent agents operate directly on the intricate flow of machine data and that the AI itself is trained on that very same data stream.
Currently, few organizations achieve the level of machine data integration needed to fully enable agentic systems. This not only narrows the scope of possible use cases for agentic AI but can also result in data anomalies and errors in outputs or actions. Natural language processing models designed before the development of generative pre-trained transformers often struggled with linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without ensuring models have a foundational fluency in machine data.
For many companies, keeping up with the rapid pace of AI innovation has been a major challenge. Hathi notes that in some ways the speed of this innovation is itself creating risks organizations are not ready for. As agentic AI evolves, relying on traditional large language models trained on human text, audio, video, or print data does not work when systems must be secure, resilient, and always available.
To address these shortcomings and build digital resilience, technology leaders should pivot toward what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management.
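One way to picture such a data fabric design is as a thin access layer in which each connector normalizes records from one silo (security, IT, operations, network) into a shared schema, so humans and agents can query everything uniformly. The class names, fields, and sample records below are a hypothetical sketch, not Splunk's or Cisco's actual architecture:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Record:
    source: str       # originating silo, e.g. "security" or "network"
    timestamp: float  # epoch seconds
    payload: dict     # fields normalized into a shared schema

class DataFabric:
    """Registry of per-silo connectors exposed behind one query API."""

    def __init__(self):
        self._connectors: dict[str, Callable[[], Iterable[Record]]] = {}

    def register(self, name, connector):
        self._connectors[name] = connector

    def query(self, predicate):
        """Scan every registered silo and yield matching records."""
        for connector in self._connectors.values():
            for record in connector():
                if predicate(record):
                    yield record

# Wire up two illustrative silos and run one cross-silo query
fabric = DataFabric()
fabric.register("security", lambda: [Record("security", 1.0, {"event": "login_fail"})])
fabric.register("network", lambda: [Record("network", 2.0, {"latency_ms": 950})])

slow = list(fabric.query(lambda r: r.payload.get("latency_ms", 0) > 500))
print([r.source for r in slow])  # ['network']
```

The design choice the article gestures at is exactly this inversion: instead of each team querying its own silo with its own tools, disparate sources register into one governed layer, so risk analysis can run across all of them in real time.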
(Source: Technology Review)