How AI Needs Strong Data Fabric to Deliver Value

▼ Summary
– AI without business context can produce fast but wrong decisions, undermining return on investment.
– A well-designed data fabric is essential to provide context, enabling safe AI scaling and coordinated decision-making.
– Organizations are shifting from centralized data storage to connecting data across systems while preserving business semantics.
– Traditional data aggregation strategies often strip away meaning related to policies, processes, and real-world decisions.
– Preserving context allows AI to make strategic decisions, such as identifying priority customers during supply-chain disruptions.

Without the right contextual foundation, artificial intelligence can generate answers at lightning speed yet still steer a business in the wrong direction. That’s the warning from Irfan Khan, president and chief product officer of SAP Data & Analytics.
“AI is incredibly good at producing results,” he explains. “It moves fast, but without context it can’t exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn’t help. It can actually hurt us.”
As autonomous systems and intelligent applications take center stage, this contextual layer is becoming non-negotiable. To deliver it, companies need a well-designed data fabric that goes far beyond simple integration. According to Khan, the right data fabric enables organizations to scale AI safely, coordinate decisions across multiple systems and agents, and ensure automation aligns with real business priorities instead of operating in a vacuum.
Recognizing this, many enterprises are rethinking their data architecture. Rather than funneling everything into a single repository, they are seeking ways to connect information across applications, clouds, and operational systems while preserving the semantics that define how the business actually works. This shift is fueling growing interest in data fabric as a foundation for AI infrastructure.
Losing context is a critical AI problem. Traditional data strategies have long fixated on aggregation. For two decades, organizations have poured resources into pulling data from operational systems and loading it into centralized warehouses, lakes, and dashboards. While this approach simplifies reporting and performance monitoring, much of the meaning attached to that data, such as how it relates to policies, processes, and real-world decisions, gets stripped away in the process.
Consider two companies using AI to manage supply-chain disruptions. One relies on raw signals like inventory levels, lead times, and supplier scores. The other enriches its AI with context across business processes, policies, and metadata. Both systems will analyze data quickly, but they are likely to reach very different conclusions.
Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains allows one AI system to make strategic decisions. The system lacking that context cannot, Khan notes.
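The difference between the two systems can be sketched in a few lines of code. In this illustrative example (all names, policies, and data structures are hypothetical, not an SAP API), a context-aware allocator consults business semantics that a data fabric would preserve, such as which accounts are strategic and what allocation policy applies during a shortage, while a context-free allocator sees only raw order signals:

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    qty: int

# Hypothetical context layer: business semantics a data fabric
# would carry alongside the raw operational data.
STRATEGIC_ACCOUNTS = {"acme"}                   # which customers are strategic
ALLOCATION_POLICY = {"strategic_first": True}   # acceptable tradeoff in a shortage

def allocate(orders, stock, use_context=True):
    """Allocate scarce stock to orders.

    With context, strategic accounts are served first; without it,
    allocation falls back to raw arrival order.
    """
    if use_context and ALLOCATION_POLICY["strategic_first"]:
        # Stable sort: strategic accounts move to the front,
        # everyone else keeps their original order.
        orders = sorted(orders, key=lambda o: o.customer not in STRATEGIC_ACCOUNTS)
    filled, remaining = {}, stock
    for o in orders:
        take = min(o.qty, remaining)
        filled[o.customer] = take
        remaining -= take
    return filled

orders = [Order("globex", 80), Order("acme", 50)]
print(allocate(orders, stock=100, use_context=False))  # {'globex': 80, 'acme': 20}
print(allocate(orders, stock=100, use_context=True))   # {'acme': 50, 'globex': 50}
```

Both allocators run equally fast; only the context-aware one protects the strategic account when stock runs short, which is the "context premium" Khan describes.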
“Both systems move very quickly, but only one moves in the right direction,” he says. “This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design.”
(Source: MIT Technology Review)