Global Trust in GenAI Rises Despite AI Safety Gaps

Summary
– Organizations prioritizing trustworthy AI are 60% more likely to double their AI project ROI, yet only 40% invest in AI governance and ethical safeguards.
– Survey respondents trust generative AI (48%) and agentic AI (33%) more than traditional AI (18%), despite traditional AI being more established and explainable.
– Major barriers to AI success include weak data infrastructure (49%), insufficient data governance (44%), and a shortage of AI skills (41%) within organizations.
– There is a contradiction between high trust in AI (78% of organizations) and low investment in making AI systems demonstrably trustworthy through governance and safeguards.
– Data management challenges for AI implementations include difficulty accessing relevant data sources (58%), data privacy/compliance issues (49%), and poor data quality (46%).
A new global study reveals a fascinating paradox in the artificial intelligence landscape: trust in generative AI is climbing significantly, even as investments in making these systems genuinely trustworthy lag behind. Organizations that actively build responsible AI frameworks are dramatically more likely to see their investments pay off, yet a minority are making the necessary commitments to governance and ethics. This gap between perception and practice highlights a critical juncture for businesses leveraging AI technologies.
The research, drawing on a survey of 2,375 IT and business leaders worldwide, found that generative AI and other emerging forms such as agentic AI inspire the highest levels of confidence. Close to half of all respondents expressed complete trust in generative AI, while less than one-fifth reported the same level of trust in traditional, more established machine learning systems. This is despite the fact that traditional AI is often more reliable and its decision-making processes are easier to explain.
This apparent contradiction is not lost on analysts. One research director pointed out that AI systems with human-like interactivity seem to foster greater trust, regardless of their actual accuracy or reliability. This raises a vital question for professionals: is this highly trusted technology always truly trustworthy? The urgency of the question grows as generative AI adoption rapidly outpaces that of traditional AI, introducing new layers of risk and ethical dilemmas.
The financial implications of this trust gap are substantial. The data shows that companies identified as "trustworthy AI leaders" (those investing heavily in governance, explainability, and ethical safeguards) are 1.6 times more likely to report doubling their return on investment from AI projects. This creates a clear business case for prioritizing responsible AI, yet implementation remains low: only a small share of organizations list developing an AI governance framework or a responsible AI policy among their top priorities.
Beneath these trust and investment issues lie foundational data challenges. The success of any AI initiative is inextricably linked to the quality and management of its underlying data. The study identified three primary obstacles to AI success: inadequate data infrastructure, insufficient data governance processes, and a shortage of AI-skilled personnel. For more than half of the organizations surveyed, simply accessing the right data sources is the biggest data management headache, followed closely by data privacy, compliance, and quality concerns.
A chief technology officer emphasized that, for the benefit of society and business alike, establishing real trust in AI is non-negotiable. Achieving this requires the industry to boost implementation success rates, keep humans in the loop to critically assess AI outputs, and ensure leadership properly equips the workforce with the tools and skills needed. As AI systems grow more autonomous and integral to core operations, building them on a solid, well-governed data foundation becomes not just a technical necessity but a commercial imperative.
(Source: ITWire Australia)


