
Struggling With AI ROI? The Human-Centric Solution

Summary

– Businesses are widely adopting AI tools despite having low trust in them, which is hindering ROI on AI initiatives.
– A SAS-IDC study found that 78% of respondents claim complete trust in AI, but only 40% have implemented governance and explainability measures.
– Three major barriers to trusting AI and achieving ROI are weak cloud infrastructure, insufficient governance, and a lack of AI-specific skills in the workforce.
– Humans tend to trust generative AI systems more than traditional machine learning models due to their humanlike language, even though generative AI is less transparent and more prone to errors.
– The illusion of human-like interaction in generative AI creates an aura of authority, leading to misplaced trust regardless of the system’s actual reliability.

Many organizations are finding it difficult to generate a solid return on investment from their artificial intelligence projects. A recent MIT study revealed that up to 95% of enterprise AI use cases fail to deliver meaningful results. This widespread underperformance raises important questions about what’s holding businesses back from realizing AI’s promised benefits.

New research from SAS and IDC points to a critical barrier: a fundamental lack of trust in the AI systems companies are deploying. The study highlights that while businesses are rapidly adopting AI, they often don’t have confidence in the technology’s reliability or outputs. This trust gap directly impacts how deeply AI is integrated into core operations and, ultimately, its financial payoff.

Despite these trust issues, AI adoption continues to surge. The SAS-IDC survey found that 65% of organizations already use AI in some form, with another 32% planning to implement it within the next year. Gartner has projected that AI could automate or augment up to half of all internal business decision-making processes. The surprise is that this rapid adoption is happening alongside significant skepticism about AI's trustworthiness.

The survey, which gathered responses from more than 2,300 IT professionals and business leaders globally, uncovered a telling misalignment. While 78% of respondents claimed to have “complete trust in AI,” only 40% had actually implemented governance frameworks and explainability measures to ensure their AI systems were trustworthy. This gap between perceived trust and actual safeguards leaves much of AI’s potential untapped, according to Chris Marshall of IDC.

Three major obstacles are preventing businesses from trusting their AI tools and achieving better ROI. The study identifies weak cloud infrastructure, insufficient governance, and a shortage of AI-specific skills within the workforce as the primary roadblocks. While cloud issues and governance can often be addressed through technology upgrades and third-party partnerships, the skills gap presents a more complex challenge.

Fortunately for employees, most business leaders appear to be prioritizing training over layoffs. Developing even one AI-related skill can significantly boost an individual’s earning potential in their next role, suggesting that the human element remains a valuable asset in the AI era.

The research also uncovered a fascinating psychological bias in how people perceive different types of AI. Survey respondents reported trusting generative AI systems, like ChatGPT or Gemini, more than traditional machine learning models, even though the latter are typically more transparent and easier to understand. Generative AI’s ability to produce humanlike language creates an illusion of understanding and reliability that more mechanical systems lack.

This tendency to trust what feels human has significant implications. The study authors note that the more human an AI seems, the more trust people place in it, regardless of its actual reliability. In some cases, this trust can become excessive, leading users to form emotional attachments to chatbots or AI companions. This humanizing effect gives generative AI an aura of authority that may not be justified by its technical capabilities.

(Source: ZDNET)
