Why Healthy Skepticism Is Key to AI Success

Summary
– Companies are increasingly adopting generative AI selectively, with 39% of tech leaders planning regular use and 35% rapidly integrating it for bottom-line results.
– Top concerns include over-reliance on AI and potential inaccuracies, with 50% of respondents citing these as major issues affecting implementation.
– Businesses are applying AI to diverse tasks like cybersecurity (47%), software development (39%), and supply chain automation (35%) to enhance operations.
– AI ethics skills are emerging as the most in-demand competency for 2026 (44%), reflecting growing emphasis on responsible AI practices.
– Transparency about AI intentions and providing employee training are crucial for gaining workforce buy-in and managing job displacement fears.
Navigating the path to AI success requires a healthy dose of skepticism, as businesses move beyond initial excitement to demand tangible results. Generative AI has secured a prominent spot in corporate strategy discussions, yet it remains a largely unproven technology for driving business growth. Organizations are therefore proceeding with caution as they introduce these tools to their teams.
A recent survey highlights this cautious approach, revealing that 39% of technology leaders plan to use generative AI regularly but selectively in the coming months. This figure represents a significant 20% increase from the previous year. Meanwhile, 35% of respondents reported they are “rapidly integrating generative AI, and expecting bottom-line results.” An overwhelming 91% intend to increase their use of agentic AI for data analysis over the next year. The consensus is that the experimental phase is over; generative AI must now demonstrate its value by automating workflows, enhancing data accuracy, and supporting better decision-making.
“We’re entering a period of healthy skepticism that follows the natural progression of technology-adoption cycles,” observed IEEE senior member Santhosh Sivasubraman.
Even companies with a strong technological focus are blending optimism with prudence. The central challenge lies in integrating AI productively to enhance both human potential and operational processes. Carrie Rasmussen, chief digital officer at human capital management platform Dayforce, notes that the company's AI assistant has evolved into a personal productivity tool. “It serves as a coach, creator, researcher, collaborator — a magnitude of things,” she explained. “We’re extending that platform to connectors like email, Outlook, SharePoint, and HubSpot.” The next evolution will involve role-based AI technology, she added.
Business leaders are pointing to several areas where they expect AI to deliver meaningful value, with real-time cybersecurity vulnerability detection and attack prevention topping the list at 47%.
Yet the road to adopting generative AI is far from smooth. Many executives cite over-reliance on AI and accuracy concerns as major risks. Teams often assume models are more reliable than they are, and the confident tone of chatbot outputs can mask gaps in capability. In many cases, a straightforward analytical method would achieve the same result.
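To make that last point concrete, here is a minimal sketch of the kind of task that does not need a language model at all. The file name and column names are hypothetical; the example stands in for any deterministic analysis a team might otherwise route through a chatbot.

```python
# Illustrative only: summary statistics that teams sometimes ask a
# chatbot for are one deterministic call away with ordinary tooling.
# The file and column names below are hypothetical.
import pandas as pd

sales = pd.read_csv("sales.csv")  # hypothetical dataset

# Deterministic, auditable, and repeatable, unlike a model's paraphrase
# of the same numbers.
print(sales.groupby("region")["revenue"].agg(["mean", "sum"]))
```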
Evaluating productivity gains adds another layer of difficulty. Rasmussen referenced estimates suggesting that if half a company’s workforce regularly used ChatGPT, productivity might rise by 10%. She remains cautious about the figure. Before anything else, she said, organizations need a clear definition of what an active user actually is, whether that means daily usage, weekly usage, or something else entirely.
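How much the definition matters is easy to see with a back-of-the-envelope model. The sketch below is purely illustrative: the per-user gain and the usage shares are assumed numbers, not survey data or Dayforce figures.

```python
# Hypothetical back-of-the-envelope model: the projected company-wide
# productivity lift depends heavily on how "active user" is defined.
# All numbers below are illustrative assumptions.

def projected_lift(per_user_gain: float, active_share: float) -> float:
    """Company-wide lift if `active_share` of the workforce realizes
    `per_user_gain` from regular AI use."""
    return per_user_gain * active_share

# The oft-cited scenario (half the workforce active -> ~10% overall
# lift) implies roughly a 20% gain per active user.
per_user = 0.20

for definition, share in [("daily users", 0.15),
                          ("weekly users", 0.35),
                          ("monthly users", 0.50)]:
    print(f"{definition}: {projected_lift(per_user, share):.0%} projected lift")
```

Under these assumptions, the headline figure swings from roughly 3% to 10% depending solely on which usage threshold counts as “active.”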
Skills development is another sticking point. Employees naturally wonder how much of their work might shift toward automation. Rasmussen said one of the most frequent questions she hears is how leaders should address those concerns. Speculating about job loss only heightens anxiety, and broad declarations from CEOs can create unnecessary tension. Her recommendation is to focus on what can be controlled: updating job descriptions, preparing for new workflows, and equipping teams with the training needed for emerging roles. Finding seasoned AI professionals remains extremely difficult.
Dayforce’s current approach relies on public large language models, including OpenAI’s ChatGPT, rather than building in-house models. Rasmussen noted that while internal discussions have begun, any future development would revolve around smaller, specialized models designed to improve tasks like sales forecasting. For now, Dayforce uses OpenAI’s base model for retrieval-augmented generation (RAG) to power search features, an area where many organizations are concentrating their efforts.
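For readers unfamiliar with the pattern, the sketch below shows RAG at its smallest: embed a handful of documents, retrieve the most relevant ones for a question, and have the model answer from that context. It is a generic illustration using the public OpenAI Python SDK, not Dayforce’s implementation; the corpus, model choices, and top-k value are assumptions.

```python
# Minimal sketch of retrieval-augmented generation (RAG). Generic
# illustration, not Dayforce's implementation; the corpus, model
# names, and top-k choice are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "PTO requests must be submitted two weeks in advance.",
    "Quarterly sales forecasts are locked on the 25th of each month.",
    "VPN access requires a hardware security key.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str, k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity ranks documents by relevance to the question.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(docs[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("When are sales forecasts locked?"))
```

Grounding the model in retrieved documents, rather than fine-tuning or building a model from scratch, is what makes this the low-commitment entry point the article describes.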
Transparency plays a central role in securing employee support. Rasmussen stressed that the goal is to prepare current employees to step into new AI-driven roles and help shape them. Clear communication about near-term plans reduces fear and keeps teams focused on what matters: the tools, training, and opportunities directly in front of them.
The IEEE survey points to growing demand for AI ethics, which respondents ranked as the top skill for 2026 at 44%. Other sought-after capabilities include data analysis, machine learning, data modeling, and software development.
At Dayforce, the preparation work includes identifying internal AI champions who can advocate for the technology and guide their colleagues. These early adopters are testing tools, sharing examples, and offering support. The company is assessing which types of AI agents employees actually need, whether current products meet those needs, and if they’re mature enough for broad rollout. So far, Rasmussen says, many tools still aren’t ready for widespread deployment.
(Source: ZDNET)