
AI Won’t Think Like Humans Soon – Here’s Why We’re Asking the Wrong Question

Summary

– Artificial general intelligence (AGI) remains far off, with current AI like large reasoning models (LRMs) only making tentative steps toward human-like reasoning.
– LRMs and LLMs operate on predictive analytics, not true reasoning, and their capabilities are often overhyped despite significant limitations.
– Current LRMs mimic reasoning through step-by-step chains of thought but lack genuine cognition and may produce flawed intermediate steps.
– Potential use cases for LRMs include coding, complex QA, and planning, but trust in their reliability for critical decision-making remains low.
– The goal of AI development should focus on complementing human intelligence with transparency and reliability, not replicating human reasoning flaws.

Artificial intelligence continues to make headlines, but the reality of machines thinking like humans remains firmly in the realm of science fiction. Despite advances in large language models (LLMs) and their more sophisticated counterparts, large reasoning models (LRMs), true human-like cognition is still decades away. These systems excel at pattern recognition and predictive analytics but fundamentally lack the nuanced reasoning that defines human intelligence.

The pursuit of artificial general intelligence (AGI), where machines can adapt and reason across diverse scenarios, remains an elusive goal. Current AI models operate on statistical correlations rather than genuine understanding, a limitation that becomes glaringly obvious in unpredictable real-world situations. Imagine a kitchen robot failing to react appropriately to a sudden fire, or to a pet disrupting meal prep; these are scenarios where human intuition outshines even the most advanced algorithms.

Industry experts caution against overestimating AI’s current capabilities. Robert Blumofe, CTO at Akamai, describes the current landscape as “AI success theatre,” where flashy demos and exaggerated claims create a false impression of progress. The truth is, today’s AI lacks the architectural foundation for true reasoning, relying instead on mimicking patterns rather than solving problems algorithmically.

Recent research from Apple underscores these limitations, revealing that LRMs, while promising, still fall short of genuine reasoning. Xuedong Huang, CTO at Zoom, notes that these models optimize for final answers rather than the reasoning process itself, often leading to flawed intermediate steps. Ivana Bartoletti, Wipro’s Chief AI Governance Officer, adds that while chain-of-thought techniques may improve, they remain a simulation of cognition, not the real thing.

Where does AI excel today? Enterprise applications like coding assistance, content generation, and customer service automation demonstrate tangible value. LRMs show promise in structured tasks such as coding, complex QA, and step-based problem-solving, where outcomes can be verified. However, challenges persist in subjective domains like troubleshooting or multi-step planning, where human judgment remains irreplaceable.

Trust remains a critical hurdle. Salesforce’s Caiming Xiong points out that while AI can perform impressively in controlled environments, reliability in high-stakes decision-making is still lacking. The goal isn’t to replicate human cognition with all its biases but to create AI that complements human intelligence, reasoning more rigorously or transparently where needed.

The path forward involves integrating AI with traditional tools and real-time data rather than relying solely on LLMs. As Petros Efstathopoulos of RSA Conference suggests, future systems will likely combine AI with external tools like search engines and simulation environments to push beyond current limitations.

Ultimately, the conversation shouldn’t focus on whether AI can think like humans but how it can enhance human capabilities. Transparency, reliability, and ethical oversight will define the next era of AI development, one where machines don’t replace human judgment but amplify it.

(Source: ZDNET)
