AI & TechArtificial IntelligenceBigTech CompaniesNewswireTechnology

AI Pioneer Warns: Self-Modifying Code Could Lead to Loss of Control

▼ Summary

– Geoffrey Hinton warns that AI-enhanced machines “might take over” if humans aren’t careful, potentially outsmarting humans within five years.
– Hinton suggests AI could escape control by writing its own code to self-modify, calling it a serious concern.
– Hinton left Google in May 2023 to speak freely about AI risks, despite his pioneering contributions to AI and deep learning.
– Scientists, including Hinton, admit they don’t fully understand how AI systems work or evolve, referring to it as a “black box” problem.
– Other AI experts, like Yann LeCun, dismiss fears of AI replacing humanity, arguing humans can stop dangerous technology.

One of artificial intelligence’s most respected pioneers has issued a stark warning about the technology’s potential to outpace human control. Geoffrey Hinton, often called the “Godfather of AI,” recently cautioned that self-modifying AI systems could surpass human intelligence within five years, posing existential risks if left unchecked.

During a recent interview, Hinton explained how advanced AI could rewrite its own code, fundamentally altering its capabilities without human oversight. This ability to self-improve, he argued, might lead to scenarios where AI systems evolve beyond our ability to regulate or comprehend them. His concerns stem from decades of research in deep learning, a field he helped pioneer—work that earned him the prestigious Turing Award in 2018.

Hinton’s decision to leave Google earlier this year was driven by his desire to speak openly about AI’s dangers. Despite his contributions to the technology, he admits that even experts don’t fully grasp how modern AI systems operate. The so-called “black box” problem—where algorithms generate complex neural networks that function effectively but mysteriously—remains a major challenge.

Not all AI leaders share Hinton’s alarm. Some, like fellow Turing Award winner Yann LeCun, dismiss fears of AI domination as exaggerated, arguing that humans retain ultimate control over technological development. However, Hinton insists that underestimating AI’s rapid evolution could be a critical mistake. As systems grow more autonomous, the window for implementing safeguards may be closing faster than many realize.

The debate highlights a growing divide in the tech community. While AI promises transformative benefits, its unpredictable trajectory demands urgent discussion about ethics, governance, and the boundaries of machine intelligence. Without proactive measures, Hinton warns, humanity risks creating systems that no longer answer to their creators.

(Source: CNBC)

Topics

ai surpassing human intelligence 95% ai self-modification risks 90% geoffrey hintons warning 85% black box problem ai 80% debate among ai experts 75% ethics governance ai 70% human control over ai 65%