
AI Leaders Share Their Superintelligence Concerns

Summary

– AI experts warn that unregulated development of superintelligence could lead to human extinction and loss of control.
– Over 19,000 people including AI pioneers signed a statement calling for halting superintelligence development until safety is proven.
– A poll shows 64% of Americans believe superhuman AI should not be developed until proven safe, or should never be developed at all.
– Superintelligence refers to hypothetical AI that greatly exceeds human cognitive abilities across all domains.
– Previous calls for AI development pauses have been ignored as companies continue rapid advancement due to competitive pressure.

The conversation surrounding artificial intelligence has taken a serious turn, with prominent figures voicing deep concerns about the unchecked development of superintelligence. A stark warning from thousands of experts, including AI pioneers, highlights the potential for catastrophic outcomes if this powerful technology is not managed responsibly. They argue that the current competitive rush among labs poses an existential threat that demands immediate attention and regulation.

A recent statement from the Future of Life Institute, a nonprofit focused on existential risks from AI, calls for a halt to superintelligence development. The organization defines this concept as a hypothetical machine intelligence surpassing human cognitive abilities across all tasks. The statement insists that progress should pause until two conditions are met: a broad scientific consensus confirms its safety and controllability, and the public provides strong approval for moving forward.

Max Tegmark, President of FLI and a physicist, emphasizes the unique danger. Unlike typical technology problems that can be addressed after release, he suggests that superintelligence carries irreversible risks. Tegmark compares it to handing control of the planet to an alien intelligence, a decision he believes requires explicit public consent. Supporting this view, a poll conducted by FLI found that nearly two-thirds of American adults believe superhuman AI should not be advanced until proven safe, or should never be created at all.

The petition has garnered significant support, with over 19,000 signatures from a diverse group of leaders. Notable signatories include Geoffrey Hinton and Yoshua Bengio, often called the “Godfathers of AI” for their foundational work in neural networks. They are joined by computer scientist Stuart Russell, Apple cofounder Steve Wozniak, and various other influential personalities from technology, government, and media.

Defining “superintelligence” itself presents challenges, as the term often blurs the line between scientific possibility and promotional hype. Similar to artificial general intelligence (AGI), it describes a theoretical machine capable of outperforming the human brain in every domain. The concept was popularized by philosopher Nick Bostrom in his 2014 book, which served as a cautionary tale about self-improving AI systems that might eventually operate beyond human oversight. Bostrom’s definition refers to an intellect that greatly exceeds human cognitive performance in virtually all domains of interest, though what counts as “greatly exceeds” remains open to debate.

Despite these uncertainties, several companies have embraced the term. Meta recently established an internal division named Superintelligence Labs dedicated to this goal. Around the same time, OpenAI’s Sam Altman published a blog post suggesting that superintelligence is on the near horizon. Interestingly, the FLI petition references a 2015 post from Altman where he described superhuman machine intelligence as potentially the greatest threat to humanity’s future. Tegmark states that the new statement aims to stigmatize superintelligence, hoping it will eventually carry the same social disapproval as other widely condemned practices.

This is not the first time experts have raised alarms. In 2023, many of the same individuals signed an open letter advocating for a six-month pause on training advanced AI models. Although that letter sparked media discussion and public debate, it failed to slow the rapid commercialization and development of new AI systems. Competitive pressure within the largely unregulated industry proved too strong for any meaningful moratorium to take hold.

That competitive drive has only intensified, spilling beyond Silicon Valley to become a global issue. Some political and tech leaders frame the AI race as a critical geopolitical and economic contest, particularly between the United States and China. Concurrently, safety researchers from leading AI firms like OpenAI, Anthropic, Meta, and Google have issued smaller-scale statements about the importance of monitoring AI models for risky behaviors as the technology evolves.

(Source: ZDNET)
