AGI: The Most Dangerous Conspiracy Theory Today

Summary
– AGI is discussed in mystical terms in tech circles, with figures like Ilya Sutskever leading chants about it and cofounding a startup to prevent rogue AGI.
– Sutskever exemplifies mixed motivations among AGI evangelists, building the technology’s foundations while finding it terrifying and acting out of self-interest to control it.
– AGI is presented as a monumental, apocalyptic event by believers, similar to past technological ages but distinct because AGI does not yet exist.
– The idea of AGI has evolved from a pipe dream to a dominant industry narrative, driving investments in infrastructure and shaping corporate strategies and markets.
– AI leaders promote AGI’s potential for utopian benefits like prosperity and space colonization while simultaneously warning of existential risks such as human extinction.

The concept of Artificial General Intelligence (AGI) has evolved from a speculative idea into a powerful narrative, one that drives immense investment and shapes global technological priorities. Unlike current AI systems designed for specific tasks, AGI promises human-like reasoning and adaptability, a vision that captivates tech leaders and investors alike. Yet that vision increasingly resembles a modern-day belief system, complete with prophecies of salvation and doom.
In technology centers such as Silicon Valley, AGI is often discussed in almost spiritual terms. Ilya Sutskever, a cofounder and former chief scientist at OpenAI, reportedly led team chants of “Feel the AGI!” before departing the company in 2024. He left to cofound Safe Superintelligence, a startup devoted to preventing a rogue AGI, or at least keeping one under control. This trajectory highlights a recurring theme among prominent AI researchers: a deep personal investment in building technology they simultaneously fear. Sutskever himself described the arrival of superintelligence as “monumental, earth-shattering,” an event that will define a clear before and after. Asked about his motivation for trying to rein in the technology, he stated plainly, “I’m doing it for my own self-interest. It’s obviously important that any superintelligence anyone builds does not go rogue.”
This blend of grand ambition and existential dread is not unique to Sutskever. Throughout history, certain eras have produced groups convinced they were witnessing a pivotal transformation. Today, that transformation is the anticipated arrival of AGI. Shannon Vallor, a technology ethics scholar at the University of Edinburgh, notes that society is accustomed to being told a new technology will redefine the future. “It used to be the computer age and then it was the internet age and now it’s the AI age,” she observes. The critical distinction, however, is that AGI remains entirely theoretical, unlike computers and the internet, which materialized into tangible tools.
This gap between belief and reality is what makes the AGI phenomenon so peculiar. Having covered artificial intelligence for over ten years, I’ve watched AGI transition from a fringe concept to a central dogma energizing an entire sector. What was once a far-fetched ambition now underpins the valuation of some of the planet’s most powerful corporations and, by extension, influences the stability of financial markets. It is used to rationalize colossal investments in new power plants and data centers, all framed as essential infrastructure for a future that has not yet arrived. In their fervor, AI companies are aggressively promoting this hypothetical technology.
The promises made by industry leaders are nothing short of spectacular. Anthropic CEO Dario Amodei claims AGI will possess the collective intellect of an entire “country of geniuses.” Demis Hassabis of Google DeepMind predicts it will initiate “an era of maximum human flourishing, where we travel to the stars and colonize the galaxy.” Sam Altman, OpenAI’s CEO, envisions that it will “massively increase abundance and prosperity,” even encouraging people to enjoy life more and have more children. These are bold claims for a product that does not exist.
Yet the narrative is deliberately two-sided. When these same executives are not painting visions of utopia, they are warning of catastrophe. In 2023, Amodei, Hassabis, and Altman all signed a brief statement declaring that “mitigating the risk of extinction from AI should be a global priority” alongside threats like pandemics and nuclear war. Elon Musk has publicly estimated that AI has a 20% chance of annihilating humanity. This dual messaging, alternating between boundless promise and existential risk, creates a powerful feedback loop, fueling both investment and alarm in equal measure.
(Source: Technology Review)