
Karen Hao: The True Cost of AI’s Empire and AGI Belief

Summary

– Every empire is driven by an ideology that justifies expansion, with AI’s being the promise of AGI to benefit all humanity.
– OpenAI is described as an empire due to its immense economic and political power, reshaping geopolitics and daily life globally.
– The pursuit of AGI has led to massive resource consumption, environmental strain, and the release of untested systems, prioritizing speed over safety and efficiency.
– Significant harms have emerged, including job loss, wealth concentration, and exploitation of workers in developing countries, while promised benefits remain unrealized.
– OpenAI’s structure and mission blur the lines between non-profit ideals and for-profit goals, risking the disregard of real-world harms in favor of ideological beliefs.

Every powerful empire throughout history has been driven by a core ideology, a set of beliefs that justifies its expansion, even when the consequences contradict its original purpose. In the case of today’s artificial intelligence sector, that ideology is built around the pursuit of artificial general intelligence (AGI), a vision promising to elevate all of humanity. Leading this charge is OpenAI, whose influence now rivals that of many nations in terms of economic and political power.

Journalist Karen Hao, author of the bestselling book Empire of AI, draws a direct comparison between the modern AI industry and historical empires. She notes that OpenAI’s actions are reshaping global politics, economies, and daily life on a scale so vast it can only be described as imperial. The company defines AGI as a system capable of outperforming humans in most economically valuable tasks, claiming it will drive abundance, accelerate scientific discovery, and turbocharge the global economy.

These ambitious but vague promises have fueled breakneck growth, characterized by enormous resource consumption, massive data scraping, and intense energy demands. Critics argue that this relentless push for scale has come at a steep cost, prioritizing speed over safety, efficiency, and ethical considerations. According to Hao, the industry’s focus on scaling existing models rather than innovating new algorithms has led to avoidable harms, including environmental strain and risky deployments of untested systems.

Hao emphasizes that alternative paths exist. Advances in AI don’t have to rely solely on increasing computational power or data volume. Improved algorithms could achieve similar, or better, results with far fewer resources. But when companies like OpenAI frame the development of AGI as a winner-take-all race, speed becomes the ultimate priority, overshadowing other critical factors.

The financial commitments involved are staggering. OpenAI anticipates spending $115 billion by 2029, while Meta and Google plan investments in the tens of billions for AI infrastructure. Despite these expenditures, the promised benefits to humanity remain largely unrealized. Instead, tangible harms have emerged: job displacement, wealth concentration, and AI systems that sometimes promote misinformation or even contribute to mental health crises.

Hao highlights the human toll behind the scenes, including poorly paid workers in countries like Kenya and Venezuela who are exposed to traumatic content while performing tasks like data labeling and content moderation. These realities stand in stark contrast to the industry’s lofty rhetoric.

She points to projects like Google DeepMind’s AlphaFold as examples of beneficial, focused AI. This system, which predicts protein structures to aid drug discovery, operates without causing widespread social or environmental damage. It relies on specialized data rather than indiscriminate internet scraping, avoiding many of the pitfalls associated with large language models.

Another narrative used to justify rapid AI development is the geopolitical race against China, with claims that Western leadership will promote liberal values worldwide. Hao argues the opposite has occurred: the gap between the U.S. and China has narrowed, and Silicon Valley's influence has often reinforced illiberal trends globally.

While tools like ChatGPT have undoubtedly boosted productivity in areas like coding, writing, and customer service, OpenAI's unusual structure (part nonprofit, part for-profit) complicates how it measures its real-world impact. Recent moves toward going public via a partnership with Microsoft have only intensified concerns that commercial interests may be overshadowing the organization's original mission.

Former safety researchers at OpenAI have expressed worry that public enthusiasm for products like ChatGPT is being mistaken for genuine benefit to humanity. Hao echoes these concerns, warning of the dangers when belief in a mission becomes so overpowering that it blinds organizations to mounting evidence of harm. When ideology overrides reality, the results can be both dark and dangerously detached from the world these systems are meant to improve.

(Source: TechCrunch)
