Altman: AI’s ‘Bubbly’ Phase Is Nothing to Fear

Summary
– OpenAI CEO Sam Altman dismisses concerns about an AI bubble bursting, viewing overinvestment as a normal part of technological revolutions.
– The AI boom has created massive demand for computing power, with companies like OpenAI forming partnerships to secure GPU access.
– OpenAI partners with AMD to use multiple generations of AMD Instinct GPUs, including a one-gigawatt deployment scheduled for late 2026.
– Despite heavy AI investments, studies show most enterprises haven’t seen measurable revenue or growth returns from AI implementations.
– Altman asserts that OpenAI can effectively monetize every GPU it acquires and could develop many more products with increased compute capacity.

The current surge in artificial intelligence investment reflects a transformative technological shift rather than a speculative bubble, according to OpenAI CEO Sam Altman. While acknowledging some “bubbly” market characteristics, Altman maintains that the substantial funding flowing into AI infrastructure and development represents a legitimate response to genuine technological advancement.
During OpenAI’s recent developer conference, Altman addressed concerns about potential overinvestment by comparing the AI boom to previous technological revolutions. “Market corrections naturally occur during periods of rapid innovation,” he observed, “but this doesn’t undermine the fundamental value being created.” His comments arrive amid ongoing questions about AI’s return on investment, with research indicating that most enterprises struggle to demonstrate measurable financial benefits from their AI implementations.
Industry analysts recognize this paradox. Gaurav Gupta, a Gartner vice president specializing in emerging technologies, notes that while concrete ROI remains elusive for many organizations, major technology companies continue making massive compute investments. “We’re witnessing hyperscalers, research labs, and advertising firms allocating enormous resources to secure computing power,” Gupta explains. “This signals widespread belief that large language models require further development before reaching their full potential.”
This conviction recently manifested in AMD’s strategic partnership with OpenAI, announced just before the developer conference. The agreement involves OpenAI deploying multiple generations of AMD Instinct GPUs within its AI infrastructure, beginning with a one-gigawatt installation scheduled for late 2026. As part of the arrangement, AMD granted OpenAI warrants for up to 160 million shares, approximately 10% of the company’s outstanding common stock.
The AMD collaboration represents just one component of OpenAI’s expanding infrastructure strategy. Recent months have seen the company secure additional substantial compute resources through partnerships with Nvidia and cloud provider CoreWeave. These agreements collectively represent commitments totaling hundreds of billions of dollars, underscoring the insatiable industry demand for computational power.
This compute hunger stems from both current operational needs and future development requirements. OpenAI executives consistently emphasize that new product capabilities depend directly on expanded computing resources. The company’s recent Sora video generation tool and Pulse news feature both demonstrated how compute constraints shape which features reach users.
Altman remains confident about the economic justification for these infrastructure investments. “Every additional GPU we acquire delivers immediate monetization potential,” he asserts. “If we could secure ten times our current compute capacity, we could develop numerous additional products and services that users would eagerly adopt.” This perspective suggests that for industry leaders like OpenAI, the primary constraint on growth isn’t market demand but rather access to sufficient computational resources.
(Source: ZDNET)