Microsoft to Keep Buying Nvidia, AMD AI Chips Despite In-House Designs

Summary
– Microsoft has deployed its first in-house AI chip, the Maia 200, designed for running AI models; the company claims it outperforms competitors’ latest chips.
– A key driver for cloud companies developing custom chips is the ongoing difficulty and high cost of acquiring sufficient Nvidia hardware.
– Despite its own chip development, Microsoft’s CEO stated the company will continue to purchase chips from partners like Nvidia and AMD.
– The Maia 200 chip will first be used by Microsoft’s internal Superintelligence team to build its own frontier AI models.
– The chip will also support OpenAI’s models on Azure, though advanced AI hardware remains scarce for all users.

Microsoft has begun deploying its first internally designed AI accelerator, the Maia 200, within its data centers, with a broader rollout planned for the coming months. This strategic move highlights the intense competition among cloud providers to secure advanced computing power for artificial intelligence workloads. The company positions the Maia 200 as an “AI inference powerhouse,” engineered specifically for the demanding task of running trained AI models in live environments. Early performance data released by Microsoft suggests the chip delivers processing speeds that surpass comparable offerings from rivals, including Amazon’s latest Trainium chips and Google’s newest Tensor Processing Units.
This push toward custom silicon is largely driven by the persistent and costly supply constraints for high-end AI processors from market leader Nvidia. Despite successfully developing its own state-of-the-art hardware, Microsoft’s leadership has made it clear that this is not an exclusive path. CEO Satya Nadella emphasized that the company will continue its substantial purchases of AI chips from both Nvidia and AMD. He framed the strategy as one of parallel innovation and partnership, rather than a winner-takes-all competition.
“We have a great partnership with Nvidia, with AMD. They are innovating. We are innovating,” Nadella stated. He cautioned against focusing solely on who is temporarily ahead in the race, noting the need for sustained long-term advancement. His comments underscore a pragmatic approach to hardware strategy: “Because we can vertically integrate doesn’t mean we just only vertically integrate.”
The initial deployment of the Maia 200 chips will be prioritized for Microsoft’s internal “Superintelligence” team, led by former Google DeepMind co-founder Mustafa Suleyman. This team is tasked with developing the company’s own cutting-edge, or “frontier,” AI models. The initiative is part of a broader effort to cultivate in-house AI capabilities, potentially reducing future reliance on external model developers such as OpenAI and Anthropic. Suleyman publicly celebrated the milestone, noting that his team would be the first to use the new silicon in its development work.
Beyond internal use, the Maia 200 will also be made available to support OpenAI’s models running on Microsoft’s Azure cloud platform. However, the overarching challenge of securing sufficient advanced AI hardware remains, affecting both external Azure customers and Microsoft’s own engineering groups. The introduction of the Maia 200 represents a significant step in Microsoft’s multi-faceted approach to navigating the critical and competitive landscape of AI infrastructure.
(Source: TechCrunch)