
Microsoft’s Nadella: We Already Have the AI Data Centers

Summary

– Microsoft CEO Satya Nadella announced the deployment of the company’s first massive AI system, the first of many Nvidia “AI factories” slated to run OpenAI workloads in Azure data centers.
– Each system consists of more than 4,600 Nvidia GB300 rack computers with Blackwell Ultra GPUs, connected via Nvidia’s InfiniBand networking technology, acquired through Mellanox.
– Microsoft plans to deploy hundreds of thousands of Blackwell Ultra GPUs globally, with the announcement strategically timed after OpenAI’s recent data center deals with Nvidia and AMD.
– The company emphasized its existing global infrastructure of more than 300 data centers in 34 countries, positioning it to meet current and future AI demands, including models with hundreds of trillions of parameters.
– More details on Microsoft’s AI workload expansion will be shared by CTO Kevin Scott at TechCrunch Disrupt in San Francisco from October 27 to 29.

Microsoft CEO Satya Nadella has unveiled the company’s first large-scale AI infrastructure system, signaling a major expansion of its cloud computing muscle. In a social media post, Nadella shared a video of what he described as the initial deployment of an AI “factory”, a term popularized by Nvidia, and confirmed this marks the first of many such systems set to roll out across Microsoft Azure’s worldwide data center network. These installations are designed specifically to handle workloads from OpenAI.

Each AI factory consists of a cluster with more than 4,600 Nvidia GB300 rack computers, all equipped with the highly sought-after Blackwell Ultra GPU chip. The units are interconnected using Nvidia’s ultra-fast InfiniBand networking technology. Nvidia’s early investment in InfiniBand, through its $6.9 billion acquisition of Mellanox in 2019, has given it a dominant position in the high-performance networking space alongside its AI processors.

Microsoft has committed to deploying hundreds of thousands of Blackwell Ultra GPUs as these AI factories are installed around the globe. While the sheer scale of the systems is impressive, the timing of the announcement is equally strategic. It follows recent reports that OpenAI, Microsoft’s partner and occasional competitor, has signed major data center agreements with both Nvidia and AMD. OpenAI is estimated to have secured around $1 trillion in commitments in 2025 for constructing its own data centers, with CEO Sam Altman indicating that further expansion is underway.

By making this announcement, Microsoft aims to emphasize that it already possesses a mature, global data center footprint of more than 300 facilities across 34 countries, and that it is fully prepared to support cutting-edge AI applications right now. The company stated these powerful AI systems are not only equipped to handle current demands but are also engineered to run future AI models featuring hundreds of trillions of parameters.

More details about Microsoft’s AI infrastructure scaling efforts are anticipated later this month. Kevin Scott, the company’s Chief Technology Officer, is scheduled to speak at TechCrunch Disrupt, taking place from October 27 to 29 in San Francisco.

(Source: TechCrunch)
