Meta Broadcom AI Chip Partnership Extended to 2029

Summary
– Meta has expanded its partnership with Broadcom through 2029 to build several generations of its custom MTIA AI processors, starting with over a gigawatt of computing capacity.
– Broadcom’s CEO will leave Meta’s board to take an advisory role focused specifically on Meta’s custom chip strategy.
– The new MTIA chips will be the first custom AI silicon in the industry to use a 2-nanometer manufacturing process.
– These chips are for Meta’s internal use only, powering AI features and recommendation systems across its apps, unlike similar chips from Google or Amazon.
– The partnership is part of Meta’s massive infrastructure investment to compete in AI, which includes other large commitments to chips from AMD, Nvidia, and Arm.

In a major move to secure its artificial intelligence infrastructure, Meta has significantly deepened its collaboration with semiconductor leader Broadcom. The partnership, now extended through 2029, is centered on developing multiple generations of Meta’s proprietary MTIA processors. The initial commitment involves building over a gigawatt of computing capacity, a foundational step in what the company calls a sustained, multi-gigawatt rollout. Notably, the forthcoming chips will be the industry’s first custom AI silicon manufactured on an advanced 2-nanometer process node.
This expanded agreement solidifies Broadcom’s role in providing chip design, packaging, and networking technology for Meta’s Training and Inference Accelerator program. To avoid potential conflicts of interest, Broadcom CEO Hock Tan will transition from Meta’s board of directors to an advisory role focused specifically on this custom silicon strategy when his term concludes. The partnership’s scale is immense: the initial one-gigawatt capacity alone represents enough power to run approximately 750,000 U.S. households.
The MTIA program is already integral to Meta’s operations. The first-generation MTIA 300 chip currently handles ranking and recommendation algorithms across Facebook, Instagram, and other platforms. The roadmap includes three more chip generations through 2027, primarily optimized for AI inference, the real-time process of generating responses to user queries. Broadcom’s Ethernet networking technology will be crucial for linking Meta’s rapidly growing clusters of AI servers.
CEO Mark Zuckerberg framed the collaboration as essential for building the “massive computing foundation” required to eventually deliver advanced AI, or “personal superintelligence,” to billions of users. This vision aligns with the staggering capital expenditure plans Zuckerberg outlined earlier this year, targeting up to $135 billion in 2026 for AI infrastructure. The company is in a fierce race to match the capabilities of rivals like OpenAI and Google.
The Broadcom deal is the latest in a string of enormous chip procurement announcements from Meta in 2025. The company has already secured commitments for six gigawatts of AMD GPUs, millions of Nvidia processors, custom designs with Arm Holdings, and substantial capacity from cloud providers such as CoreWeave and Nebius. Unlike the cloud-centric AI chips from Google or Amazon, Meta’s MTIA chips are for internal use only, powering the AI-driven features and ad-targeting systems that form the core of its business model.
This strategic push into custom silicon follows a path pioneered by Google nearly a decade ago. It represents Meta’s calculated long-term bet that purpose-built silicon, meticulously optimized for its unique workloads, will ultimately deliver superior cost efficiency at its unprecedented scale compared to relying solely on general-purpose GPUs from suppliers like Nvidia.
(Source: The Next Web)