Meta Unveils 4 New AI Chips for Its Systems

▼ Summary
– Meta announced four new custom chips (MTIA 300, 400, 450, 500) to power AI features and content ranking in its apps, developed in partnership with Broadcom and fabricated by TSMC.
– The company is using an iterative, accelerated development cycle for these chips to keep pace with rapidly evolving AI workloads, rather than traditional long-term cycles.
– The MTIA 300 is for training recommendation algorithms, while the other three chips are designed for inference, the process of running trained AI models to generate outputs.
– This chip development is part of Meta’s broader strategy to amass computing power for AI, though it will still purchase most AI hardware from firms like Nvidia and AMD in the near future.
– Meta’s announcement of this roadmap aims to counter recent reports that it was scaling back high-end, in-house chip development efforts.

Meta has introduced four new custom computer chips designed to power its artificial intelligence systems, marking a significant step in its strategy to build specialized hardware for its massive social platforms. These chips, part of the Meta Training and Inference Accelerator (MTIA) family, are engineered to handle the demanding workloads of generative AI features and the complex algorithms that rank content across Facebook and Instagram. This move underscores the company’s commitment to developing in-house silicon to keep pace with the rapid evolution of AI models, rather than relying solely on external suppliers.
Developed in partnership with Broadcom and manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), the new semiconductors are built on the open-source RISC-V architecture. One model, the MTIA 300, is already in production. The other three, the MTIA 400, 450, and 500, are scheduled for release between early and late 2027. Such an accelerated timeline is highly unusual for a social media company building its own physical computing infrastructure, and it reflects the intense pressure to innovate in the AI hardware space.
YJ Song, a Meta vice president of engineering, explained the rationale behind this iterative approach. AI models are advancing faster than traditional chip development cycles, meaning hardware can become outdated before it even launches. “We deliberately take an iterative approach,” Song stated. “Each MTIA generation builds on the last, using modular chiplets and incorporating the latest AI workload insights and hardware technologies.” This strategy allows Meta to adapt its hardware to the shifting demands of its AI systems without committing to a single, long-term design.
The MTIA 300 is primarily tasked with training the recommendation algorithms that curate content for billions of users daily. The forthcoming chips are focused on inference, the process by which trained models generate outputs like text or images. Meta claims the MTIA 400 offers performance that is “competitive with leading commercial products” and is expected in data centers soon. The MTIA 450, slated for early 2027, will feature double the high-bandwidth memory of the 400 model. The flagship MTIA 500, anticipated in late 2027, promises even greater memory capacity and includes new technologies for handling low-precision data.
This chip development forms a core part of Meta’s broader ambition to amass vast computing resources for frontier AI research. The company first revealed its silicon plans in 2023, joining a growing trend where major software and AI firms, including OpenAI, are pursuing custom accelerators tailored to their specific needs. Notably, OpenAI has also announced a partnership with Broadcom, mirroring Meta’s path.
The announcement also serves to counter recent reports that Meta was scaling back its ambitions to create high-end chips that could rival industry leaders like Nvidia. By unveiling this detailed roadmap, Meta signals a renewed commitment to its in-house hardware program. Even so, designing custom silicon remains an extremely expensive and technically challenging endeavor. Consequently, Meta is expected to continue purchasing the bulk of its AI processors from other companies for the foreseeable future.
This dual-track strategy is evident in the company’s recent procurement activities. The MTIA reveal comes on the heels of Meta securing multibillion-dollar deals to buy chips from both Nvidia and AMD. Furthermore, the company has entered an agreement to lease processors manufactured by Google. This combination of building and buying highlights the immense scale of computing power required to fuel the next generation of AI applications.
(Source: Wired)