
Google, Marvell in Talks to Build New AI Chips

Summary

– Google is in talks with Marvell Technology to develop two new AI chips: a memory processing unit and an inference-optimized TPU, though no contract has been signed.
– This follows Broadcom securing a long-term TPU supply agreement through 2031, with Google’s strategy being to diversify its supply chain by adding Marvell as a third design partner alongside Broadcom and MediaTek.
– The new chips focus on inference, reflecting a market shift where serving AI models to users is becoming the dominant compute cost over training.
– Marvell is a significant player in custom silicon, with major cloud clients and a recent partnership with Nvidia, positioning it at the intersection of GPU and ASIC ecosystems.
– The broader custom AI chip market is projected to grow rapidly, with Google’s multi-supplier approach aimed at mitigating pricing, supply, and strategic risks.

Recent discussions between Google and Marvell Technology signal a significant expansion of the search giant’s custom silicon strategy. While no contract has been finalized, the talks center on developing two new AI accelerator chips: a memory processing unit and a TPU optimized for inference. This move would add Marvell as a third key design partner alongside Broadcom and MediaTek, reinforcing Google’s push to diversify its supply chain and control costs as the custom ASIC market is forecast to surge.

According to reports, one proposed chip is a memory processing unit intended to complement Google’s existing Tensor Processing Units. The other is a new class of TPU engineered specifically for AI inference, the phase where trained models generate responses for users. Marvell would provide design services, a role similar to MediaTek’s involvement with Google’s recently launched Ironwood TPU. These exploratory talks follow closely on the heels of Broadcom securing a long-term agreement to supply Google with TPUs through 2031, indicating Google’s strategy is one of addition, not replacement.

This approach creates a multi-vendor architecture. Broadcom remains the primary partner for high-performance variants, MediaTek focuses on cost-optimized designs, and TSMC handles fabrication. Bringing Marvell into the fold represents a deliberate effort to foster competition across different segments of the chip program, preventing over-reliance on any single supplier.

The emphasis on inference hardware is a direct response to shifting computational demands. While training massive AI models is a monumental but finite task, inference workloads run continuously, scaling with user demand. As AI services reach hundreds of millions of users, inference becomes the dominant operational expense. Purpose-built inference silicon offers efficiency and cost advantages that general-purpose GPUs struggle to match, making it a critical competitive frontier.

Google’s seventh-generation Ironwood TPU, launched earlier this month, is explicitly billed as its first inference-optimized processor. It represents a massive leap in scale and performance. The potential Marvell-designed chips would supplement this lineup, possibly targeting different cost-performance profiles for the ballooning share of Google’s compute dedicated to serving models.

The relationship between Google and Marvell has deeper roots. In 2023, reports surfaced about a project codenamed “Granite Redux,” where Google explored using Marvell to replace Broadcom, anticipating annual savings in the billions. The current strategy appears more nuanced. Instead of a full substitution, Google is constructing a multi-supplier architecture, securing its long-term relationship with Broadcom while bringing in specialists for specific roles.

Marvell brings considerable credentials to the table. Its data center revenue hit a record $6.1 billion in its last fiscal year, with a custom silicon business running at a $1.5 billion annual rate. The company already designs chips for other cloud giants, including Amazon’s Trainium and Microsoft’s Maia accelerator. A recent $2 billion strategic investment from Nvidia and the acquisition of Celestial AI for its photonic interconnect technology further solidify Marvell’s position at the intersection of GPU and custom chip ecosystems. The market has responded positively, with Marvell’s stock rallying approximately 50% year-to-date.

Despite Marvell’s ascent, Broadcom’s position seems unshaken. It commands over 70% market share in custom AI accelerators, with staggering revenue growth and ambitious targets. Analysts project the company will generate tens of billions from its Google and Anthropic partnerships alone in the coming years. The broader custom chip market is expanding rapidly, projected to grow 45% in 2026, significantly outpacing GPU shipment growth, and is on track to become a $118 billion market by 2033.

For Google, this evolving strategy means managing a complex web of four manufacturing partners, its in-house team, and a product portfolio spanning training, inference, and general compute. This complexity is a strategic buffer. Relying on a single supplier, even one as powerful as Nvidia, introduces significant pricing, supply, and strategic risks. The inference focus of the Marvell talks highlights where the financial pressure is greatest. With billions of AI-powered search queries, Gemini interactions, and API calls processed daily, even marginal reductions in cost per inference compound into enormous annual savings.
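The scale of those savings can be made concrete with a back-of-envelope calculation. The figures below (daily query volume, unit cost per inference, efficiency gain) are illustrative assumptions for the sketch, not numbers reported in the article:

```python
# Back-of-envelope illustration: how a marginal per-inference cost
# reduction compounds at Google-like volumes.
# All three inputs are hypothetical assumptions.

DAILY_INFERENCES = 8.5e9      # assumed: billions of queries/API calls per day
COST_PER_INFERENCE = 0.0004   # assumed: baseline cost in dollars
REDUCTION = 0.10              # assumed: 10% efficiency gain from custom silicon

daily_savings = DAILY_INFERENCES * COST_PER_INFERENCE * REDUCTION
annual_savings = daily_savings * 365

print(f"Daily savings:  ${daily_savings:,.0f}")
print(f"Annual savings: ${annual_savings:,.0f}")
```

Under these assumed inputs, a 10% efficiency gain works out to roughly $340,000 per day, or on the order of $120 million per year, which is why even single-digit percentage improvements in inference hardware efficiency justify multi-vendor chip programs.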

While these discussions are preliminary and any resulting chip would be years from production, the trajectory is clear. Google is building a resilient, multi-source supply chain capable of supporting the planet’s most demanding AI inference workloads. For Marvell, a formal Google contract would cement its status as a leading custom AI chip designer. For Google, it represents another crucial step toward ensuring no single company holds the keys to its silicon future.

(Source: The Next Web)
