
Google Bets on Data Center Expert to Lead AI Race

Originally published on: December 11, 2025
Summary

– Google has elevated Amin Vahdat to a new C-suite role as chief technologist for AI infrastructure, reporting directly to CEO Sundar Pichai.
– Vahdat is a long-time Google engineer and computer scientist who has been building the company’s AI backbone, including custom chips and internal networks, for 15 years.
– His recent work includes unveiling the powerful Ironwood TPU and overseeing the development of Google’s Jupiter network and Axion data center CPUs.
– This infrastructure, like the TPUs and the Borg software system, is critical for Google’s AI operations and provides a competitive edge against rivals.
– The promotion signals the strategic importance of AI infrastructure and may also be a key retention move for a vital executive.

Google has placed a significant strategic bet by appointing a key architect of its data center technology to a top leadership role, signaling the immense priority of AI infrastructure in its competitive battle. The company has elevated Amin Vahdat to the newly created position of chief technologist for AI infrastructure, where he will report directly to CEO Sundar Pichai. This move underscores the critical nature of hardware and systems engineering as Google commits to massive capital expenditures, with 2025 spending expected to reach as much as $93 billion and even greater investment anticipated for the coming year.

Vahdat is a seasoned veteran within the organization, having spent the last fifteen years developing the foundational systems that power Google’s AI ambitions. His background is deeply academic; he holds a PhD from UC Berkeley, began his career as a research intern at Xerox PARC in the 1990s, and served as a professor before joining Google in 2010. With hundreds of published papers, his research has consistently focused on achieving extreme efficiency in large-scale computing.

His influence is already highly visible. Just months ago, in his previous role as VP and GM of ML, Systems, and Cloud AI, he publicly unveiled Google’s latest generation of custom AI chips, the Ironwood TPU. The performance metrics he shared were extraordinary, claiming a single pod of over 9,000 chips could deliver 42.5 exaflops of computing power. Vahdat highlighted the explosive growth in demand, noting that the need for AI compute has skyrocketed by a factor of 100 million in less than a decade.
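Those headline figures imply striking per-chip performance. As a rough sanity check (assuming the publicly reported pod size of 9,216 chips, which is consistent with the article's "over 9,000"), the per-chip throughput works out to roughly 4.6 petaflops:

```python
# Back-of-the-envelope check of the Ironwood TPU pod figures cited above.
# Assumption: pod size of 9,216 chips (reported figure; article says "over 9,000").
POD_EXAFLOPS = 42.5          # claimed pod-level compute
CHIPS_PER_POD = 9_216        # assumed pod size

pod_flops = POD_EXAFLOPS * 1e18          # convert exaflops to FLOP/s
per_chip_flops = pod_flops / CHIPS_PER_POD

# Roughly 4.6e15 FLOP/s, i.e. ~4.6 petaflops per chip
print(f"Per-chip throughput: {per_chip_flops / 1e15:.2f} petaflops")
```

This is only arithmetic on the publicly claimed numbers, not a statement about precision or benchmark methodology.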

Beyond public announcements, Vahdat’s work has been instrumental in building Google’s behind-the-scenes technological advantages. He has overseen the development of the custom TPU chips used for AI training and inference, a hardware edge crucial for competing with rivals like OpenAI. He also spearheads the Jupiter network, Google’s ultra-high-speed internal data center network. This system is the vital connective tissue for all services, from YouTube and Search to global AI training clusters, with Vahdat noting its capacity has reached a scale capable of supporting a simultaneous video call for every person on the planet.

His responsibilities extend further into core software systems. Vahdat has been deeply involved with Borg, Google’s sophisticated cluster management software that acts as the operational brain for its global data centers, intelligently allocating workloads across millions of servers. Additionally, he has guided the creation of Axion, Google’s first custom Arm-based data center CPU, which represents a strategic move into more efficient general-purpose computing hardware.

In essence, Vahdat’s expertise is woven into the very fabric of Google’s AI capabilities. Promoting such a pivotal internal figure to an executive role reporting to the CEO is not just a recognition of past contributions. In a fiercely competitive market where top AI talent is relentlessly pursued, this elevation also serves as a powerful retention strategy, ensuring the architect of critical infrastructure remains to execute the company’s long-term vision.

(Source: TechCrunch)
