
Microsoft’s 132-Core Azure Cobalt 200 CPU Targets Performance Boost

Summary

– Microsoft’s Azure Cobalt 200 is a 132-core Arm-based server processor built on TSMC’s 3nm process, designed to enhance performance in Azure’s general-purpose compute tiers.
– The processor delivers over 50% higher performance than its predecessor, the Cobalt 100, and is Microsoft’s most efficient data center CPU to date.
– It features hardware accelerators for compression and cryptography, offloading tasks like I/O encryption to improve performance in workloads such as SQL Server.
– Cobalt 200 supports always-on memory encryption and Arm’s Confidential Compute Architecture, enabling secure multitenancy for regulated enterprises.
– The chip is now in production in Microsoft’s data centers, with broader customer availability planned for 2026, positioning it as a key in-house silicon for Azure’s cloud services.

Microsoft has officially launched its new Azure Cobalt 200 server processor, a 132-core Arm-based CPU engineered to deliver substantial performance improvements across Azure’s general-purpose computing services. Built on TSMC’s advanced 3nm manufacturing process, this chip represents a major step forward in Microsoft’s strategy to develop custom silicon tailored to its cloud infrastructure needs.

The Cobalt 200 leverages the Arm Neoverse platform, specifically adopting the latest CSS V3 subsystem. It integrates two 66-core chiplets to achieve a total of 132 cores, making it the most efficient data center processor Microsoft has produced to date. Internal testing reveals that the chip offers more than 50% higher performance compared to the earlier Cobalt 100 across a diverse range of real-world workloads.

This processor marks the first time Microsoft has utilized TSMC’s 3nm node for an Azure CPU. Each of the 132 cores is an Arm Neoverse V3 design with a dedicated 3MB L2 cache. The chip also incorporates 12 memory channels, and Microsoft has customized the memory controller to support always-on memory encryption as well as Arm’s Confidential Compute Architecture. This ensures that tenant memory remains isolated from the host or hypervisor, maintaining strong security in multitenant cloud environments with minimal performance impact.

A key enhancement in the Cobalt 200 is the integration of hardware accelerators for common data center operations. Compression and cryptography engines are now embedded directly into the SoC, offloading tasks that previously consumed significant CPU resources. Microsoft points to SQL Server as one workload that benefits immediately, with I/O encryption now handled directly on the silicon.
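One reason this kind of offload is attractive is that applications usually reach compression and cryptography through standard library calls, so a hardware engine can service the request beneath the API without code changes. As a rough illustration (not Microsoft's interface, and using Python's software `zlib` as a stand-in for any compression engine):

```python
import zlib

def compress_block(data: bytes, level: int = 6) -> bytes:
    # An application-level call like this is the natural offload
    # point: on silicon with a compression engine, the same API
    # can be serviced in hardware instead of on general CPU cores.
    return zlib.compress(data, level)

def decompress_block(blob: bytes) -> bytes:
    return zlib.decompress(blob)

# Repetitive telemetry-style payloads compress well either way;
# offload changes who does the work, not the result.
page = b"telemetry record " * 256
blob = compress_block(page)
ratio = len(page) / len(blob)
```

The same pattern applies to I/O encryption in a workload like SQL Server: the database keeps issuing ordinary read/write calls while the per-block cipher work moves onto dedicated silicon.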

Thermal and power efficiency remain central to the Cobalt 200’s design. Microsoft states the processor not only delivers over 50% higher performance than its predecessor but also maintains its position as Azure’s most power-efficient platform. Each core supports dynamic voltage and frequency scaling (DVFS), letting the chip match power consumption to workload demand. Combined with the 3nm process and dedicated accelerators, the architecture is built to maximize throughput without forcing all cores to run at peak frequency continuously.
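Why DVFS pays off so well follows from the classic CMOS dynamic-power model, P ≈ C·V²·f: because voltage scales down alongside frequency, power falls much faster than linearly. A toy sketch with illustrative numbers (not Cobalt 200 specifications):

```python
def dynamic_power(cap_f: float, volts: float, freq_hz: float) -> float:
    # Classic CMOS dynamic-power approximation: P = C * V^2 * f.
    return cap_f * volts**2 * freq_hz

# Hypothetical operating points for one core (illustrative only).
p_peak = dynamic_power(1e-9, 1.0, 3.0e9)  # full voltage and frequency
p_low = dynamic_power(1e-9, 0.7, 1.5e9)   # scaled-down point

# Halving frequency alone would halve power; dropping voltage
# with it cuts power by roughly 4x in this model.
savings = 1 - p_low / p_peak
```

This is why per-core scaling lets a 132-core part stay efficient under partial load: lightly used cores can run at low-power operating points while busy cores hold peak frequency.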

The Cobalt 200 is deployed as part of a comprehensive hardware stack that includes Microsoft’s own CPUs, networking components, storage offload engines, and a new hardware security module. Each server node pairs the processor with Azure Boost, a data processing unit that manages software-defined networking and remote storage. This arrangement offloads packet processing and I/O scheduling from the CPU, freeing up cores to focus exclusively on application and service workloads.

In a cloud market increasingly focused on AI accelerators, the Cobalt 200 occupies a distinct position. It is not designed to compete with specialized AI training chips like NVIDIA’s H200 or Amazon’s Trainium2. Instead, its role is to provide high sustained throughput for foundational Azure services, such as web front ends, microservices, transactional databases, streaming data ingestion, and CPU-bound AI inference tasks. In these areas, core count, cache size, and memory bandwidth are the most critical performance factors.
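The emphasis on core count for these scale-out services can be made concrete with Amdahl's law, which bounds the speedup a workload gets from more cores by its serial fraction. A short sketch (the 95% parallel fraction is an illustrative assumption, not a measured Azure figure):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    # Amdahl's law: best-case speedup when only parallel_fraction
    # of the work can be spread across `cores` cores.
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / cores)

# A highly parallel service (95% parallel) on 132 vs. 64 cores.
s_132 = amdahl_speedup(0.95, 132)
s_64 = amdahl_speedup(0.95, 64)
```

For embarrassingly parallel front-end and microservice traffic the parallel fraction is close to 1, so throughput tracks core count almost directly, which is exactly the regime the Cobalt 200's 132 cores target.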

A relevant comparison is Amazon’s Graviton3, which brought notable price-performance benefits to AWS for web and database applications. Microsoft, however, has pursued a more customized approach. The Cobalt 200’s 132-core configuration and 3nm process give it a larger execution footprint than Graviton3’s 64 cores on 5nm. The integrated compression and cryptography blocks are optimized for specific Azure workloads like SQL Server, Cosmos DB, and large-scale telemetry processing. Arm’s own performance data for the CSS V3 subsystem indicates per-socket gains of up to 50% over N2-based designs, aligning with Microsoft’s internal benchmarks.

Where the Cobalt 200 stands out is in its combination of general-purpose compute capability and robust security for multitenant environments. Always-on memory encryption and Confidential Compute Architecture support allow Microsoft to expand its confidential computing offerings without depending exclusively on x86-based technologies like AMD SEV-SNP. These security features are especially important for regulated enterprises selecting virtual machines, particularly as more organizations adopt Arm-based instances to reduce both costs and energy consumption.

The Cobalt 200 is already running in production servers within Microsoft’s data centers, with broader rollout and customer availability scheduled for 2026. Although the company has not specified which virtual machine families will use the new chip, it has confirmed that internal services, including components of Office, Teams, and Azure SQL, are already transitioning to the platform. Given the massive scale of these services, the Cobalt 200 is positioned to become one of the most widely deployed Arm-based CPUs in the cloud, rivaled only by AWS’s Graviton series.

The introduction of Cobalt also signals a broader strategic shift at Microsoft. After relying for decades on Intel and AMD, the company is now deeply committed to custom silicon design, not only for CPUs, but also for accelerators, networking, and memory infrastructure. This vertical integration gives Azure greater control over power, performance, and cost, allowing Microsoft to optimize its hardware based on internal telemetry and specific workload patterns.

As cloud infrastructure evolves to support large AI models and global inference demands, hyperscale providers are increasingly turning to in-house chips, software stacks, and development tools. The Azure Cobalt 200 is a clear indicator that the future of cloud computing will be built on custom, purpose-built silicon.

(Source: Tom’s Hardware)
