
Intel Unveils 160GB Energy-Efficient Inference GPU in New Annual Release

Summary

– Intel revealed “Crescent Island,” a 160-GB, energy-efficient data center GPU optimized for inference workloads on air-cooled enterprise servers.
– The GPU features Intel’s Xe3P microarchitecture for performance-per-watt efficiency and supports a broad range of data types.
– Intel plans to start sampling Crescent Island with customers in the second half of 2026 as part of a new annual GPU release cadence.
– The launch reflects Intel’s strategy to compete in the AI infrastructure market with an open systems and software architecture, after more than 15 years of setbacks in accelerator chips.
– The company is developing an open software stack for heterogeneous AI systems to enable early optimizations before Crescent Island’s release.

Intel has officially launched its new Crescent Island data center GPU, a 160-gigabyte model engineered specifically for energy-efficient AI inference tasks. This product introduction signals the start of an annual GPU release schedule, a strategic move designed to strengthen Intel’s position in the competitive AI infrastructure market. The announcement was made at the 2025 OCP Global Summit, highlighting the company’s renewed focus on providing open, scalable systems for artificial intelligence applications.

Built with Intel’s Xe3P microarchitecture, the Crescent Island GPU is optimized for performance-per-watt, making it a strong candidate for air-cooled enterprise servers. It comes equipped with 160 GB of LPDDR5X memory and supports a wide variety of data types, positioning it as a cost-effective and power-conscious solution for running demanding inference workloads. According to Intel, the GPU is “power- and cost-optimized” to meet the needs of modern AI deployments.

The semiconductor firm plans to begin sampling Crescent Island with its customers during the second half of 2026. In the meantime, Intel is advancing an open and unified software stack on its existing Arc Pro B-Series GPUs. This preparatory work is intended to allow for early optimizations and smooth integration into heterogeneous AI systems before the new hardware becomes widely available.

This launch aligns with earlier reports that Intel had a lower-power server GPU in development, confirming the company’s commitment to this segment of the market. No new details were shared regarding “Jaguar Shores,” Intel’s next-generation GPU aimed at rack-scale platforms that was announced earlier this year.

Sachin Katti, Intel’s chief AI and technology officer, emphasized the product’s strengths in a recent briefing. He noted that Crescent Island offers enhanced memory bandwidth and substantial memory capacity, describing it as a “fantastic product for token clouds and enterprise-level inference.” These characteristics are expected to make it particularly valuable for businesses running large-scale AI models.

The introduction of an annual GPU release cadence follows similar strategies recently adopted by rivals Nvidia and AMD. For Intel, this represents a significant effort to recover ground in the accelerator market after more than 15 years of challenges under its last four CEOs. Katti, appointed by CEO Lip-Bu Tan in April to steer the company’s AI strategy, explained that Intel’s new vision is built around an open systems and software architecture. This approach is intended to deliver “right-sized” and “right-priced” computing power necessary for future agentic AI workloads.

He further elaborated that the company is focused on building scalable, heterogeneous systems. These systems aim to provide a zero-friction experience for agentic AI workloads while delivering the best performance per dollar. This open architecture, Katti said, will offer customers and partners greater choice at both the systems and hardware layers, creating opportunities for multiple vendors to participate. He added that as Intel develops additional disruptive technologies, they can be seamlessly integrated into this open, heterogeneous framework.

(Source: CNN)
