Newswire

SPHBM4 Memory Spec Aims for Lower Costs, Not to Replace GDDR

Originally published on: December 14, 2025
Summary

– JEDEC is finalizing SPHBM4, a new memory standard offering HBM4 bandwidth with a narrower 512-bit interface, higher capacity, and lower costs by using conventional organic substrates.
– SPHBM4 addresses a key HBM limitation by reducing the interface width to free up silicon space, allowing for more memory stacks and greater capacity on AI accelerators.
– It maintains HBM4’s bandwidth through 4:1 serialization and uses standard HBM4 DRAM dies, simplifying controller design and supporting up to 64 GB per stack.
– While cheaper than HBM4, SPHBM4 remains more expensive than GDDR7 due to its complex manufacturing and will not displace GDDR7, especially in cost-sensitive markets such as gaming GPUs.
– A major advantage is enabling 2.5D integration on cheaper organic substrates without expensive silicon interposers, lowering costs and increasing design flexibility compared to solutions built on proprietary interfaces.

The upcoming SPHBM4 memory standard is poised to deliver the high-bandwidth performance of next-generation HBM4 technology but with a crucial twist: a significantly narrower interface and a focus on reducing integration costs. By utilizing a 512-bit bus and compatibility with conventional organic substrates, this new specification from JEDEC aims to make high-bandwidth memory more accessible for certain applications. However, it is not designed to supplant the dominant GDDR memory used in graphics cards, but rather to serve a different segment of the market where its unique balance of bandwidth, capacity, and cost makes sense.

High-bandwidth memory (HBM) has traditionally offered unbeatable performance and energy efficiency through its extremely wide 1024-bit or 2048-bit interfaces. The trade-off is that these interfaces consume substantial silicon area on a processor. This physical limitation restricts how many HBM stacks can be placed on a single chip, ultimately capping the total memory capacity for powerful AI accelerators. This constraint affects not just individual chips but also the potential of large computing clusters built with them.

The proposed Standard Package High Bandwidth Memory (SPHBM4) directly tackles this issue. It slashes the interface width from 2048 bits down to 512 bits. To maintain the same total bandwidth as full HBM4, the standard employs a 4:1 serialization scheme. The exact technical method, whether through a quadrupled data rate or a new encoding scheme, remains unspecified by JEDEC. The clear objective, however, is to preserve HBM4’s aggregate bandwidth while using a far narrower physical connection.
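The arithmetic behind that trade is straightforward: aggregate bandwidth is bus width times per-pin data rate, so quartering the pins while quadrupling the effective per-pin rate leaves the total unchanged. The minimal sketch below illustrates this; the 8 Gb/s per-pin figure for HBM4 is an assumption used for illustration, not a number confirmed in the article.

```python
# Back-of-the-envelope bandwidth comparison (a sketch; the 8 Gb/s per-pin
# rate for HBM4 is an assumption, not stated in the source article).

def stack_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Aggregate per-stack bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * per_pin_gbps / 8  # bits per second -> bytes per second

hbm4 = stack_bandwidth_gbs(bus_width_bits=2048, per_pin_gbps=8.0)
# 4:1 serialization: one quarter of the pins at four times the effective per-pin rate
sphbm4 = stack_bandwidth_gbs(bus_width_bits=512, per_pin_gbps=4 * 8.0)

print(f"HBM4:   {hbm4:.0f} GB/s per stack")    # 2048 GB/s
print(f"SPHBM4: {sphbm4:.0f} GB/s per stack")  # 2048 GB/s, same aggregate bandwidth
```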

Internally, an SPHBM4 package will incorporate a standardized base die, likely fabricated using a logic process at a foundry. It will also use standard HBM4 DRAM dies. This approach simplifies controller design at the logical level and ensures each memory stack can offer capacities on par with HBM4 and HBM4E, potentially reaching up to 64 GB. On paper, this architecture could allow a quadrupling of memory capacity compared to a standard HBM4 implementation. In reality, chip designers will face constant trade-offs, balancing increased memory capacity against the soaring costs of silicon real estate on advanced process nodes and the desire to integrate more compute power.
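The "quadrupling" claim follows from the same ratio: a 512-bit stack needs roughly a quarter of the interface beachfront of a 2048-bit stack, so four times as many stacks can share the same budget. The sketch below works through that math; the eight-stack baseline is a hypothetical accelerator configuration chosen for illustration, not a figure from the article.

```python
# Illustrative capacity ceiling under the article's quadrupling claim (a sketch;
# the 8-stack HBM4 baseline is a hypothetical configuration, not from the source).

STACK_CAPACITY_GB = 64          # upper end cited for HBM4/HBM4E-class stacks
INTERFACE_RATIO = 2048 // 512   # each SPHBM4 stack uses a 4x narrower interface

baseline_stacks = 8                                 # hypothetical HBM4 accelerator
sphbm4_stacks = baseline_stacks * INTERFACE_RATIO   # same interface area budget

print(f"HBM4 baseline:  {baseline_stacks * STACK_CAPACITY_GB} GB")  # 512 GB
print(f"SPHBM4 ceiling: {sphbm4_stacks * STACK_CAPACITY_GB} GB")    # 2048 GB
```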

A natural question arises: could this technology replace GDDR7 in gaming GPUs? SPHBM4 is engineered to prioritize raw bandwidth and capacity above all else, including power consumption and cost. While it may be cheaper than full HBM4, it remains a fundamentally expensive technology. SPHBM4 still requires costly stacked DRAM dies, an interface base die, through-silicon vias (TSVs), and advanced packaging processes. These elements dominate its manufacturing cost and do not scale down with production volume as efficiently as commodity GDDR7 memory. GDDR7 benefits from the massive economies of scale driven by the consumer gaming market, simpler packaging, and mature PCB assembly techniques. Consequently, replacing several GDDR7 chips with a single SPHBM4 stack might actually increase system costs rather than lower them.

The potential advantage of SPHBM4 lies in its implementation details. JEDEC states that the 512-bit interface enables 2.5D integration on conventional organic substrates, eliminating the need for expensive silicon interposers. This could significantly lower integration costs and offer greater design flexibility compared to solutions using proprietary interfaces. Using organic substrates also allows for longer electrical channels between the processor and memory, which can ease layout challenges in large packages and support greater memory capacity near the chip. While routing such a complex interface on organic materials presents its own engineering hurdles, this approach represents a key differentiator aimed at making high-bandwidth memory more viable for a broader range of designs.

(Source: Tom’s Hardware)

Topics

SPHBM4 standard, memory interface, HBM technology, AI accelerators, integration costs, memory capacity, silicon real estate, GDDR7 memory, serialization technique, memory packaging