AMD’s ‘Medusa Halo’ APUs May Boost Memory Bandwidth by 80% with LPDDR6

Summary
– AMD is reportedly planning a high-end gaming APU refresh called Gorgon Halo, with a true next-generation part, Medusa Halo, expected around 2027-2028 featuring Zen 6 CPU cores and RDNA 5 graphics.
– A leak suggests Medusa Halo will support LPDDR6 memory, which would significantly increase memory bandwidth compared to the LPDDR5X used by the current Strix Halo.
– With LPDDR6, Medusa Halo could achieve up to 460.8 GB/s of memory bandwidth on a 256-bit bus, an 80% increase, and rumors suggest a 384-bit bus could push this to 691.2 GB/s.
– This increased bandwidth is critical as AMD targets its Halo APUs at both gaming and AI, where workloads such as LLM inference rely heavily on memory throughput.
– These plans are unconfirmed by AMD, as the company has not officially announced Gorgon or Medusa Halo, and current Strix Halo products are still being released.
The next generation of AMD’s high-performance APUs, codenamed Medusa Halo, is rumored to deliver a massive leap in memory performance by incorporating support for LPDDR6 memory. While still years away from a potential 2027-2028 launch, this platform could combine cutting-edge Zen 6 CPU cores and RDNA 5 graphics with a memory subsystem offering up to 80% more bandwidth than today’s designs. This advancement is critical as these chips are engineered not just for premium gaming but also for demanding artificial intelligence workloads, where memory throughput directly impacts performance.
Current flagship APUs like Strix Halo utilize a 256-bit LPDDR5X interface, achieving a peak bandwidth of 256 GB/s. The expected refresh, Gorgon Halo, might push this slightly higher. However, the shift to LPDDR6 technology represents a true generational jump. Even maintaining the same 256-bit bus width, LPDDR6’s projected 14,400 MT/s speed would enable a theoretical bandwidth of 460.8 GB/s. Some earlier speculation suggests Medusa Halo could feature an even wider 384-bit memory bus, which would catapult available bandwidth to an extraordinary 691.2 GB/s.
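The headline numbers follow directly from the standard peak-bandwidth formula: transfer rate multiplied by bus width. A minimal sketch in Python, assuming the leaked LPDDR6-14400 data rate alongside the LPDDR5X-8000 speed of current Strix Halo parts (the LPDDR6 figures and the 384-bit option are rumored, not AMD-confirmed):

# Back-of-envelope check of the rumored figures: peak bandwidth is simply
# transfer rate multiplied by bus width. The LPDDR6-14400 speed and the
# 384-bit bus are leaked/rumored values, not confirmed specifications.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbs(8_000, 256))   # Strix Halo, LPDDR5X-8000:   256.0 GB/s
print(peak_bandwidth_gbs(14_400, 256))  # Medusa Halo rumor, LPDDR6:  460.8 GB/s
print(peak_bandwidth_gbs(14_400, 384))  # rumored 384-bit variant:    691.2 GB/s

Under this formula, the rumored 256-bit LPDDR6 configuration lands at exactly 1.8 times today's 256 GB/s, matching the 80% uplift cited in the leak.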
This focus on memory is strategically important. For large language model inference and other AI tasks, the ability to quickly move data is paramount. A powerful integrated graphics processor also benefits immensely from high-bandwidth memory, as it shares this resource with the CPU cores. By dramatically increasing memory throughput, Medusa Halo would be positioned as a formidable solution for next-generation AI PCs and high-end mobile gaming systems.
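To see why, consider single-stream LLM decoding, which is typically memory-bound: generating each token requires streaming roughly the full set of model weights from memory, so peak bandwidth sets a hard ceiling on token rate. A rough, illustrative estimate follows; the 70-billion-parameter, 4-bit (0.5 bytes per parameter) model is a hypothetical example, and the figure ignores KV-cache traffic and compute limits:

# Rough illustration of why bandwidth matters: during single-stream decoding,
# each generated token requires reading roughly the full set of model weights,
# so memory bandwidth caps token throughput. The 70B-parameter, 4-bit model
# below is a hypothetical example; the estimate assumes a purely memory-bound
# workload and ignores KV-cache reads.

def decode_tokens_per_sec_ceiling(bandwidth_gbs: float,
                                  params_billions: float,
                                  bytes_per_param: float) -> float:
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / weight_bytes

for bw in (256.0, 460.8, 691.2):
    print(f"{bw:5.1f} GB/s -> ~{decode_tokens_per_sec_ceiling(bw, 70, 0.5):.1f} tokens/s")

For such a model, 256 GB/s caps decoding at roughly 7 tokens per second, 460.8 GB/s raises the ceiling to about 13, and 691.2 GB/s to nearly 20, which is why Halo-class parts are pitched at local AI workloads as much as at gaming.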
The competitive landscape is also evolving rapidly. Intel’s upcoming Panther Lake mobile processors will support faster LPDDR5X memory, though Intel has indicated it does not plan to build an integrated GPU as large as AMD’s Halo designs. Meanwhile, Apple’s unified architecture in its M-series chips, like the M3 Ultra, already delivers immense bandwidth through a very wide memory interface. AMD’s rumored roadmap aims to close this gap and potentially set a new standard for x86 APU memory performance.
It is crucial to view all these details as unconfirmed rumors. AMD has not officially announced a product named Medusa Halo, and the company’s public roadmap only extends so far. The recent introduction of new Strix Halo variants confirms that the current generation still has life, and most of AMD’s lineup through 2027 is expected to utilize RDNA 3.5 graphics. Medusa Halo would likely be a specialized, high-end exception featuring the more advanced RDNA 5 architecture. If the leaks hold true, this future APU could redefine performance expectations for integrated graphics and AI acceleration.
(Source: Tom’s Hardware)