SK Hynix says its HBM4 is ready for mass production • The Register


AMD and Nvidia have already announced their next-gen datacenter GPUs will make the leap to HBM4, and if SK Hynix has its way, it’ll be the one supplying the bulk of it.

On Friday, the South Korean memory giant announced that it had wrapped HBM4 development and was preparing to begin producing the chips in high volumes. The news sent SK Hynix's share price on a 7 percent rally, and for good reason.

High Bandwidth Memory (HBM) has become an essential component in high-end AI accelerators from the likes of Nvidia, AMD, and others. Both Nvidia’s Rubin and AMD’s Instinct MI400 families of GPUs, pre-announced earlier this year, rely on memory vendors having a ready supply of HBM4 in time for their debut in 2026.

The transition comes as the GPU slingers run up against the limits of existing HBM technologies, which currently top out at around 36 GB of capacity and about 1 TB/s of bandwidth per module, giving chips like Nvidia’s B300 or AMD’s MI355X about 8 TB/s of aggregate memory bandwidth.

With the move to HBM4, we’ll see bandwidth jump considerably. At GTC in March, Nvidia revealed its Rubin GPUs would pack 288 GB of HBM4 and achieve 13 TB/s of aggregate bandwidth. AMD aims to cram even greater quantities of memory onto its upcoming MI400-series GPUs, which will power its first rack-scale system called Helios.

As we learned at AMD’s Advancing AI event in June, the parts will pack up to 432 GB of HBM with an aggregate bandwidth approaching 20 TB/s.

SK Hynix says that it has effectively doubled the bandwidth of its HBM by increasing the number of I/O terminals to 2,048, twice what we saw on HBM3e. This, it argues, has also boosted energy efficiency by more than 40 percent.

While the DRAM typically found in servers isn't a major energy consumer, HBM is. With the shift from the 24 GB HBM3 stacks on AMD's MI300X to the 36 GB HBM3e modules found on the MI325X, power consumption jumped from 750 W to roughly a kilowatt per GPU.

SK Hynix says that, along with more I/O terminals and improved efficiency, its chips have also managed to exceed the JEDEC standard for HBM4, achieving operating speeds of 10 Gb/s per pin.
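Those figures are easy to sanity-check: a stack's peak bandwidth is simply its interface width multiplied by the per-pin data rate. A quick sketch (the 8 Gb/s HBM3e pin rate and eight-stack configuration are assumptions for illustration, not confirmed product specs):

```python
def stack_bandwidth_tbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in TB/s (decimal units)."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000  # bits -> bytes, GB -> TB

# HBM3e: 1024-bit interface at ~8 Gb/s per pin -> ~1 TB/s per stack
hbm3e = stack_bandwidth_tbps(1024, 8.0)

# HBM4 per SK Hynix: 2048-bit interface at 10 Gb/s -> 2.56 TB/s per stack
hbm4 = stack_bandwidth_tbps(2048, 10.0)

# Eight HBM3e stacks lines up with the ~8 TB/s aggregate on current GPUs
print(f"HBM3e per stack: {hbm3e:.2f} TB/s, x8 stacks: {8 * hbm3e:.1f} TB/s")
print(f"HBM4  per stack: {hbm4:.2f} TB/s")
```

The doubled interface width accounts for most of the generational jump; the announced aggregate figures for Rubin and the MI400 imply those parts will run their stacks below SK Hynix's 10 Gb/s ceiling.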

Which of the three big HBM vendors will end up supplying these chips remains to be seen. While SK Hynix has won the majority of Nvidia’s HBM business over the past few years, Samsung and Micron are also working to bring HBM4 to the market.

Micron began sampling 36 GB 12-high HBM4 stacks to customers in June. Much like SK Hynix's, Micron's stacks use a 2,048-bit interface and will deliver roughly twice the bandwidth of the HBM3e modules available today. The American memory vendor expects to ramp production of the stacks sometime next year.

Meanwhile, for Samsung, HBM4 presents a fresh opportunity to win Nvidia's business. The vendor has reportedly struggled to get its HBM3e stacks validated for use in the GPU giant's Blackwell accelerators. ®