JEDEC has published a preliminary specification for next-generation HBM4 memory, which promises significant increases in capacity and bandwidth for artificial intelligence and high-performance computing systems.
JEDEC has presented the specification for next-generation HBM4 (High Bandwidth Memory) as it nears completion of the new DRAM standard, Tom's Hardware reports. According to the published data, HBM4 will support a 2048-bit interface per stack, albeit at a lower data rate than HBM3E. The new standard also provides for a wider range of stack heights, allowing it to be better tailored to different types of applications.
The new HBM4 standard will support 24 Gb and 32 Gb die densities and will offer 4-, 8-, 12-, and 16-layer stack configurations with TSV interconnects. JEDEC has tentatively agreed on data rates of up to 6.4 GT/s, but discussions are ongoing about the possibility of achieving even higher speeds.
A 16-layer stack of 32 Gb dies will provide 64 GB of capacity, meaning a processor with four such stacks could support 256 GB of memory with a peak bandwidth of 6.56 TB/s over an 8192-bit interface.
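The capacity and bandwidth figures above follow directly from the cited parameters. A back-of-the-envelope check (the helper names below are illustrative, not part of any JEDEC specification):

```python
# Sanity-check the HBM4 figures: 16 layers x 32 Gb dies per stack,
# four stacks per processor, 2048-bit interface per stack at 6.4 GT/s.

def stack_capacity_gb(layers: int, die_density_gbit: int) -> float:
    """Capacity of one HBM stack in gigabytes (8 bits per byte)."""
    return layers * die_density_gbit / 8

def peak_bandwidth_tbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth in TB/s: bus width (bits) x transfer rate, converted to bytes and TB."""
    return bus_width_bits * data_rate_gtps / 8 / 1000

per_stack = stack_capacity_gb(layers=16, die_density_gbit=32)     # 64.0 GB
total = 4 * per_stack                                             # 256.0 GB
bw = peak_bandwidth_tbps(bus_width_bits=4 * 2048, data_rate_gtps=6.4)

print(f"{per_stack:.0f} GB per stack, {total:.0f} GB total, {bw:.2f} TB/s peak")
# → 64 GB per stack, 256 GB total, 6.55 TB/s peak
```

The computed peak comes out to roughly 6.55 TB/s; the ~6.56 TB/s figure cited above reflects the same calculation with slightly different rounding.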
Although HBM4 will double the number of channels per stack compared with HBM3 and will have a larger physical footprint, the standard provides for compatibility: a single controller will be able to drive both HBM3 and HBM4. Different substrates will be required, however, to accommodate the different form factors. Interestingly, JEDEC made no mention of integrating HBM4 memory directly onto processors, which is perhaps the most intriguing prospect for the new memory type.
Earlier, SK hynix and TSMC announced a collaboration on the development of HBM4 base dies, and later, at its European Technology Symposium 2024, TSMC confirmed that it would use its 12FFC+ (12nm-class) and N5 (5nm-class) processes to produce them.
TSMC’s N5 process enables the integration of more logic and functions, with interconnect pitches of 6 to 9 microns, which is critical for on-die integration. The 12FFC+ process, based on TSMC’s 16nm FinFET technology, will enable cost-effective base dies that connect the memory stacks to host processors via silicon interposers.
It is worth noting that HBM4 is designed primarily for generative AI and high-performance computing, which involve processing very large volumes of data and performing complex calculations; it is therefore unlikely to appear in client devices such as consumer GPUs. SK hynix expects to launch HBM4 in 2026.