JEDEC is nearing completion of the next version of the HBM (High Bandwidth Memory) stacked-memory standard and is expected to publish the final HBM4 specification in the near future. The new standard will bring higher bandwidth, better energy efficiency, and larger per-stack capacity, which matters as demand for compute accelerators keeps growing, driven in part by generative AI.
The HBM4 standard doubles the number of channels per memory stack compared to HBM3, which means the chips will occupy a larger physical footprint. For broader compatibility, the standard allows a single controller to drive both HBM3 and HBM4 devices if necessary. The chips will use dies with densities of 24 and 32 Gbit, and manufacturers can stack between four and 16 such layers. As for HBM4 speeds, the JEDEC committee has agreed on data rates of up to 6.4 Gbit/s per pin and continues to discuss higher figures.
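The figures above can be turned into back-of-the-envelope per-stack numbers. A minimal sketch, assuming HBM3's known 1024-bit interface so that doubling the channel count implies roughly a 2048-bit interface for HBM4 (the final bus width is not stated in the article):

```python
def stack_capacity_gb(die_density_gbit: int, layers: int) -> float:
    """Capacity of one stack in gigabytes (8 bits per byte)."""
    return die_density_gbit * layers / 8

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak per-stack bandwidth in GB/s: bus width x per-pin rate."""
    return bus_width_bits * pin_rate_gbit_s / 8

# Largest configuration mentioned: 32 Gbit dies, 16 layers.
print(stack_capacity_gb(32, 16))      # -> 64.0 GB per stack

# Assumed 2048-bit interface at the agreed 6.4 Gbit/s pin rate.
print(peak_bandwidth_gbs(2048, 6.4))  # -> 1638.4 GB/s per stack
```

These are theoretical ceilings; effective bandwidth depends on the controller, refresh overhead, and access patterns.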
Among the first devices to use next-generation HBM memory will be Nvidia compute accelerators built on the Vera Rubin architecture, expected to debut in late 2025 or early 2026. Until then, Nvidia's top accelerators will continue to ship with HBM3e memory.
Source: TechPowerUp