Nvidia CEO Jensen Huang said the company plans to sell its Blackwell GPU accelerator for AI and HPC workloads for $30,000 to $40,000. However, this is a ballpark price as Nvidia is more inclined to sell an entire stack of data center components rather than only the graphics accelerator itself.
The performance of the Blackwell-based Nvidia B200 accelerator with 192 GB of HBM3E memory is undoubtedly impressive. But these numbers are achieved thanks to a dual-die chiplet design that contains 208 billion transistors (104 billion per die). Such a solution will cost significantly more to manufacture than the single-die GH100-based H100 accelerator with 80 GB of memory. Raymond James analysts estimate that each H100 costs Nvidia about $3,100 to produce, while each B200 should cost about $6,000.
Developing the GB200 wasn't cheap, and Nvidia's current GPU architecture and design costs exceed $10 billion, according to the company's chief executive.
Last year, Nvidia's partners sold the H100 for $30,000 to $40,000, when demand for these accelerators was at its peak and supply was limited by TSMC's production capacity.
It's worth considering that Nvidia has little desire to sell B200 modules or cards on their own. Instead, it is likely far more willing to sell DGX B200 servers with eight Blackwell GPUs, or even DGX B200 SuperPODs with 576 B200 GPUs, for millions of dollars each.
Jensen Huang emphasized that the company would prefer to sell supercomputers or DGX B200 SuperPODs bundled with more hardware and software that carry premium prices. Accordingly, Nvidia does not list B200 cards or modules on its website, only the DGX B200 and DGX B200 SuperPOD systems.
Source: tomshardware