Cerebras Systems has introduced a specialized chip for artificial intelligence computing, the third generation of its wafer-scale processors, and a true semiconductor "monster" of record size: the Cerebras WSE-3 packs 900,000 cores and 4 trillion transistors.
The WSE-3 (Wafer Scale Engine) is a single silicon die with a total area of 46,225 mm², 57 times larger than Nvidia's H100 chip. It is manufactured on TSMC's 5nm process and is optimized for training neural networks of up to 24 trillion parameters, with a claimed peak AI performance of 125 petaflops. The WSE-3 carries 44 GB of on-chip SRAM with a bandwidth of 21 petabytes per second, and it can be configured with 1.5 TB, 12 TB, or 1.2 PB of external memory.
For comparison, the previous-generation Cerebras WSE-2 had 2.6 trillion transistors, 400,000 cores, and less memory.
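The scale of these gains can be checked directly from the published figures. A quick sketch, using the article's numbers plus Nvidia's published H100 die area of roughly 814 mm² (an outside figure, not from the article):

```python
# Spec figures as reported for WSE-3 and WSE-2.
WSE3 = {"cores": 900_000, "transistors": 4e12, "area_mm2": 46_225}
WSE2 = {"cores": 400_000, "transistors": 2.6e12}

# Nvidia's published H100 die area (~814 mm2); assumed, not from the article.
H100_AREA_MM2 = 814

core_gain = WSE3["cores"] / WSE2["cores"]                    # 2.25x more cores
transistor_gain = WSE3["transistors"] / WSE2["transistors"]  # ~1.54x more transistors
area_ratio = WSE3["area_mm2"] / H100_AREA_MM2                # ~57x the H100 die area

print(f"cores: {core_gain:.2f}x, transistors: {transistor_gain:.2f}x, "
      f"area vs H100: {area_ratio:.0f}x")
```

The area ratio comes out at about 57, consistent with the "57 times larger" comparison above.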
The price of the WSE-3 has not been announced, but it is safe to assume it will be far higher than that of Nvidia's specialized H100 accelerators, which sell for around $30,000 on average.
Cerebras Systems also announced the CS-3 AI supercomputer, which can train models ten times larger than GPT-4 and Gemini. The CS-3 is aimed at enterprise and hyperscale users and delivers far higher performance than any current GPU. Sixty-four CS-3 systems will power the Condor Galaxy 3 supercomputer, which will provide 8 exaflops of AI performance.
Source:
Wccftech