
Cerebras Intros 3rd Gen Wafer-Scale Chip For AI: 57x Larger Than Largest GPU, 900K Cores, 4 Trillion Transistors

Cerebras Systems has unveiled its third-generation Wafer Scale Engine chip, the WSE-3, which packs 900,000 AI-optimized cores and is built to train models of up to 24 trillion parameters.

Cerebras WSE-3 Is The Biggest AI Chip On The Planet: Trillions of Transistors To Train AI Models With Trillions of Parameters

Ever since the launch of its first Wafer Scale Engine (WSE) chip, Cerebras hasn't looked back, and its third-generation solution has now been unveiled with specifications to match its sheer size. As the name suggests, the chip is essentially an entire wafer's worth of silicon, and this time Cerebras is betting on the AI craze. The key specifications are highlighted below:

  • 4 trillion transistors
  • 900,000 AI cores
  • 125 petaflops of peak AI performance
  • 44GB on-chip SRAM
  • 5nm TSMC process
  • External memory: 1.5TB, 12TB, or 1.2PB
  • Trains AI models up to 24 trillion parameters
  • Cluster size of up to 2048 CS-3 systems
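The headline figures of 24 trillion trainable parameters and up to 1.2PB of external memory go together. A rough sizing sketch makes the connection clear; the bytes-per-parameter figures below are common rules of thumb for mixed-precision training, not numbers from Cerebras:

```python
# Back-of-the-envelope sizing for a 24-trillion-parameter model.
# Assumption (not from the article): mixed-precision training commonly
# needs roughly 16 bytes per parameter (fp16 weights and gradients plus
# fp32 Adam optimizer state), while the fp16 weights alone need 2 bytes.
params = 24e12

weights_tb = params * 2 / 1e12    # fp16 weights only
train_tb = params * 16 / 1e12     # weights + gradients + optimizer state

print(f"fp16 weights alone: {weights_tb:.0f} TB")  # 48 TB
print(f"training footprint: {train_tb:.0f} TB")    # 384 TB

# ~384 TB vastly exceeds the 44 GB of on-chip SRAM, but fits comfortably
# in the largest 1.2 PB (1,200 TB) external-memory configuration.
```

Under these assumptions, only the top external-memory tier is large enough to hold the full training state of a 24-trillion-parameter model.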

Talking about the chip itself, the Cerebras WSE-3 has a die size of 46,225mm², 57x larger than the NVIDIA H100, which measures 826mm². Both chips are based on the TSMC 5nm process node. The H100 is regarded as one of the best AI chips on the market with its 16,896 FP32 CUDA cores and 528 Tensor Cores, but it is dwarfed by the WSE-3, which offers an insane 900,000 AI-optimized cores per chip, a roughly 53x increase.

The WSE-3 also has big performance numbers to back it up: 21 petabytes per second of memory bandwidth (7,000x more than the H100) and 214 petabits per second of fabric bandwidth (3,715x more than the H100). The chip also incorporates 44 GB of on-chip memory, 880x more than the H100.
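The multiples quoted above can be sanity-checked with simple division. The H100 baselines used here (16,896 CUDA cores, ~50 MB of on-chip L2 cache, ~3 TB/s of HBM bandwidth) are assumptions drawn from public specifications, not figures stated in this article:

```python
# Reproducing the comparison multiples quoted in the article.
# Assumed H100 baselines (public specs, not from this article):
# 16,896 CUDA cores, ~50 MB L2 cache, ~3 TB/s HBM memory bandwidth.
wse3_cores, h100_cores = 900_000, 16_896
wse3_sram, h100_l2 = 44e9, 50e6   # bytes of on-chip memory
wse3_bw, h100_bw = 21e15, 3e12    # bytes per second

print(f"cores:     {wse3_cores / h100_cores:.0f}x")  # ~53x
print(f"on-chip:   {wse3_sram / h100_l2:.0f}x")      # 880x
print(f"bandwidth: {wse3_bw / h100_bw:.0f}x")        # 7000x
```

The on-chip-memory and bandwidth multiples line up exactly with the article's figures under these baselines; the core-count ratio works out to roughly 53x.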

Compared to the original WSE, the WSE-3 chip offers 2.25x the cores (900K vs 400K) and 2.4x the SRAM (44 GB vs 18 GB), plus much higher interconnect speeds, all within the same package size. The WSE-3 also packs 54% more transistors than its direct predecessor, the WSE-2 (4 trillion vs 2.6 trillion).

Read more on wccftech.com