
Elon Musk Begins Training xAI With 100,000 Liquid-Cooled NVIDIA H100 GPUs, The Most Powerful AI Training Cluster On The Planet

X chairman Elon Musk announces the commencement of Grok 3 training at Memphis using current-gen NVIDIA H100 GPUs.

Training on 'the most powerful AI cluster in the world' begins with 100,000 NVIDIA H100 GPUs

Elon Musk's AI venture xAI has officially begun training on NVIDIA's most powerful data center GPU, the H100. Musk proudly announced this on X, calling the system 'the most powerful AI training cluster in the world!'. In the post, he said the supercluster runs 100,000 liquid-cooled H100 GPUs on a single RDMA fabric, and congratulated the xAI, X, and NVIDIA teams on starting training at Memphis.


Training started at 4:20 am Memphis local time, and in a follow-up post, Musk claimed the world's most powerful AI will be ready by December this year. According to reports, Grok 2 will be released next month, with Grok 3 following in December. This comes around two weeks after xAI and Oracle ended their $10 billion server deal.

xAI had been renting NVIDIA's AI chips from Oracle but decided to build its own cluster, ending a deal that was expected to run for several more years. The project now aims to build a supercomputer superior to what Oracle offered, using one hundred thousand high-performance H100 GPUs. Each H100 costs roughly $30,000; Grok 2 reportedly trained on 20,000 of them, and Grok 3 requires five times as many.
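The numbers above make for a quick back-of-envelope estimate. A minimal sketch, assuming the article's reported figures (20,000 GPUs for Grok 2, a 5x increase for Grok 3, and a rough $30,000 unit price, none of which are confirmed by NVIDIA or xAI):

```python
# Back-of-envelope hardware cost estimate using the article's reported figures.
# Unit price and GPU counts are rough estimates, not official numbers.
h100_unit_price = 30_000        # approximate cost per H100 GPU (USD)
grok2_gpus = 20_000             # GPUs reportedly used to train Grok 2
grok3_gpus = 5 * grok2_gpus     # Grok 3 reportedly needs five times as many

grok3_hardware_cost = grok3_gpus * h100_unit_price
print(f"Grok 3 cluster size: {grok3_gpus:,} GPUs")
print(f"Estimated GPU hardware cost: ${grok3_hardware_cost / 1e9:.1f} billion")
```

At these assumed prices, the 100,000-GPU cluster represents around $3 billion in GPUs alone, before cooling, networking, and facility costs.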

This decision comes as a surprise, since NVIDIA is about to ship its newer H200 GPUs in Q3. The H200 entered mass production in Q2 and builds on the same Hopper architecture with an improved memory configuration, delivering up to 45% faster response times for generative AI outputs. Following the H200, NVIDIA is expected to launch its Blackwell-based B100 and B200 GPUs toward the end of the year.

Read more on wccftech.com