Welcome to WarBulletin - your new best friend in the world of gaming. We're all about bringing you the hottest updates and juicy insights from across the gaming universe. Are you into epic RPG adventures or fast-paced eSports? We've got you covered with the latest scoop on everything from next-level PC gaming rigs to the coolest game releases. But hey, we're more than just news! Ever wondered what goes on behind the scenes of your favorite games? We're talking exclusive interviews with the brains behind the games, fresh off-the-press photos and videos straight from gaming conventions, and, of course, breaking news that you just can't miss. We know you love gaming 24/7, and that's why we're here round the clock, updating you on all things gaming. Whether it's the lowdown on a new patch or the buzz about the next big gaming celeb, we're on it.

Contacts

  • Owner: SNOWLAND s.r.o.
  • Registration certificate 06691200
  • Na okraji 381/41, Veleslavín, 162 00 Praha 6
  • Czech Republic

Nvidia's Jen-Hsun Huang reflects on how AI already creates pixels and entire frames, before saying that 'games will be generated with AI'

In a Q&A session at this year's Computex event, Nvidia CEO Jen-Hsun Huang was asked whether AI will be used to generate games' graphics directly, assisting the traditional rasterization method. After pointing out that neural graphics are already in use, through the likes of frame generation, Huang stated that AI would go on to infuse games and PCs, creating high-resolution objects, textures, and characters.

"We already use the idea of neural graphics," said Jen-Hsun Huang in the Q&A session. "We can achieve very high-quality ray tracing, path tracing 100% of the time, and still achieve excellent performance. We also generate frames between frames, not interpolation but frame generation. And so not only that, we generate pixels, we also generate frames."

Most PC gamers, especially those with an Nvidia graphics card in their rig, will know that AI is currently leveraged in games via DLSS, which began as just an upscaling system but now also comprises a frame generation system and a neural denoiser for cleaning up ray-traced images. In the case of DLSS Super Resolution, the upscaling itself isn't done by AI; it's handled by a normal shader routine, but the resulting image is then scanned and corrected by a neural network.
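As a rough illustration of that split, here's a minimal PyTorch-style sketch of the general pattern described above: upscale with a conventional routine first, then let a neural network scan and correct the result. The tiny CorrectionNet and the upscale_then_correct helper are invented purely for illustration and are not Nvidia's DLSS code; a real correction network would be far larger and trained on high-quality reference frames.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrectionNet(nn.Module):
    """Hypothetical stand-in for a trained image-correction network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual and add it back to the upscaled image.
        return x + self.body(x)

def upscale_then_correct(low_res, net, scale=2):
    # Step 1: conventional (non-AI) upscale, standing in for a shader pass.
    upscaled = F.interpolate(low_res, scale_factor=scale, mode="bilinear",
                             align_corners=False)
    # Step 2: the neural network scans and corrects the upscaled image.
    return net(upscaled)

frame = torch.rand(1, 3, 360, 640)        # 640x360 rendered frame (N, C, H, W)
corrected = upscale_then_correct(frame, CorrectionNet())
print(corrected.shape)                     # torch.Size([1, 3, 720, 1280])
```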

In the case of DLSS Frame Generation, two previously rendered images, along with some other information from the rendering pipeline, get fed into a different neural network. This one has been trained on how motion affects images, and the result is an entirely new frame, which gets inserted between the other two.
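Below is a compact sketch of that flow with a placeholder network and made-up tensor shapes; it is not Nvidia's actual frame-generation model, which also draws on optical flow and other engine data. Two rendered frames plus motion information go in, and a brand-new in-between frame comes out.

```python
import torch
import torch.nn as nn

class FrameGenNet(nn.Module):
    """Hypothetical network that has learned how motion affects images."""
    def __init__(self):
        super().__init__()
        # Inputs: two RGB frames (3+3) plus 2-channel motion vectors = 8 channels.
        self.body = nn.Sequential(
            nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, prev_frame, next_frame, motion):
        # Stack the two rendered frames and the motion data, then predict
        # an entirely new frame to slot between them.
        x = torch.cat([prev_frame, next_frame, motion], dim=1)
        return self.body(x)

prev_frame = torch.rand(1, 3, 540, 960)    # 960x540 frames (N, C, H, W)
next_frame = torch.rand(1, 3, 540, 960)
motion     = torch.rand(1, 2, 540, 960)    # per-pixel motion vectors
new_frame  = FrameGenNet()(prev_frame, next_frame, motion)
print(new_frame.shape)                      # torch.Size([1, 3, 540, 960])
```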

It doesn't have to be a completed frame that gets upscaled or generated this way: a texture is also just a 2D grid of pixels. In theory, any data array could be processed by AI and improved to a higher resolution. And that's exactly what Huang was referring to: taking low-detail meshes and textures and using AI to create better versions of them.
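To make the "any data array" point concrete, the same pattern could in principle be pointed at a texture rather than a finished frame. The snippet below is only a sketch with made-up sizes: it enlarges a low-detail texture with plain interpolation, which is the point where a trained network would take over to add detail.

```python
import torch
import torch.nn.functional as F

# A texture is just a 2D grid of texels, so it can flow through the same kind
# of pipeline as a rendered frame: enlarge first, then refine with a network.
texture = torch.rand(1, 4, 256, 256)             # hypothetical low-detail RGBA texture
enlarged = F.interpolate(texture, scale_factor=4, mode="bicubic",
                         align_corners=False)
print(enlarged.shape)                             # torch.Size([1, 4, 1024, 1024])
# In Huang's scenario, a trained network (as in the earlier sketches) would
# then add detail here instead of stopping at plain interpolation.
```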

Huang agreed, saying "The future will even generate textures and generate…"

Read more on pcgamer.com