Nvidia GeForce RTX 5000: What Makes Blackwell’s Technology Stand Out?

TECH NEWS – Nvidia’s next-generation architecture is about to be unveiled, but we’re already hearing rumors.

Aside from the fact that the new top-of-the-line GeForce RTX 5090 could feature 32GB of VRAM, other information has surfaced around the new architecture, Blackwell.

It could be accompanied by a major AI-based update that could see in-game graphics take a big step towards being rendered entirely by AI. In other words, games could be rendered not by traditional 3D pipelines but by neural networks! This was announced by Inno3D, one of Nvidia's board partners, on its website. Although the post doesn't directly name the new Nvidia GPUs, only "new graphics cards to be shown at CES 2025," it goes on to mention DLSS technology and ray tracing cores, so it's clear what the company is talking about.

What does Inno3D mention? Improved performance in AI-enabled tasks and better integration of AI into game and content creation workflows. Neural rendering capabilities could revolutionize graphics processing and rendering. Beyond gaming, improved AI-powered upscaling will also benefit content creators by offering improved quality when upscaling video content. Generative AI acceleration has been optimized to speed up generative AI tasks, in line with the growing trend of AI content creation. Finally, there’s enhanced ray tracing: improved RT cores deliver more realistic lighting, shadows, and reflections in games.

Neural rendering is the interesting one. Broadly speaking, the term could cover anything that leans on Nvidia's AI capabilities.

This includes DLSS, frame generation, and ray reconstruction technology. But when Nvidia has used the term in the past, it has usually been a forward-looking effort, a technology still in development. Speaking about neural rendering last year, Bryan Catanzaro, Nvidia's vice president of applied deep learning research, explained that it was already possible to create graphics rendered entirely by a neural network in real time, though not at a very high quality at the time. The company now claims that with both DLSS and Frame Generation enabled, only one in eight pixels in a game is rendered by the traditional GPU 3D pipeline, with the rest created using various AI or neural techniques.
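The "one in eight pixels" figure falls out of simple arithmetic, assuming it refers to DLSS in Performance mode (which renders at a quarter of the output resolution) combined with Frame Generation (which AI-generates every other frame). A rough back-of-the-envelope sketch of that assumption:

```python
# Sketch of Nvidia's "1 in 8 pixels" claim. Assumptions (not confirmed by
# the source): DLSS Performance mode renders 1/4 of the output pixels per
# frame, and Frame Generation means only every other frame is rendered
# traditionally.

dlss_render_fraction = 1 / 4  # pixels the 3D pipeline renders per traditional frame
frame_gen_fraction = 1 / 2    # fraction of frames rendered traditionally

traditional_fraction = dlss_render_fraction * frame_gen_fraction
print(traditional_fraction)                            # 0.125
print(f"1 in {int(1 / traditional_fraction)} pixels")  # 1 in 8 pixels
```

Under those assumptions, seven of every eight displayed pixels come from upscaling or frame generation rather than the traditional pipeline.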

The implication is that the ultimate goal of neural rendering is to have every pixel rendered by AI, so that the game engine can provide data about what objects are in the scene, perhaps how they move, as well as other environmental cues and player input, and the AI does the rest. But can Blackwell do this? We will soon find out.

Source: PCGamer, HardwareLuxx

theGeek is here since 2019.
