For many computers around the world, the arrival of DLSS and other upscaling technologies is a cure for the inevitable obsolescence of hardware. With less raw power, you can achieve impressive graphics, as long as the technology keeps advancing. And in this field, Nvidia leads the way with DLSS.
It’s likely that a future version of DLSS will include full neural rendering, as explained by Bryan Catanzaro, Vice President of Applied Deep Learning Research at Nvidia.
In a roundtable discussion organized by Digital Foundry, several industry experts discussed the future of AI in the gaming business.
During the conversation, Nvidia’s Catanzaro raised some eyebrows with his candid predictions about key features of a hypothetical “DLSS 10.”
A technology that has been evolving since the RTX 20 series in 2018
We have seen significant advancements in Nvidia’s DLSS technology over the years. Initially launched with the RTX 20 series GPUs, many questioned the true value of technologies like the Tensor cores included in gaming GPUs.
The early ray tracing games and the first version of DLSS were of questionable merit. However, DLSS 2 improved the technology and made it more useful, leading to wider adoption and imitation, first through FSR2 and later with XeSS.
DLSS 3 debuted with the RTX 40 series graphics cards, introducing Frame Generation technology. Combined with upscaling, this means a game may need to fully render only 1/8 (12.5%) of the pixels it displays: Performance-mode upscaling renders a quarter of the pixels, and frame generation synthesizes every other frame.
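The arithmetic behind that 1/8 figure can be sketched in a few lines. This is illustrative math only, not an Nvidia formula; the function name and parameters are our own.

```python
def rendered_fraction(upscale_factor: float, generated_per_rendered: int) -> float:
    """Fraction of displayed pixels that are traditionally rendered.

    upscale_factor: total pixel-count ratio of output to internal
        resolution (e.g. 4 for 1080p -> 4K, as in DLSS Performance mode).
    generated_per_rendered: extra frames synthesized per rendered frame
        (1 for DLSS 3 Frame Generation, which generates every other frame).
    """
    frames_shown = 1 + generated_per_rendered
    return 1 / (upscale_factor * frames_shown)

# 4x upscaling (1/4 of the pixels) x frame generation (1/2 of the frames)
print(rendered_fraction(4, 1))  # 0.125, i.e. 12.5%
```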
More recently, DLSS 3.5 offered improved noise removal algorithms for ray tracing games with the introduction of ray reconstruction technology.
This timeline raises questions about the path Nvidia might take in future versions of DLSS. And, of course, “Deep Learning Super Sampling” no longer truly applies, as the last two additions have focused on other aspects of rendering.
Where is Nvidia’s DLSS headed?
Digital Foundry posed this question to the group: “Where do you see DLSS going in the future? What other problem areas could machine learning address adequately?”
Bryan Catanzaro immediately brought up the topic of full neural rendering. This idea is not as far-fetched as it may seem. Catanzaro reminded the panel that at the NeurIPS conference in 2018, Nvidia researchers demonstrated an open-world environment rendered in real time by a neural network.
During that demo, the UE4 game engine supplied the scene data (which objects were present and where they were placed), while neural rendering produced all the on-screen graphics.
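The division of labor in that demo can be sketched as a tiny loop: the engine emits a per-pixel semantic description of the scene, and a network turns it into the final image. Everything below is a hypothetical stand-in (the names, the label map, the palette lookup); the real 2018 demo paired UE4 with a trained image-to-image model.

```python
import numpy as np

H, W = 4, 8  # tiny "frame" for illustration

def engine_scene_buffer() -> np.ndarray:
    """Stand-in for the game engine's output: a per-pixel semantic
    label map (e.g. 0 = road, 1 = building, 2 = sky)."""
    return np.random.randint(0, 3, size=(H, W))

def neural_render(labels: np.ndarray) -> np.ndarray:
    """Stand-in for the neural renderer: map each label to an RGB
    value. A real system would run a trained generative network here."""
    palette = np.array([[90, 90, 90],      # road -> grey
                        [180, 120, 60],    # building -> brown
                        [120, 180, 255]])  # sky -> blue
    return palette[labels]

frame = neural_render(engine_scene_buffer())
print(frame.shape)  # (4, 8, 3): an RGB image built from scene labels
```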
The graphics in 2018 were quite basic: "Nothing that came close to Cyberpunk," Catanzaro admitted. However, AI image generation has advanced enormously since then; just look at the leaps we've seen in the past year alone.
Catanzaro suggested that the 2018 demo was a glimpse into a significant area of generative AI growth in gaming. “DLSS 10 (in a very distant future) is going to be an entirely neural rendering system,” he speculated. The result will be games “more immersive and beautiful” than most can imagine today.
Between now and DLSS 10, Catanzaro believes we will see a gradual, developer-controllable, and coherent process. Developers already have experience with tools that allow them to steer their vision using traditional game engines and 2D/3D rendering technology.
They need similarly fine-grained tools for generative AI, the Nvidia Vice President noted. The future of graphics in video games, it seems, isn't just about more power but also smarter technology.