TECH NEWS – Nvidia's latest technology can turn 2D photos into 3D scenes almost instantly.
Using artificial intelligence, specifically neural radiance fields (NeRF), the researchers trained a network to predict the colour of light radiating in any direction from any point in a scene, and thus to reconstruct the scene from a set of 2D images. Nvidia says this is the fastest such solution to date: a 1080p view can be rendered in milliseconds, which could be more than a thousand times faster than previous NeRF approaches.
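To make the idea concrete, a NeRF produces a pixel by sampling colour and density along a camera ray and compositing them with the standard volume-rendering equation. The sketch below is a minimal NumPy illustration of that compositing step, not Nvidia's GPU implementation; the array shapes and names are assumptions for the example.

```python
import numpy as np

def render_ray(colors, densities, deltas):
    """Composite per-sample colors along one ray (NeRF volume rendering).

    colors:    (N, 3) RGB predicted by the network at each sample point
    densities: (N,)   volume density (sigma) at each sample point
    deltas:    (N,)   distance between consecutive samples along the ray
    """
    # opacity contributed by each sample
    alpha = 1.0 - np.exp(-densities * deltas)
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    # weighted sum of sample colors gives the final pixel RGB
    return (weights[:, None] * colors).sum(axis=0)
```

A fully opaque sample returns its own colour, while zero density everywhere yields black, matching the intuition that the network "densely captures the way light radiates" through the scene.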
Thomas Müller, one of the researchers behind the work, gave a presentation at GDC entitled Instant Neural Graphics Primitives, and attributed the speedup to advances in three areas: a task-specific GPU implementation of the rendering and training algorithm, which uses the GPU's fine-grained control flow capabilities to run much faster than dense tensor operations; a fully fused implementation of a small neural network, faster than general-purpose matrix multiplication routines; and a technique Nvidia developed called multiresolution hash grid encoding, which is task-agnostic and offers a better speed/quality trade-off than prior work.
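The multiresolution hash grid encoding works by hashing the corners of grid cells at several resolutions into small trainable feature tables and interpolating between them. The following is a simplified 2D sketch of the idea; the random tables stand in for learned parameters, and the resolutions and sizes are illustrative assumptions rather than Nvidia's exact configuration.

```python
import numpy as np

# per-dimension primes used by the XOR-based spatial hash
PRIMES = (1, 2654435761)

def hash_index(ix, iy, table_size):
    """Hash an integer grid corner into a feature-table slot."""
    return ((ix * PRIMES[0]) ^ (iy * PRIMES[1])) % table_size

def hash_encode(x, y, tables, base_res=16, growth=2.0):
    """Encode a 2D point (x, y) in [0, 1)^2 with a multiresolution hash grid.

    tables: (levels, table_size, features) array of (trainable) features.
    At each level the point's cell corners are hashed into the table and
    bilinearly interpolated; per-level features are concatenated.
    """
    levels, table_size, features = tables.shape
    out = []
    for lvl in range(levels):
        res = base_res * growth ** lvl           # finer grid each level
        px, py = x * res, y * res
        ix, iy = int(px), int(py)                # cell containing the point
        fx, fy = px - ix, py - iy                # position inside the cell
        feat = np.zeros(features)
        for dx in (0, 1):                        # bilinear blend of 4 corners
            for dy in (0, 1):
                w = (fx if dx else 1 - fx) * (fy if dy else 1 - fy)
                feat += w * tables[lvl, hash_index(ix + dx, iy + dy, table_size)]
        out.append(feat)
    return np.concatenate(out)                   # (levels * features,)
```

Because the encoding does most of the representational work, the network that follows it can stay tiny, which is what makes the fully fused implementation pay off.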
The Instant NeRF model has been implemented with the CUDA toolkit and the Tiny CUDA neural network library, and you can even look at the code here. According to Nvidia, the neural network is so lightweight that it can run on a single GPU, especially one with Tensor Cores. David Luebke, Nvidia’s vice president of graphics research, said: “If traditional 3D representations like polygonal meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or within a scene. In that sense, Instant NeRF could be as important to 3D as digital cameras and JPEG compression have been to 2D photography — vastly increasing the speed, ease and reach of 3D capture and sharing.”
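A quick back-of-the-envelope count shows why the network is light enough for a single GPU. The widths and depths below are illustrative assumptions, not Nvidia's published configuration, but a small MLP of this shape holds only a few thousand weights, a few kilobytes in half precision, so a fully fused kernel can keep them in fast on-chip memory for the whole forward pass instead of streaming them from GPU DRAM between layers.

```python
def mlp_params(n_in=32, width=64, hidden_layers=2, n_out=4):
    """Count weight parameters of a small fully connected network
    (biases omitted for simplicity, as in a rough estimate)."""
    layers = [n_in] + [width] * hidden_layers + [n_out]
    # one weight matrix between each pair of consecutive layers
    return sum(a * b for a, b in zip(layers, layers[1:]))
```

With a 32-wide input encoding, two 64-wide hidden layers and a 4-channel output, this comes to 6,400 weights, which is roughly 12.5 KB in fp16.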
Instant NeRF can quickly digitise real environments and people, and the digitised results can then be used to train self-driving cars or to teach robots the shape and size of real objects.