
Nvidia Instant NeRF: A Tool that Turns 2D Snapshots into a 3D-Rendered Scene

Nvidia’s AI model is pretty impressive: a tool that quickly turns a collection of 2D snapshots into a 3D-rendered scene. The tool is called Nvidia Instant NeRF, referring to “neural radiance fields”.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The Nvidia research team has developed an approach that accomplishes this task almost instantly, making it one of the first models of its kind to combine ultra-fast neural network training with rapid rendering.

NeRFs use neural networks to represent and render realistic 3D scenes based on an input collection of 2D images.
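To make the idea concrete, here is a minimal sketch (my own illustrative code, not Nvidia's implementation) of the volume-rendering step at the heart of a NeRF. In a real NeRF, a trained neural network predicts a density and a color for each sample point along a camera ray; this toy example assumes those predictions are given and shows only how they are composited into a final pixel color.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Volume-render one camera ray from samples taken along it.

    densities: (N,) non-negative volume density at each sample point
    colors:    (N, 3) RGB predicted at each sample point
    deltas:    (N,) distance between consecutive samples

    Uses the standard NeRF compositing weights
    w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the
    transmittance (how much light survives to reach sample i).
    """
    alphas = 1.0 - np.exp(-densities * deltas)  # opacity of each segment
    # Transmittance: product of (1 - alpha) over all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: an effectively opaque red "surface" midway along the ray
# should dominate the rendered color.
densities = np.array([0.0, 0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.full(4, 0.25)
rgb = composite_ray(densities, colors, deltas)  # close to pure red
```

Training a NeRF amounts to adjusting the network so that rays rendered this way reproduce the input photographs; Instant NeRF's contribution is making that optimization fast enough to run in seconds rather than hours.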

Quantum computing has a hype problem

Quantum computing startups are all the rage, but it’s unclear if they’ll be able to produce anything of use in the near future.


As a buzzword, quantum computing probably ranks only below AI in terms of hype. Large tech companies such as Alphabet, Amazon, and Microsoft now have substantial research and development efforts in quantum computing. A host of startups have sprung up as well, some boasting staggering valuations. IonQ, for example, was valued at $2 billion when it went public in October through a special-purpose acquisition company. Much of this commercial activity has happened with baffling speed over the past three years.

I am as pro-quantum-computing as one can be: I’ve published more than 100 technical papers on the subject, and many of my PhD students and postdoctoral fellows are now well-known quantum computing practitioners all over the world. But I’m disturbed by some of the quantum computing hype I see these days, particularly when it comes to claims about how it will be commercialized.

Japan Wants to Make Half Its Cargo Ships Autonomous by 2040

On top of the environmental concerns, Japan has an added motivation for this push towards automation: its aging population and concurrent low birth rates mean its workforce is rapidly shrinking, and the implications for the country’s economy aren’t good.

Thus it behooves the Japanese to automate as many job functions as they can (and the rest of the world likely won’t be far behind, though they won’t have quite the same impetus). According to the Nippon Foundation, more than half of Japanese ship crew members are over the age of 50.

In partnership with Mitsui OSK Lines Ltd., the foundation recently completed two tests of autonomous ships. The first involved the Mikage, a 313-foot container ship, which sailed 161 nautical miles from Tsuruga Port, north of Kyoto, to Sakai Port near Osaka. Upon reaching its destination port, the ship was even able to steer itself into its designated bay, with drones dropping its mooring line.

The biggest problem in AI? Machines have no common sense

What most people define as common sense is actually common learning, and much of that is biased.

The biggest short-term problem in AI, as mentioned in the video clip, is an over-emphasis on dataset size, regardless of accuracy, representation, or accountability.

The biggest long-term problem in AI: instead of trying to replace us, it should be built to complement us. A merge is neither necessary nor advisable.

If we think about it, building a machine to think like a human is like buying a racehorse and insisting that it function like a camel. It is doomed to fail, because there are only two scenarios: either humans are replaced or they are not. If we are, then we have failed. If we are not, then the AI development has failed.

Time for a change of direction.

I think of a super-intelligent learning machine as a data addict, and then it’s easy to see how we fit in.


1000X More Efficient Neural Networks: Building An Artificial Brain With 86 Billion Physical (But Not Biological) Neurons

Which, to me, sounds both unimaginably complex and sublimely simple.

Sort of like, perhaps, our brains.

Building chips with analogs of biological neurons, dendrites, and brain-like neural networks is also key to the massive efficiency gains Rain Neuromorphics is claiming: 1,000 times more efficient than existing digital chips from companies like Nvidia.