
NVFi tackles the intricate challenge of comprehending and predicting the dynamics within 3D scenes evolving over time, a task critical for applications in augmented reality, gaming, and cinematography. While humans effortlessly grasp the physics and geometry of such scenes, existing computational models struggle to explicitly learn these properties from multi-view videos. The core issue lies in the inability of prevailing methods, including neural radiance fields and their derivatives, to extract and predict future motions based on learned physical rules. NVFi ambitiously aims to bridge this gap by incorporating disentangled velocity fields derived purely from multi-view video frames, a feat yet unexplored in prior frameworks.

The dynamic nature of 3D scenes poses a profound computational challenge. While recent advancements in neural radiance fields showcased exceptional abilities in interpolating views within observed time frames, they fall short in learning explicit physical characteristics such as object velocities. This limitation impedes their capability to foresee future motion patterns accurately. Current studies integrating physics into neural representations exhibit promise in reconstructing scene geometry, appearance, velocity, and viscosity fields. However, these learned physical properties are often intertwined with specific scene elements or necessitate supplementary foreground segmentation masks, limiting their transferability across scenes. NVFi’s pioneering ambition is to disentangle and comprehend the velocity fields within entire 3D scenes, fostering predictive capabilities extending beyond training observations.

Researchers from The Hong Kong Polytechnic University introduce NVFi, a comprehensive framework encompassing three fundamental components. First, a keyframe dynamic radiance field facilitates the learning of time-dependent volume density and appearance for every point in 3D space. Second, an interframe velocity field captures time-dependent 3D velocities for each point. Finally, a joint optimization strategy involving both keyframe and interframe elements, augmented by physics-informed constraints, orchestrates the training process. This framework offers flexibility in adopting existing time-dependent NeRF architectures for dynamic radiance field modeling while employing relatively simple neural networks, such as MLPs, for the velocity field. The core innovation lies in the third component, where the joint optimization strategy and specific loss functions enable precise learning of disentangled velocity fields without additional object-specific information or masks.
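The second and third components can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's implementation: PyTorch, the MLP sizes, the transport-equation form of the physics-informed constraint, and a toy density function standing in for the keyframe radiance field's density head.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Hypothetical MLP mapping a 4D point (x, y, z, t) to a 3D velocity,
    in the spirit of the interframe velocity field described above."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, t):
        return self.net(torch.cat([xyz, t], dim=-1))

def transport_loss(density_fn, vel_field, xyz, t):
    """One plausible physics-informed constraint (assumed form): penalize
    violation of the transport equation d(sigma)/dt + v . grad(sigma) = 0,
    tying the density field's temporal change to advection by the velocity."""
    xyz = xyz.requires_grad_(True)
    t = t.requires_grad_(True)
    sigma = density_fn(xyz, t)
    d_xyz, d_t = torch.autograd.grad(sigma.sum(), (xyz, t), create_graph=True)
    v = vel_field(xyz, t)
    residual = d_t.squeeze(-1) + (v * d_xyz).sum(-1)
    return (residual ** 2).mean()

# Toy density standing in for the radiance field's density head: a pattern
# translating over time, so any velocity with components summing to 1
# satisfies the transport equation exactly.
def toy_density(xyz, t):
    return (xyz.sum(-1, keepdim=True) - t) ** 2

torch.manual_seed(0)
vf = VelocityField()
xyz = torch.rand(8, 3)
t = torch.rand(8, 1)
loss = transport_loss(toy_density, vf, xyz, t)
print("physics residual loss:", loss.item())
```

In training, a loss like this would be minimized jointly with the keyframe radiance field's rendering loss, which is how the velocity field can be supervised from video frames alone.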

Growing old may come with more aches and pains attached, but new research suggests there’s a bigger picture to look at: by reaching our dotage, we might actually be helping the evolution of our species.

Once assumed to be an inevitable consequence of living in a rough-and-tumble world, aging is now considered something of a mystery. Some species barely age at all, for example. One of the big questions is whether aging is simply a by-product of biology, or something that comes with an evolutionary advantage.

The new research is based on a computer model developed by a team from the HUN-REN Centre for Ecological Research in Hungary, which suggests old age can be positively selected for in the same way as other traits.

In this episode of the Smarter Not Harder Podcast, our guest Alex Rosenberg joins our host Boomer Anderson to give one-cent solutions to life’s $64,000 questions, including:

What are the definitions of scientism and naturalism?
Is there such a thing as free will, and if so, what implications does it have for the search for purpose in life?
What is nice nihilism?

Alex Rosenberg is an American philosopher and novelist. He is the R. Taylor Cole Professor of Philosophy at Duke University and is well known for contributions to the philosophy of biology, as well as the philosophy of economics. He has also written several books.

Scientists have fused human brain tissue to a computer chip, creating a mini cyborg in a petri dish that can perform math equations and recognize speech.

Dubbed Brainoware, the system consists of brain cells artificially grown from human stem cells, which have been fostered to develop into a brain-like tissue. This mini-brain organoid is then hooked up to traditional hardware where it acts as a physical reservoir that can capture and remember the information it receives from the computer inputs.
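The "physical reservoir" role described here follows the standard reservoir-computing recipe: a fixed, richly dynamic system transforms input streams into high-dimensional states, and only a simple linear readout is trained. A minimal simulated echo-state-network sketch of that recipe (an illustrative stand-in; Brainoware uses living tissue as the reservoir, not this simulated one, and all sizes and scalings below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random reservoir: input weights and recurrent weights, with the
# recurrent matrix scaled to spectral radius < 1 for stable dynamics.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from its history.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
X = run_reservoir(u[:-1])
y = u[1:]

# Train only the linear readout, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("mean squared error:", np.mean((pred - y) ** 2))
```

The appeal for Brainoware is that the expensive part of this scheme, the reservoir itself, is never trained; only the cheap linear readout is, which is why a fixed biological tissue can plausibly fill that role.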

The researchers wanted to explore the idea of exploiting the efficiency of the human brain’s architecture to supercharge computational hardware. The rise of artificial intelligence (AI) has massively increased the demand for computing power, but that demand is constrained by the energy efficiency and performance of standard silicon chips.

“Now, it is possible to teleport information so that it never physically travels across the connection — a ‘Star Trek’ technology made real,” the researchers said.


Scientists have been making discoveries in the quantum computing realm. In another leap, researchers successfully deployed the principles of quantum physics and transported information in the form of light patterns without physically moving the image itself.

According to a statement by the researchers, the experiment demonstrated the quantum transport of the highest-dimensional information to date, using a teleportation-inspired configuration so that the information does not physically travel between the two communicating parties.

Entangling photons and a nonlinear optical detector

“These kinds of autonomous robotic fleets have a great deal of potential to undertake a wide range of dangerous, dirty, dull, distant, and dear jobs,” say researchers.


Scientists from multiple universities have created Symbiotic Multi-Robot Fleet (SMuRF), a system that allows diverse robots to collaborate, performing challenging tasks unsafe for humans at nuclear sites.