
In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text.

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, an annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

Summary: A novel aircraft design pioneered by startup Natilus could dramatically alter the cargo transportation industry, offering larger capacities, reduced emissions, and futuristic remote control options.

In the field of aviation technology, a groundbreaking blended-wing robotic aircraft presents a future where efficient and sustainable cargo planes are the norm. The company pioneering this effort, Natilus, has built a model that harmonizes ecological concerns with the need for faster and cost-effective transportation.

The unconventional plane differs from traditional airliners with its distinct diamond-shaped body, which creates a more spacious cargo hold. This design enables up to 60 percent more cargo to be carried compared to the current models in use. Furthermore, the company claims it cuts carbon emissions by half, a crucial development for an industry under increasing pressure to become more environmentally friendly.

A pair of roboticists at the Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, in Germany, has found that it is possible to give robots some degree of proprioception using machine-learning techniques. In their study reported in the journal Science Robotics, Fernando Díaz Ledezma and Sami Haddadin developed a new machine-learning approach to allow a robot to learn the specifics of its body.

Giving robots the ability to move around in the real world involves fitting them with technology such as cameras and other sensors; data from these devices is then processed and used to direct the legs and/or feet to carry out appropriate actions. This is vastly different from the way animals, including humans, get the job done.

With animals, the brain is aware of its body state—it knows where the hands and legs are, how they work and how they can be used to move around or interact with the environment. Such knowledge is known as proprioception. In this new effort, the researchers conferred similar abilities on robots using machine-learning techniques.
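As a toy illustration of a robot learning its own body from data, consider a planar two-link arm that recovers its unknown link lengths purely from self-observed joint angles and end-effector positions. Everything below (the two-link model, the least-squares fit, all names) is a hypothetical sketch under simplifying assumptions, not the MIRMI method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth body: a 2-link planar arm. The true link lengths are
# what the "robot" must discover from its own sensory data.
L1_true, L2_true = 1.0, 0.7

def forward_kinematics(q):
    """Joint angles q: (N, 2) -> end-effector positions: (N, 2)."""
    x = L1_true * np.cos(q[:, 0]) + L2_true * np.cos(q[:, 0] + q[:, 1])
    y = L1_true * np.sin(q[:, 0]) + L2_true * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Collect "proprioceptive" samples: commanded joint angles plus the
# observed end-effector positions they produce.
q = rng.uniform(-np.pi, np.pi, (500, 2))
p = forward_kinematics(q)

# The x-coordinate is linear in the unknown link lengths, so the
# robot can recover its own geometry by least squares on trig features.
A = np.stack([np.cos(q[:, 0]), np.cos(q[:, 0] + q[:, 1])], axis=1)
lengths, *_ = np.linalg.lstsq(A, p[:, 0], rcond=None)
print(np.allclose(lengths, [L1_true, L2_true], atol=1e-6))  # True
```

The point of the sketch is the direction of inference: the model of the body is estimated from the robot's own observations rather than programmed in by hand.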

NVFi tackles the intricate challenge of comprehending and predicting the dynamics within 3D scenes evolving over time, a task critical for applications in augmented reality, gaming, and cinematography. While humans effortlessly grasp the physics and geometry of such scenes, existing computational models struggle to explicitly learn these properties from multi-view videos. The core issue lies in the inability of prevailing methods, including neural radiance fields and their derivatives, to extract and predict future motions based on learned physical rules. NVFi ambitiously aims to bridge this gap by incorporating disentangled velocity fields derived purely from multi-view video frames, a feat yet unexplored in prior frameworks.

The dynamic nature of 3D scenes poses a profound computational challenge. While recent advancements in neural radiance fields showcased exceptional abilities in interpolating views within observed time frames, they fall short in learning explicit physical characteristics such as object velocities. This limitation impedes their capability to foresee future motion patterns accurately. Current studies integrating physics into neural representations exhibit promise in reconstructing scene geometry, appearance, velocity, and viscosity fields. However, these learned physical properties are often intertwined with specific scene elements or necessitate supplementary foreground segmentation masks, limiting their transferability across scenes. NVFi’s pioneering ambition is to disentangle and comprehend the velocity fields within entire 3D scenes, fostering predictive capabilities extending beyond training observations.

Researchers from The Hong Kong Polytechnic University introduce a comprehensive framework NVFi encompassing three fundamental components. First, a keyframe dynamic radiance field facilitates the learning of time-dependent volume density and appearance for every point in 3D space. Second, an interframe velocity field captures time-dependent 3D velocities for each point. Finally, a joint optimization strategy involving both keyframe and interframe elements, augmented by physics-informed constraints, orchestrates the training process. This framework offers flexibility in adopting existing time-dependent NeRF architectures for dynamic radiance field modeling while employing relatively simple neural networks, such as MLPs, for the velocity field. The core innovation lies in the third component, where the joint optimization strategy and specific loss functions enable precise learning of disentangled velocity fields without additional object-specific information or masks.
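At toy scale, an interframe velocity field of the kind the paper describes can be sketched as a small MLP over space-time points, trained against a physics-style consistency residual. All of the following (layer sizes, the finite-difference smoothness surrogate, function names) are illustrative assumptions, not NVFi's actual architecture or loss functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature stand-in for an interframe velocity field:
# a tiny MLP mapping a 4D space-time point (x, y, z, t) to a 3D velocity.
W1 = rng.normal(0, 0.1, (4, 32))
b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 3))
b2 = np.zeros(3)

def velocity_field(pts):
    """pts: (N, 4) array of (x, y, z, t) -> (N, 3) velocities."""
    h = np.tanh(pts @ W1 + b1)
    return h @ W2 + b2

def transport_residual(pts, eps=1e-3):
    """Finite-difference consistency check: a point advected by its own
    velocity for a small step eps should see a similar velocity at the
    new space-time location. Used here as a physics-style penalty."""
    v = velocity_field(pts)
    moved = pts.copy()
    moved[:, :3] += eps * v   # advect spatial coordinates
    moved[:, 3] += eps        # advance time
    return np.mean((velocity_field(moved) - v) ** 2)

pts = rng.uniform(-1, 1, (128, 4))
print(velocity_field(pts).shape)
print(transport_residual(pts) >= 0.0)
```

In a real pipeline a residual like this would be one term in the joint optimization alongside the radiance-field rendering loss; here it only shows the shape of a physics-informed constraint on a learned velocity field.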

Scientists have fused human brain tissue to a computer chip, creating a mini cyborg in a petri dish that can perform math equations and recognize speech.

Dubbed Brainoware, the system consists of brain cells artificially grown from human stem cells, which have been fostered to develop into a brain-like tissue. This mini-brain organoid is then hooked up to traditional hardware where it acts as a physical reservoir that can capture and remember the information it receives from the computer inputs.
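The organoid's role as a "physical reservoir" can be mimicked in silico with an echo state network: a fixed random recurrent layer stands in for the tissue, and only a linear readout is trained. The sizes, scaling, and toy task below are illustrative assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: the in-silico analogue of the organoid.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable

def run_reservoir(u):
    """u: (T, n_in) input sequence -> (T, n_res) reservoir states."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)[:, None]
states = run_reservoir(u[:-1])
target = u[1:, 0]

# Only the linear readout is trained (ridge regression); the reservoir
# itself, like the organoid, is never modified.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
print(np.mean((pred - target) ** 2) < 1e-2)
```

The appeal of reservoir computing, whether silicon or biological, is exactly this division of labor: the untrained reservoir supplies rich dynamics, and learning is reduced to a cheap linear fit.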

The researchers wanted to explore the idea of exploiting the efficiency of the human brain’s architecture to supercharge computational hardware. The rise of artificial intelligence (AI) has massively increased the demand for computing power, but it’s somewhat limited by the energy efficiency and performance of the standard silicon chips.

“These kinds of autonomous robotic fleets have a great deal of potential to undertake a wide range of dangerous, dirty, dull, distant, and dear jobs,” the researchers say.


Scientists from multiple universities have created the Symbiotic Multi-Robot Fleet (SMuRF), a system that allows diverse robots to collaborate on challenging tasks that are unsafe for humans at nuclear sites.

Tesla sparks innovation with wireless inductive home charging for EVs, signaling a bold leap toward a self-driving future.


December 2023 has been a rollercoaster for Tesla, with over two million vehicle recalls due to Autopilot issues. However, Tesla remains committed to advancing autonomous driving technology, and its latest move hints at a futuristic approach – wireless inductive home charging for electric vehicles (EVs).


Just four days after arriving in Earth’s orbit, China’s Shenlong space plane has been observed releasing six enigmatic “wingmen.”

Some of the objects released by the Shenlong (meaning “Divine Dragon”) robotic space plane appear to be emitting signals.