A UC San Diego team has uncovered a method to decipher neural networks’ learning process, using a statistical formula to clarify how features are learned, a breakthrough that promises more understandable and efficient AI systems. Credit: SciTechDaily.com.

Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance to human resources to healthcare. But these networks remain a black box whose inner workings engineers and scientists struggle to understand. Now, a team led by data and computer scientists at the University of California San Diego has given neural networks the equivalent of an X-ray to uncover how they actually learn.

The researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions.
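The formula in question appears to be the average gradient outer product (AGOP), which the team's published work centers on: averaging the outer product of the model's input gradients over the training data yields a matrix whose top eigenvectors pick out the directions, or features, the network has learned to rely on. Below is a minimal sketch in PyTorch; the tiny network and random data are stand-ins for illustration, not the paper's actual setup.

```python
import torch

# Hypothetical toy setup: a small trained MLP standing in for a real network.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
X = torch.randn(256, 10)  # stand-in "training" inputs

def average_gradient_outer_product(model, inputs):
    """AGOP = (1/n) * sum_i grad f(x_i) grad f(x_i)^T.

    Input directions the model's output is most sensitive to -- the
    'features' it has learned -- appear as top eigenvectors of this
    d x d matrix.
    """
    inputs = inputs.detach().requires_grad_(True)
    # Summing the outputs lets one backward pass yield per-example input grads.
    outputs = model(inputs).sum()
    grads = torch.autograd.grad(outputs, inputs)[0]  # shape (n, d)
    return grads.T @ grads / grads.shape[0]

agop = average_gradient_outer_product(net, X)
eigvals, eigvecs = torch.linalg.eigh(agop)
print("top learned feature direction:", eigvecs[:, -1])
```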

A team of researchers at the Leuven-based Neuro-Electronics Research Flanders (NERF) details how two different neuronal populations enable the spinal cord to adapt and recall learned behavior in a way that is completely independent of the brain.

These remarkable findings, published today in Science, shed new light on how spinal circuits might contribute to mastering and automating movement. The insights could prove relevant in the rehabilitation of people with spinal injuries.

The spinal cord modulates and fine-tunes our actions and movements by integrating different sources of sensory information, and it can do so without input from the brain.

Carnegie Mellon University (CMU) researchers have developed H2O – Human2HumanOid – a reinforcement learning-based framework that allows a full-sized humanoid robot to be teleoperated by a human in real time using only an RGB camera. This raises the question: will manual labor soon be performed remotely?

Teleoperation lets a humanoid robot perform tasks that are – at least at this stage – too difficult for it to handle autonomously. But achieving whole-body control of a human-sized humanoid so that it replicates our movements in real time is a challenging problem. That’s where reinforcement learning (RL) comes in.
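At a high level, a pipeline like this chains a human pose estimator to a learned whole-body controller: each RGB frame is turned into an estimate of the operator's pose, that pose is retargeted onto the robot's joints, and an RL policy tracks the targets while keeping the robot balanced. The sketch below is purely schematic; every component is a trivial stub, not the actual H2O implementation.

```python
import numpy as np

# Schematic teleoperation loop. All components are hypothetical stubs.

def pose_estimator(frame):
    return np.zeros(3 * 23)            # stub: 3D "keypoints" from one RGB frame

def retargeter(human_pose):
    return human_pose[:19]             # stub: map human kinematics onto robot joints

def policy(obs, target_joints):
    return target_joints - obs         # stub: an RL policy would balance while tracking

obs = np.zeros(19)                     # stub proprioception (joint angles, IMU, ...)
for step in range(3):                  # camera frames would arrive in real time
    frame = np.zeros((480, 640, 3))    # stub RGB image -- no depth channel needed
    target_joints = retargeter(pose_estimator(frame))
    action = policy(obs, target_joints)
    obs = obs + 0.1 * action           # stub robot state update
print("tracking error:", float(np.abs(target_joints - obs).sum()))
```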

RL is a machine-learning technique that mimics humans’ trial-and-error approach to learning. Using the reward-and-punishment paradigm of RL, a robot learns from the feedback on each action it performs and discovers on its own the best processing path to achieve the desired outcome. Unlike supervised learning, RL doesn’t require humans to label data pairs to direct the algorithm.
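As a concrete illustration of that reward-and-punishment loop, here is a generic tabular Q-learning toy – a standard textbook RL setup, unrelated to the controllers CMU uses. An agent in a short corridor learns by trial and error, with no labeled data, that walking right reaches the rewarded goal state.

```python
import random

# Toy tabular Q-learning on a 1-D corridor: move left/right, reward at the end.
N_STATES, ACTIONS = 6, (-1, +1)          # states 0..5, goal (reward) at state 5
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit the current value estimates.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else -0.01   # reward at goal, small step cost
        # Q-update: nudge the estimate toward reward + discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# After training, the greedy action in every state points toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```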

Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe’s top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI’s robots will be deployed.

The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.

For its part, Magna invested in Sanctuary AI back in 2021 — right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. The company would later dub the system “Optimus.” Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5’7” (a pretty standard height for these machines) and weighs 155 pounds.