
A team of researchers at the Leuven-based Neuro-Electronics Research Flanders (NERF) details how two different neuronal populations enable the spinal cord to adapt and recall learned behavior in a way that is completely independent of the brain.

These remarkable findings, published today in Science, shed new light on how spinal circuits might contribute to mastering and automating movement. The insights could prove relevant in the rehabilitation of people with spinal injuries.

The spinal cord modulates and fine-tunes our actions and movements by integrating different sources of sensory information, and it can do so without input from the brain.

Carnegie Mellon University (CMU) researchers have developed H2O – Human2HumanOid – a reinforcement learning-based framework that allows a full-sized humanoid robot to be teleoperated by a human in real time using only an RGB camera. Which raises the question: will manual labor soon be performed remotely?

A teleoperated humanoid robot can carry out tasks that are – at least at this stage – too complex for a robot to perform independently. But achieving whole-body control of a human-sized humanoid so that it replicates our movements in real time is a challenging task. That’s where reinforcement learning (RL) comes in.

RL is a machine-learning technique that mimics humans’ trial-and-error approach to learning. Under RL’s reward-and-punishment paradigm, a robot learns from the feedback of each action it performs and self-discovers the best strategies to achieve the desired outcome. Unlike supervised learning, RL doesn’t require humans to label data pairs to direct the algorithm.
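
The loop below is a minimal, self-contained illustration of that reward-and-punishment cycle: tabular Q-learning on a toy corridor world. It has nothing to do with CMU’s H2O codebase; the environment, rewards, and hyperparameters are all invented for the example.

```python
import random

# Toy 1-D corridor: the agent starts at position 0, the goal is position 4.
# Purely illustrative -- not the H2O framework, just the generic RL loop.
N_STATES = 5
ACTIONS = [-1, +1]  # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Reward for reaching the goal, a small punishment for every other step.
        r = 1.0 if s_next == N_STATES - 1 else -0.01
        # Update the value estimate from the feedback of this one action.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy walks straight toward the goal.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```

No labeled data pairs appear anywhere: the only teaching signal is the scalar reward the environment hands back after each action.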

Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe’s top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI’s robots will be deployed.

The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.

For its part, Magna invested in Sanctuary AI back in 2021 — right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. The company would later dub the system “Optimus.” Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5’7” (a pretty standard height for these machines) and weighs 155 pounds.

Language models rarely see fruitful mistakes during training, which hinders their ability to anticipate consequences beyond the next token. Yet LMs need stronger capacities for complex decision-making, planning, and reasoning. Transformer-based models struggle with planning because errors snowball and lookahead is hard. Some efforts have integrated symbolic search algorithms to address these issues, but they merely supplement language models at inference time. Enabling language models to search during training, by contrast, could facilitate self-improvement, fostering more adaptable strategies for challenges like error compounding and lookahead tasks.

Researchers from Stanford University, MIT, and Harvey Mudd have devised a method to teach language models how to search and backtrack by representing the search process as a serialized string, a Stream of Search (SoS). They propose a unified language for search, demonstrated through the game of Countdown. Pretraining a transformer-based language model on streams of search increased accuracy by 25%, while further finetuning with policy-improvement methods led to solving 36% of previously unsolved problems. This shows that language models can learn to solve problems via search, self-improve, and discover new strategies autonomously.
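
The core trick is to flatten an entire search, wrong turns and backtracking included, into one string that a model can be pretrained on next-token style. Below is a rough sketch of that idea for Countdown (combine numbers with arithmetic to hit a target); the serialization format here is improvised for illustration, not the authors’ exact one.

```python
from itertools import combinations

# Integer-only Countdown operations; division must be exact.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a // b if b and a % b == 0 else None}

def dfs(nums, target, trace):
    """Depth-first Countdown search that logs every step it takes,
    dead ends and backtracking included, into `trace`."""
    if target in nums:
        trace.append(f"found {target}")
        return True
    for i, j in combinations(range(len(nums)), 2):
        for op, fn in OPS.items():
            v = fn(nums[i], nums[j])
            if v is None or v < 0:
                continue
            rest = [n for k, n in enumerate(nums) if k not in (i, j)] + [v]
            trace.append(f"try {nums[i]}{op}{nums[j]}={v}")
            if dfs(rest, target, trace):
                return True
            trace.append("backtrack")  # the mistake itself becomes training text
    return False

trace = []
dfs([4, 6, 5], 26, trace)
# The whole search -- failures included -- is one training string.
print(" ; ".join(trace))
```

Training on such strings exposes the model to exactly the “fruitful mistakes” ordinary corpora lack: it sees states explored, abandoned, and revisited, rather than only polished final solutions.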

Recent studies integrate language models into search and planning systems, employing them to generate and assess potential actions or states, while symbolic search algorithms like BFS or DFS supply the exploration strategy. In these systems, however, the LM serves only at inference time, and its underlying reasoning ability does not improve. Alternatively, in-context demonstrations can illustrate search procedures in language, enabling the LM to conduct a tree search accordingly, but such methods are limited to the demonstrated procedures. Process supervision trains an external verifier model to provide detailed feedback for LM training; it outperforms outcome supervision but requires extensive labeled data.
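
The division of labor in such systems looks roughly like this: a symbolic loop owns the search, and the model only proposes and scores candidates. In the sketch below, propose() and score() are toy stand-ins for what would be LM calls in a real pipeline; the string-building “task” is invented purely so the code runs.

```python
import heapq

# Hypothetical stand-ins for language-model calls: in a real system,
# propose() would sample candidate next actions from the LM, and
# score() would ask the LM how promising a state looks.
def propose(state):
    return [state + s for s in ("a", "b")]   # toy action generator

def score(state):
    return -abs(len(state) - 4)              # toy value estimate

def is_goal(state):
    return state == "abab"

def best_first_search(start, budget=50):
    """Symbolic best-first search loop; the model never controls the search,
    it only generates and evaluates states."""
    frontier = [(-score(start), start)]
    seen = {start}
    for _ in range(budget):
        if not frontier:
            break
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in propose(state):            # LM generates actions
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))  # LM evaluates
    return None

print(best_first_search(""))  # finds "abab" within the budget
```

Note where the learning happens, and where it doesn’t: the heap, the visited set, and the expansion order all live outside the model, so nothing about the search procedure ever feeds back into the LM’s weights. That is precisely the gap Stream of Search aims to close.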

A multidisciplinary team is teaching dog-like robots to navigate the moon’s craters and other challenging planetary surfaces.

As part of the research funded by NASA, researchers from various universities and NASA Johnson Space Center tested a quadruped named Spirit at Palmer Glacier on Oregon’s Mount Hood.

During five days of testing in the summer of 2023, Spirit traversed various terrains, ambling over, across, and around shifting earth, mushy snow, and stones on its spindly metal legs.

In a move that directly challenges Nvidia in the lucrative AI training and inference markets, Intel announced its long-anticipated new Intel Gaudi 3 AI accelerator at its Intel Vision event.

The new accelerator offers significant improvements over the previous-generation Gaudi 2 processor, promising to bring new competitiveness to training and inference for LLMs and multimodal models.

Gaudi 3 dramatically increases AI compute capabilities, delivering substantial improvements over Gaudi 2 and competitors, particularly in processing BF16 data types, which are crucial for AI workloads.
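
For a feel of what BF16 trades away, here is a small self-contained sketch (unrelated to Gaudi 3 itself): bfloat16 keeps float32’s 8 exponent bits but only 7 mantissa bits, so it preserves dynamic range, which keeps gradients from overflowing, while giving up precision that neural networks largely tolerate.

```python
import struct

def to_bf16(x: float) -> float:
    """Convert a float32 to its nearest bfloat16 value (round-to-nearest-even)
    and return it as a regular float, to show what survives the conversion."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round to nearest even on the 16 mantissa bits being dropped.
    bits += 0x7FFF + ((bits >> 16) & 1)
    return struct.unpack("<f", struct.pack("<I", (bits >> 16) << 16))[0]

print(to_bf16(3.14159265))  # -> 3.140625  : precision is lost
print(to_bf16(1e38))        # -> ~1e38     : dynamic range is preserved
```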