Destro Robotics
A new study suggests that advanced AI models are pretty good at acting dumber than they are — which might have massive implications as they continue to get smarter.
Published in the journal PLOS ONE, the study by researchers at Berlin's Humboldt University tested a large language model (LLM) against so-called "theory of mind" criteria and found that AI can not only mimic the language-learning stages exhibited in children, but also seems to express something akin to the mental capabilities associated with those stages.
In an interview with PsyPost, Humboldt University research assistant and lead study author Anna Maklová, a psycholinguistics expert, explained how her field of study relates to the fascinating finding.
A UC San Diego team has uncovered a method to decipher neural networks' learning process, using a statistical formula to clarify how features are learned: a breakthrough that promises more understandable and efficient AI systems. (Credit: SciTechDaily.com)
Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance to human resources to healthcare. But these networks remain a black box whose inner workings engineers and scientists struggle to understand. Now, a team led by data and computer scientists at the University of California San Diego has given neural networks the equivalent of an X-ray to uncover how they actually learn.
The researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions.
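The article does not spell out the formula itself. Purely as an illustration, the sketch below computes an average gradient outer product, a statistical object whose top eigenvectors reveal which input directions a trained model actually relies on; the toy model, its sizes, and all names here are assumptions, not the researchers' actual setup or GPT-2.

```python
import numpy as np

# Illustrative sketch only: an Average Gradient Outer Product (AGOP),
#   AGOP = (1/n) * sum_i grad f(x_i) grad f(x_i)^T,
# averaged over inputs x_i. Its dominant eigenvectors point along the
# input directions (features) the model f is sensitive to.

rng = np.random.default_rng(0)
d = 10                                    # input dimension (assumed)

# A toy "trained" model that depends only on the first two inputs.
W = np.zeros((d, 1))
W[0, 0], W[1, 0] = 2.0, -1.0

def f(x):
    return np.tanh(x @ W).item()

def grad_f(x, h=1e-5):
    # Numerical gradient of f at x via central finite differences.
    g = np.zeros(d)
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        g[j] = (f(x + e) - f(x - e)) / (2 * h)
    return g

X = rng.normal(size=(200, d))             # sample inputs
agop = np.mean([np.outer(grad_f(x), grad_f(x)) for x in X], axis=0)

# The AGOP's top eigenvector concentrates on coordinates 0 and 1,
# exactly the features this toy model uses for its predictions.
eigvals, eigvecs = np.linalg.eigh(agop)   # ascending eigenvalues
top = np.abs(eigvecs[:, -1])              # dominant eigenvector
```

Because the toy model ignores coordinates 2 through 9, every per-sample gradient is zero there, so the dominant eigenvector of the averaged outer product has support only on the two used coordinates.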
From Google DeepMind, Stanford, and Georgia Tech.
Best practices and lessons learned on synthetic data for language models.
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain…
The company is aiming to ramp up its artificial intelligence efforts and reduce reliance on outside suppliers like Nvidia.
A team of researchers at the Leuven-based Neuro-Electronics Research Flanders (NERF) details how two different neuronal populations enable the spinal cord to adapt and recall learned behavior in a way that is completely independent of the brain.
These remarkable findings, published today in Science, shed new light on how spinal circuits might contribute to mastering and automating movement. The insights could prove relevant in the rehabilitation of people with spinal injuries.
The spinal cord modulates and fine-tunes our actions and movements by integrating different sources of sensory information, and it can do so without input from the brain.
Nvidia is coming for Intel’s crown. Samsung is losing ground. AI is transforming the space. We break down revenue for semiconductor companies.
Carnegie Mellon University (CMU) researchers have developed H2O – Human2HumanOid – a reinforcement learning-based framework that allows a full-sized humanoid robot to be teleoperated by a human in real time using only an RGB camera. This raises the question: will manual labor soon be performed remotely?
A teleoperated humanoid robot can perform tasks that are – at least at this stage – too difficult for a robot to carry out independently. But achieving whole-body control of human-sized humanoids to replicate our movements in real time is a challenging task. That's where reinforcement learning (RL) comes in.
RL is a machine-learning technique that mimics humans' trial-and-error approach to learning. Using the reward-and-punishment paradigm of RL, a robot learns from the feedback of each action it performs and self-discovers the best processing paths to achieve the desired outcome. Unlike supervised learning, RL doesn't require humans to label data pairs to direct the algorithm.
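The reward-and-punishment loop described above can be sketched with tabular Q-learning in a toy one-dimensional world. This is a minimal illustration of the RL paradigm only, not the H2O framework's actual training setup; the corridor task, constants, and names are all assumptions.

```python
import random

# Minimal tabular Q-learning sketch: an agent in a 5-cell corridor learns,
# by reward and penalty alone, to walk right toward the goal cell.
N_STATES = 5          # positions 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01   # reward at goal, small cost elsewhere
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                      # episodes of trial and error
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])  # TD update
        s = nxt

# The learned greedy policy moves right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)}
```

No labeled state-action pairs are ever provided: the agent discovers the policy purely from the scalar rewards its own actions produce, which is the distinction the paragraph above draws against supervised learning.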
Engineers develop a new hybrid robot that hops, flies, adjusts jump heights, and executes tight turns with high frequency and agility.