Psychology studies have demonstrated that by the age of 4–5, young children have developed intricate visual models of the world around them. These internal visual models allow them to outperform advanced computer vision techniques on various object recognition tasks.
Words are important for expressing ourselves. What we don’t say, however, may be even more instrumental in conveying emotions. Humans can often tell how the people around them feel through non-verbal cues embedded in their voices.
Now, researchers in Germany have sought to find out whether technical tools can also accurately predict emotional undertones in fragments of voice recordings. To do so, they compared the accuracy of three machine-learning models in recognizing diverse emotions in audio excerpts. Their results were published in Frontiers in Psychology.
“Here we show that machine learning can be used to recognize emotions from audio clips as short as 1.5 seconds,” said the article’s first author Hannes Diemerling, a researcher at the Center for Lifespan Psychology at the Max Planck Institute for Human Development. “Our models achieved an accuracy similar to humans when categorizing meaningless sentences with emotional coloring spoken by actors.”
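The excerpt does not describe the models' implementation, but the general approach can be illustrated with a minimal sketch: summarize each short clip as spectral features and train a standard classifier on labeled examples. Everything below, including the library choices, file names, and label set, is an assumption for illustration, not the study's actual pipeline.

```python
# Illustrative sketch (not the study's pipeline): classify emotions
# in ~1.5-second audio clips from MFCC summary statistics.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path: str, duration: float = 1.5, sr: int = 16000) -> np.ndarray:
    """Summarize a short clip as the mean and std of 20 MFCC coefficients."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled training clips (placeholder paths and labels).
train_paths = ["joy_01.wav", "anger_01.wav", "sadness_01.wav"]
train_labels = ["joy", "anger", "sadness"]

X = np.stack([clip_features(p) for p in train_paths])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, train_labels)

# Predict the emotional tone of a new 1.5-second excerpt.
print(clf.predict(clip_features("unknown.wav").reshape(1, -1)))
```

MFCC summary statistics are a common low-cost baseline for speech emotion recognition; the study's models may well be richer, but the sketch shows why even 1.5 seconds of audio carries usable signal.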
Whether it’s a powered prosthesis to assist a person who has lost a limb or an independent robot navigating the outside world, we are asking machines to perform increasingly complex, dynamic tasks. But the standard electric motor was designed for steady, ongoing activities like running a compressor or spinning a conveyor belt—even updated designs waste a lot of energy when making more complicated movements.
Researchers at Stanford University have invented a way to augment electric motors to make them much more efficient at performing dynamic movements through a new type of actuator, a device that uses energy to make things move. Their actuator, described in a paper published in Science Robotics, uses springs and clutches to accomplish a variety of tasks with a fraction of the energy usage of a typical electric motor.
“Rather than wasting lots of electricity to just sit there humming away and generating heat, our actuator uses these clutches to achieve the very high levels of efficiency that we see from electric motors in continuous processes, without giving up on controllability and other features that make electric motors attractive,” said Steve Collins, associate professor of mechanical engineering and senior author of the paper.
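The excerpt doesn't give the control details, but the core idea, letting a spring carry the load whenever it can so the motor supplies only the residual, can be sketched as a simple torque-split rule. The function and threshold below are hypothetical illustrations, not the Stanford design.

```python
# Conceptual sketch (assumption, not the published controller): decide
# when a clutch should let a spring supply torque instead of the motor.
def torque_split(desired_torque: float, spring_torque: float,
                 tolerance: float = 0.1) -> dict:
    """Engage the clutch when the spring can cover most of the demand;
    the motor then supplies only the residual, saving electrical energy."""
    if abs(desired_torque - spring_torque) <= tolerance * abs(desired_torque):
        return {"clutch": True, "motor_torque": desired_torque - spring_torque}
    return {"clutch": False, "motor_torque": desired_torque}

# Spring nearly matches demand, so the clutch engages and the motor idles.
print(torque_split(10.0, 9.5))
```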
In a study published in the journal Science Advances, researchers from Peking University have unveiled a miniaturized implantable sensor capable of health monitoring without the need for transcutaneous wires, integrated circuit chips, or bulky readout equipment, thereby reducing infection risks, improving biocompatibility, and enhancing portability. The study is titled “Millimeter-scale magnetic implants paired with a fully integrated wearable device for wireless biophysical and biochemical sensing.”
Robotic exoskeletons designed to help humans with walking or physically demanding work have been the stuff of sci-fi lore for decades. Remember Ellen Ripley in that Power Loader in “Aliens”? Or the crazy mobile platform George McFly wore in 2015 in “Back to the Future Part II” because he threw his back out?
Last Sunday, Liverpool faced Manchester United in the quarter-finals of the FA Cup. In the final minute of extra time, with the score tied at three-all, Liverpool had the crucial opportunity of a corner kick. A goal would surely mean victory, but losing possession could be risky.
A team of roboticists at the California Institute of Technology’s Jet Propulsion Laboratory, working with a colleague from Carnegie Mellon University’s Robotics Institute, has developed a snake-like robot to investigate the terrain on Enceladus, Saturn’s sixth-largest moon.
The best-known byproduct of ultrasound—so named because its frequencies exceed the range of the human ear—is, in fact, not audio but visual: 2D imagery, often of a fetus maturing in the womb. But ultrasound has also found a place in other corners of the medical realm, from assessing blood flow to examining suspicious lumps and diagnosing disease.
Computer scientists at Columbia Engineering have developed a transformative method for detecting AI-generated text. Their findings promise to revolutionize how we authenticate digital content, addressing mounting concerns surrounding large language models (LLMs), digital integrity, misinformation, and trust.
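The excerpt doesn't describe the Columbia team's method itself. For context, a common and much simpler heuristic in this space scores text by its perplexity under a reference language model, since machine-generated text tends to look unusually predictable. The sketch below illustrates only that generic baseline; the model choice and the interpretation threshold are assumptions.

```python
# Generic perplexity-based AI-text heuristic (illustration only;
# not the Columbia method). Assumes Hugging Face transformers + GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Exponentiated mean per-token cross-entropy under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean loss over tokens
    return float(torch.exp(loss))

# Lower perplexity often (though not always) hints at machine-generated text.
print(f"perplexity = {perplexity('The quick brown fox jumps over the lazy dog.'):.1f}")
```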
Nvidia presents Driving Everywhere with Large Language Model Policy Adaptation.
LLaDA is a simple yet powerful tool that enables human drivers and autonomous vehicles alike to drive everywhere by adapting their tasks and motion plans to the traffic rules in new locations.
Paper page: https://huggingface.co/papers/2312.14150
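As a loose illustration of the idea (the paper's actual prompts and interfaces are not shown here), policy adaptation can be framed as asking an LLM to rewrite a planned maneuver so that it complies with local rules. The `query_llm` function below is a hypothetical stand-in for any chat-completion client.

```python
# Sketch in the spirit of LLaDA-style policy adaptation (assumption:
# the real system's prompts and interfaces differ).

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real chat-completion client.
    return "Stop and wait; do not turn right until the signal is green."

def adapt_plan(motion_plan: str, local_rules: str) -> str:
    """Ask the LLM to rewrite a motion plan so it obeys local traffic rules."""
    prompt = (
        "Given the planned maneuver and the local traffic rules, "
        "rewrite the plan so it complies.\n"
        f"Planned maneuver: {motion_plan}\n"
        f"Local rules: {local_rules}\n"
        "Adapted plan:"
    )
    return query_llm(prompt)

print(adapt_plan("Turn right on red after stopping",
                 "Right turns on red are prohibited in this country"))
```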