
IHMC’s Nadia: A task-ready humanoid robot with a boxing edge

In the exercise, an engineer wearing virtual reality (VR) goggles orchestrates the robot’s actions.



Nadia, a cutting-edge humanoid robot, is engineered for a remarkable power-to-weight ratio and an extensive range of motion, made possible by innovative mechanisms and advanced composite materials.

The robot is named after the renowned gymnast Nadia Comăneci, reflecting the ambitious aim of replicating the human range of motion. Funding for Nadia’s development comes from several sources, including the Office of Naval Research (ONR), the Army Research Laboratory (ARL), NASA Johnson Space Center, and TARDEC. This diverse funding base underscores broad interest in Nadia’s potential applications across military, space exploration, and technological research domains, according to IHMC.

The Fermi Paradox Compendium of Solutions & Terms

In the grand theater of the cosmos, amidst a myriad of distant suns and ancient galaxies, the Fermi Paradox presents a haunting silence, where a cacophony of alien conversations should exist. Where is Everyone? Or are we alone?

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow and RT our future content)
SFIA Discord Server: https://discord.gg/53GAShE

Credits:
The Fermi Paradox Compendium of Solutions & Terms.
Episode 420; November 9, 2023
Written, Produced & Narrated by: Isaac Arthur.
Editor: Donagh Broderick.

Graphics by:
Darth Biomech.
Jeremy Jozwik.
Katie Byrne.
Ken York YD Visual.
Legiontech Studios.
Sergio Botero.
Tactical Blob.
Udo Schroeter.

Music Courtesy of:
Epidemic Sound http://epidemicsound.com/creator.
Markus Junnikkala, “Memory of Earth”
Stellardrone, “Red Giant”, “Ultra Deep Field”
Sergey Cheremisinov, “Labyrinth”, “Forgotten Stars”
Miguel Johnson, “The Explorers”, “Strange New World”
Aerium, “Fifth star of Aldebaran”, “Windmill Forests”, “Deiljocht”
Lombus, “Cosmic Soup”
Taras Harkavyi, “Alpha and…”


Meta Just Achieved Mind Reading with AI: A Breakthrough in Brain-Computer Interface Technology

Meta, the parent company of Facebook, has unveiled a groundbreaking development in brain-computer interface technology: an AI system that can decode visual representations and even “hear” what someone is hearing by studying their brainwaves. These advances in brain-machine interface technology could transform our relationship with artificial intelligence, with potential applications in healthcare, communication, and virtual reality.
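Reported systems of this kind are typically trained by contrastive alignment: a brain-signal encoder learns to place each recorded segment near the embedding of the stimulus the person was perceiving. The PyTorch sketch below is a minimal illustration of that idea only, not Meta’s model; every layer, shape, and name in it is an assumption.

```python
# Minimal sketch of contrastive brain-to-stimulus alignment (illustrative
# assumptions throughout; not Meta's architecture or data).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainEncoder(nn.Module):
    """Maps a window of sensor data (channels x time) to a unit embedding."""
    def __init__(self, channels=64, time_steps=100, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * time_steps, 256),
            nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(brain_emb, audio_emb, temperature=0.07):
    """InfoNCE: each brain segment should match its own stimulus clip,
    not the other clips in the batch."""
    logits = brain_emb @ audio_emb.T / temperature
    targets = torch.arange(len(brain_emb))
    return F.cross_entropy(logits, targets)

# Toy batch: 8 brain windows paired with embeddings of the 8 clips heard.
brain = torch.randn(8, 64, 100)                   # stand-in for MEG/EEG windows
audio = F.normalize(torch.randn(8, 128), dim=-1)  # stand-in for audio features
loss = contrastive_loss(BrainEncoder()(brain), audio)
loss.backward()  # gradients push matching pairs together, mismatches apart
```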

The University of Texas at Austin has developed a new technology that can translate brain activity into written text without surgical implants. This breakthrough uses functional Magnetic Resonance Imaging (fMRI) scan data to reconstruct speech. An AI-based decoder then creates text based on the patterns of neuronal activity that correspond to the intended meaning. This new technology could help people who have lost the ability to speak due to conditions such as stroke or motor neuron disease.

Although fMRI’s time lag makes tracking brain activity in real time challenging, the decoder still achieved impressive accuracy. The University of Texas researchers faced challenges with the inherent “noisiness” of brain signals picked up by sensors, but by employing advanced technology and machine learning they successfully aligned representations of speech and brain activity. The decoder works at the level of ideas and semantics, providing the gist of thoughts rather than an exact word-for-word translation. This study marks a significant advance in non-invasive brain decoding, showcasing the potential for future applications in neuroscience and communication.
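A common route to gist-level decoding like this is to score candidate sentences with an “encoding model” that predicts brain activity from text features, keeping whichever candidate best explains the observed scan. The NumPy sketch below illustrates only that scoring step, on random stand-in data; the weight matrix, embeddings, and sizes are all hypothetical.

```python
# Minimal sketch of encoding-model candidate scoring for semantic decoding
# (random stand-in data; not the UT Austin pipeline).
import numpy as np

rng = np.random.default_rng(0)
n_voxels, emb_dim = 500, 64

# Hypothetical trained encoding model: voxel pattern ~= W @ sentence_embedding.
W = rng.normal(size=(n_voxels, emb_dim))

def predicted_activity(sentence_emb):
    return W @ sentence_emb

def rank_candidates(observed, candidate_embs):
    """Rank candidate sentences by how well their predicted brain
    response correlates with the observed fMRI pattern."""
    scores = [np.corrcoef(predicted_activity(e), observed)[0, 1]
              for e in candidate_embs]
    return np.argsort(scores)[::-1]  # best-explaining candidate first

# Toy check: the observed pattern was generated from candidate #2,
# so candidate 2 should rank first despite the added noise.
candidates = [rng.normal(size=emb_dim) for _ in range(5)]
observed = predicted_activity(candidates[2]) + 0.1 * rng.normal(size=n_voxels)
print(rank_candidates(observed, candidates))
```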

Artificial sensor similar to a human fingerprint that can recognize fine fabric textures

An artificial sensory system that can recognize fine textures—such as twill, corduroy, and wool—with high resolution, similar to a human finger, is reported in a Nature Communications paper. The findings may help improve the subtle tactile sensation abilities of robots and human limb prosthetics and could be applied to virtual reality in the future, the authors suggest.

Humans can gently slide a finger over the surface of an object and identify it by capturing both static pressure and high-frequency vibrations. Previous approaches to creating artificial tactile sensors for physical stimuli, such as pressure, have been limited in their ability to identify real-world objects upon touch, or they rely on multiple sensors. Creating an artificial sensory system with high spatiotemporal resolution and sensitivity has been challenging.

Chuan Fei Guo and colleagues present a flexible slip sensor that mimics the features of a human fingerprint to enable the system to recognize small features on surface textures when touching or sliding the sensor across the surface. The authors integrated the sensor onto a prosthetic human hand and added machine learning to the system.
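In generic terms, this kind of texture recognition is a signal-classification pipeline: sliding contact produces vibrations whose frequency content reflects the fabric’s spatial pattern, and a classifier maps spectral features to texture labels. The sketch below illustrates that generic pipeline on synthetic signals; the sampling rate, features, and classifier are assumptions, not the paper’s method.

```python
# Minimal sketch of vibration-based texture classification
# (synthetic signals and assumed parameters; not the published sensor).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
FS = 2000  # assumed sampling rate (Hz) of the slip sensor

def synth_swipe(texture_freq, n=1024):
    """Stand-in for a recorded swipe: a dominant vibration frequency set by
    the texture's spatial period, plus noise."""
    t = np.arange(n) / FS
    return np.sin(2 * np.pi * texture_freq * t) + 0.3 * rng.normal(size=n)

def spectral_features(signal, n_bins=32):
    """Log-magnitude spectrum pooled into coarse frequency bins."""
    mag = np.abs(np.fft.rfft(signal))
    return np.log1p([b.mean() for b in np.array_split(mag, n_bins)])

# Toy dataset: three "fabrics" with different characteristic frequencies.
freqs = {"twill": 120.0, "corduroy": 60.0, "wool": 200.0}
X, y = [], []
for label, f in freqs.items():
    for _ in range(30):
        X.append(spectral_features(synth_swipe(f * rng.uniform(0.95, 1.05))))
        y.append(label)

clf = SVC().fit(X, y)
print(clf.predict([spectral_features(synth_swipe(60.0))]))  # expect ['corduroy']
```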

Glasses use sonar, AI to interpret upper body poses in 3D

Throughout history, sonar’s distinctive “ping” has been used to map oceans, spot enemy submarines and find sunken ships. Today, a variation of that technology – in miniature form, developed by Cornell researchers – is proving a game-changer in wearable body-sensing technology.

PoseSonic is the latest sonar-equipped wearable from Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) lab. It consists of off-the-shelf eyeglasses outfitted with micro sonar that can track the wearer’s upper body movements in 3D through a combination of inaudible soundwaves and artificial intelligence (AI).

With further development, PoseSonic could enhance augmented reality and virtual reality, and track detailed physical and behavioral data for personal health, the researchers said.
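At a high level, sonar pose tracking emits an inaudible chirp, turns the reflections into an “echo profile” (the cross-correlation of the emitted and received signals), and lets a neural network regress joint coordinates from that profile. The sketch below is a toy illustration of that chain; the network, joint count, and signal shapes are assumptions, not PoseSonic’s published design.

```python
# Minimal sketch of echo-profile pose regression (toy signals and an assumed
# network; not the PoseSonic implementation).
import numpy as np
import torch
import torch.nn as nn

def echo_profile(emitted, received):
    """Cross-correlate the transmitted chirp with the microphone signal;
    peaks correspond to reflections arriving from different distances."""
    return np.correlate(received, emitted, mode="valid")

class PoseNet(nn.Module):
    """Tiny CNN mapping a 1-D echo profile to 3D upper-body joint positions."""
    def __init__(self, n_joints=9):
        super().__init__()
        self.n_joints = n_joints
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),
            nn.Linear(16 * 32, n_joints * 3),  # (x, y, z) per joint
        )

    def forward(self, profiles):
        return self.net(profiles).view(-1, self.n_joints, 3)

# Toy frame: one chirp, one received signal with a delayed, attenuated echo.
chirp = np.sin(np.linspace(0, 60, 128))
received = np.concatenate([np.zeros(40), 0.5 * chirp, np.zeros(215)])
prof = torch.tensor(echo_profile(chirp, received), dtype=torch.float32)
pose = PoseNet()(prof.view(1, 1, -1))
print(pose.shape)  # torch.Size([1, 9, 3]): 9 upper-body joints in 3D
```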

Machine learning gives users ‘superhuman’ ability to open and control tools in virtual reality

Researchers have developed a virtual reality application where a range of 3D modeling tools can be opened and controlled using just the movement of a user’s hand.

The researchers, from the University of Cambridge, used machine learning to develop ‘HotGestures’, analogous to the hotkeys used in many desktop applications.

HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.
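Conceptually, a hotkey-by-gesture system needs two pieces: a recognizer that classifies hand-pose observations (with a rejection threshold so ordinary motion doesn’t trigger anything) and a dispatch table from gesture to tool. The sketch below illustrates that structure with invented gesture templates and tool names; it is not the Cambridge implementation.

```python
# Minimal sketch of gesture-to-tool dispatch (all templates, feature vectors,
# and names invented for illustration; not the HotGestures code).
import numpy as np

# Hypothetical per-gesture templates over a 4-D hand-shape feature vector.
TEMPLATES = {
    "scissor_snip": np.array([0.9, 0.1, 0.1, 0.8]),
    "pen_pinch":    np.array([0.2, 0.9, 0.1, 0.1]),
    "spray_fist":   np.array([0.1, 0.1, 0.9, 0.2]),
}
TOOL_FOR_GESTURE = {"scissor_snip": "cut", "pen_pinch": "draw", "spray_fist": "spray"}

def classify(features, threshold=0.5):
    """Nearest-template recognition with rejection, so everyday hand
    motion far from every template opens nothing."""
    label, dist = min(((name, np.linalg.norm(features - tmpl))
                       for name, tmpl in TEMPLATES.items()), key=lambda p: p[1])
    return label if dist < threshold else None

def on_hand_frame(features):
    gesture = classify(features)
    if gesture is not None:
        print(f"opening tool: {TOOL_FOR_GESTURE[gesture]}")  # no menu needed

on_hand_frame(np.array([0.85, 0.15, 0.12, 0.75]))  # -> opening tool: cut
```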

Rats have an imagination, new research finds

As humans, we live in our thoughts: from pondering what to make for dinner to daydreaming about our last beach vacation. Now, researchers at HHMI’s Janelia Research Campus have found that animals also possess an imagination.

A team from the Lee and Harris labs developed a novel system combining virtual reality and a brain-machine interface to probe a rat’s inner thoughts.

They found that, like humans, animals can think about places and objects that aren’t right in front of them, using their thoughts to imagine walking to a location or moving a remote object to a specific spot.
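The brain-machine interface such a study builds on is essentially a place decoder: hippocampal “place cells” fire most strongly near preferred locations, so the population firing pattern can be inverted into a position estimate. The sketch below shows a toy maximum-likelihood version with simulated place fields; it is illustrative only, not the Janelia system.

```python
# Minimal sketch of decoding position from place-cell activity
# (simulated Gaussian place fields; not the Janelia BMI).
import numpy as np

rng = np.random.default_rng(2)
N_CELLS = 50
centers = rng.uniform(0, 10, size=(N_CELLS, 2))  # preferred (x, y) per cell

def expected_rates(pos, width=1.5):
    """Gaussian place fields: a cell's firing rate falls off with the
    distance between the animal's position and the cell's center."""
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return 10.0 * np.exp(-d2 / (2 * width ** 2))

def decode(rates, grid_step=0.25):
    """Grid search for the location whose expected population activity
    best matches the observed firing rates."""
    grid = np.arange(0, 10, grid_step)
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            err = ((expected_rates(np.array([x, y])) - rates) ** 2).sum()
            if err < best_err:
                best, best_err = (x, y), err
    return best

true_pos = np.array([3.0, 7.0])
observed = rng.poisson(expected_rates(true_pos))  # noisy spike counts
print(decode(observed))  # ~ (3.0, 7.0): activity alone recovers the location
```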
