Jul 1, 2024
Scientists Create Robot Controlled by Blob of Human Brain Cells
Posted by Roman Kam in category: robotics/AI
Scientists hooked up brain tissue to a neural interface, allowing it to pass on instructions to a humanoid robot body.
WASHINGTON — Turion Space, an Irvine, California-based startup, has secured a $1.9 million contract from SpaceWERX, the U.S. Space Force’s technology arm, to develop an autonomous spacecraft docking and maneuvering system. The contract aims to advance technologies for engaging uncooperative space objects and facilitating the deorbit of inactive satellites.
Ryan Westerdahl, Turion’s co-founder and CEO, said in an interview that the company is focusing on in-space mobility and non-Earth imaging. Turion launched its first satellite, Droid.001, a 32-kilogram spacecraft designed for space situational awareness, in June 2023. Data from this satellite is being integrated into the Space Force’s Unified Data Library.
Westerdahl revealed plans for a demonstration as early as 2026, featuring a Droid mothership hosting “micro-Droid” satellites equipped with the capturing device being developed under the SpaceWERX contract. The micro-Droid, partly funded by NASA, will use grapplers to capture debris objects.
Thermodynamic computing has the potential to revolutionize AI and machine learning by harnessing thermal fluctuations for faster, more efficient, and lower-power computing systems.

Questions to inspire discussion: What is the future revolution in computing? Harnessing thermal fluctuations for physics-based computing systems.
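To make the idea concrete, here is a minimal software sketch of the kind of primitive thermodynamic hardware is meant to provide: an overdamped Langevin (Ornstein-Uhlenbeck) process whose noise term plays the role of thermal fluctuations, settling to a Gaussian with covariance equal to the inverse of a matrix A, so simply letting the system equilibrate doubles as a linear-algebra computation. This is only an in-software illustration, and the matrix A, temperature, step size, and step count are assumed toy values, not anything from the source.

```python
# Illustrative only: a software simulation of the core idea behind thermodynamic
# computing -- letting noise (thermal fluctuations) do useful work. An overdamped
# Langevin process relaxes to a Gaussian whose covariance is A^-1, so
# "equilibrating" serves as a linear-algebra primitive (here: matrix inversion).
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # positive-definite "potential" matrix (toy value)
dt, n_steps, n_burn = 1e-3, 200_000, 10_000

x = np.zeros(2)
samples = []
for t in range(n_steps):
    noise = rng.normal(size=2)
    # dx = -A x dt + sqrt(2 dt) * noise   (unit temperature)
    x = x - A @ x * dt + np.sqrt(2 * dt) * noise
    if t >= n_burn:
        samples.append(x.copy())

emp_cov = np.cov(np.array(samples).T)
print("empirical covariance:\n", emp_cov)
print("A^-1 (stationary covariance):\n", np.linalg.inv(A))
```

The empirical covariance of the settled trajectory approximates A^-1, which is the sense in which "just letting the physics run" performs a computation that a digital chip would otherwise have to do with explicit arithmetic.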
With the rapid advancement of artificial intelligence, unmanned systems such as autonomous vehicles and embodied intelligence are increasingly being deployed in real-world settings, driving a new wave of technological and industrial transformation. Visual perception, a core means of information acquisition, plays a crucial role in these intelligent systems. However, achieving efficient, precise, and robust visual perception in dynamic, diverse, and unpredictable environments remains an open challenge.
In open-world scenarios, intelligent systems must not only process vast amounts of data but also handle various extreme events, such as sudden dangers, drastic light changes at tunnel entrances, and strong flash interference at night in driving scenarios.
Traditional visual sensing chips, constrained by the “power wall” and “bandwidth wall,” often face issues of distortion, failure, or high latency when dealing with these scenarios, severely impacting the stability and safety of the system.
Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, Dong Yu (Tencent AI Lab), June 2024. https://huggingface.co/papers/2406.
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a…
Humans learn differently from machines.
This paper introduces "prospective configuration," a new principle for learning in neural networks that differs from backpropagation, learns more efficiently, and is more consistent with experimental data on neural activity and behavior.
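As a rough intuition for how this differs from backpropagation, here is a minimal sketch in the spirit of the idea: with the target clamped, the network first relaxes its hidden activity toward the configuration it should have after learning (the prospective phase), and only then makes local, Hebbian-like weight updates to consolidate that configuration. This is a predictive-coding-style toy, not the paper's implementation; the two-layer architecture, energy function, and hyperparameters are all assumptions chosen for readability.

```python
# Toy sketch of "infer activity first, then update weights"
# (prospective-configuration-flavoured learning). Not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (4, 3))   # input -> hidden
W2 = rng.normal(0.0, 0.1, (3, 2))   # hidden -> output

def train_step(x, y, W1, W2, relax_steps=50, lr_act=0.1, lr_w=0.05):
    h = np.tanh(x @ W1)                      # feedforward guess for hidden activity
    # Phase 1 (prospective): clamp the output to the target y and relax the
    # hidden activity h to reduce the prediction-error energy
    #   E = 0.5*||y - tanh(h W2)||^2 + 0.5*||h - tanh(x W1)||^2
    for _ in range(relax_steps):
        e_out = y - np.tanh(h @ W2)
        e_hid = h - np.tanh(x @ W1)
        dh = ((1 - np.tanh(h @ W2) ** 2) * e_out) @ W2.T - e_hid
        h = h + lr_act * dh
    # Phase 2 (consolidation): local weight updates that pull the feedforward
    # pass toward the relaxed activities, instead of backpropagating an error.
    e_hid = h - np.tanh(x @ W1)
    e_out = y - np.tanh(h @ W2)
    W1 = W1 + lr_w * np.outer(x, (1 - np.tanh(x @ W1) ** 2) * e_hid)
    W2 = W2 + lr_w * np.outer(h, (1 - np.tanh(h @ W2) ** 2) * e_out)
    return W1, W2

x, y = np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.8, -0.3])
for _ in range(300):
    W1, W2 = train_step(x, y, W1, W2)
print(np.tanh(np.tanh(x @ W1) @ W2))   # output should approach the target y
```

The key contrast with backpropagation is the ordering: activity is settled toward its prospective (post-learning) configuration first, and the weight changes then follow the settled activity using only locally available signals.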
But deep learning has a massive drawback: The algorithms can’t justify their answers. Often called the “black box” problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms—even if they have high diagnostic accuracy—can’t provide that information.
To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In a study in Nature Computational Science, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.
The resulting AI acts a bit like a child. It condenses different types of information into “hubs.” Each hub is then transcribed into coding guidelines for humans to read—CliffsNotes for programmers that explain the algorithm’s conclusions about patterns it found in the data in plain English. It can also generate fully executable programming code to try out.
Automated algorithm discovery has been difficult for artificial intelligence given the immense search space of possible functions. Here explainable neural networks are used to discover algorithms that outperform those designed by humans.
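As a loose illustration of the "transcribe the model into something a human can read and run" idea (not the study's actual pipeline), the toy below fits a tiny logistic "hub" on synthetic data, prints its decision rule in plain English, and emits it as an executable Python function. The feature names, data, and model are all invented for the demo.

```python
# Toy illustration: turn a small learned model into a human-readable rule
# plus runnable code. Data, feature names, and thresholds are made up.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "patients": two features, label driven mostly by feature 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * 1.5 + X[:, 1] * 0.2 + rng.normal(0, 0.3, 200) > 0).astype(float)

# Train a simple logistic-regression "hub" with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad_w, grad_b = X.T @ (p - y) / len(y), np.mean(p - y)
    w, b = w - 0.1 * grad_w, b - 0.1 * grad_b

# "Transcribe" the hub: a plain-English note plus code a human can audit and run.
names = ["biomarker_a", "biomarker_b"]
terms = " + ".join(f"{wi:+.2f}*{n}" for wi, n in zip(w, names))
print(f"Rule: predict positive when {terms} {b:+.2f} > 0")
code = f"def predict(biomarker_a, biomarker_b):\n    return ({terms} {b:+.2f}) > 0\n"
print(code)
exec(code)                      # the emitted code is itself executable
print(predict(1.0, 0.0))        # example call on the generated function
```

The point of the sketch is only the shape of the workflow: a compact learned component is rendered both as a sentence a clinician could question and as code a programmer could test, rather than remaining an opaque set of weights.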
Scientists are experimenting to figure out how best to attach ‘living skin’ to robotic faces, and make them smile. The results are, predictably, terrifying.