
Muscle tissue meets mechanics in biohybrid hand breakthrough

Combining lab-grown muscle tissue with a series of flexible mechanical joints has led to the development of an artificial hand that can grip and make gestures. The breakthrough shows the way forward for a new kind of robotics with a range of potential applications.

While we’ve seen plenty of soft robots at New Atlas and a truly inspiring range of mechanical prosthetics, we’ve seen few inventions that quite literally combine living tissue with machines. That’s likely because the world of biohybrid science is still in its very early stages. Sure, there was an artificial fish powered by human heart cells and a robot that used a locust’s ear to hear, but in terms of practical uses of the technology, the field has remained somewhat empty.

Now though, researchers at the University of Tokyo and Waseda University in Japan have achieved a breakthrough that demonstrates the real promise of the technology.

Deep learning provides new view on 300 million years of brain evolution

In a new study published in Science, a Belgian research team explores how genetic switches controlling gene activity define brain cell types across species. They trained deep learning models on human, mouse, and chicken brain data and found that while some cell types are highly conserved between birds and mammals after millions of years of evolution, others have evolved differently.

The findings not only shed new light on evolution; they also provide powerful tools for studying how gene regulation shapes different cell types, across species or different disease states.

Our brain, and by extension our entire body, is made up of many different types of cells. While they share the same DNA, all these cell types have their own shape and function. What makes each cell type different is a complex puzzle that researchers have been trying to put together for decades. A key piece of that puzzle is short DNA sequences that act like switches, controlling which genes are turned on or off.
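To illustrate the general idea (not the study’s actual model), a deep learning model of this kind takes a regulatory DNA sequence as input and learns to predict which cell types that sequence is active in. The sketch below assumes a toy convolutional classifier in PyTorch; the sequence length, layer sizes, and number of cell types are arbitrary placeholders.

```python
# Minimal sketch of a sequence-to-cell-type model: one-hot DNA in,
# per-cell-type activity scores out. Illustrative only; the study's
# actual architecture and labels will differ.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """One-hot encode a DNA sequence into a (4, length) tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).T.float()

class EnhancerModel(nn.Module):
    """Scores a regulatory sequence for activity in each cell type."""
    def __init__(self, n_cell_types: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=11, padding=5),  # scan for short motifs
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                      # keep strongest motif hit
            nn.Flatten(),
            nn.Linear(64, n_cell_types),                  # one score per cell type
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # raw logits; apply sigmoid for probabilities

# Example: score a toy 500 bp sequence for 10 hypothetical cell types.
seq = "ACGT" * 125
model = EnhancerModel()
logits = model(one_hot(seq).unsqueeze(0))   # shape: (1, 10)
print(torch.sigmoid(logits))
```

Trained on matched data from several species, a model along these lines can then be probed to see which sequence features it associates with each cell type, which is what makes cross-species comparisons possible.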

Deep Learning

As the age of technology continues to explode, it is essential that we do not gloss over the amount of learning and skill it takes to address the ever-increasing complexity of technology, society and business. This moment affords us a unique opportunity to design our learning levels and to develop our professionals. I thought I would take that opportunity to show some of the skills necessary in architecture and how important they are to creating the next generation of leaders.

As one person said to me just yesterday, “The current business environment does not allow the application of such deep learning and reflection in architecture. We have to get in and do what we can fast.” I hear similar quotes regularly. And that is ok; there are times when we have to move quickly. But there are many more times when it takes a deeply experienced professional to be able to move quickly!

What does it mean to learn a skill? It means to have repeated success at that competency, over and over with the guidance of someone even more experienced. It means understanding theory, practice, and what can go wrong!

EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents

Abstract: Leveraging Multi-modal Large Language Models (MLLMs) to create embodied agents offers a promising avenue for tackling real-world tasks. While language-centric embodied agents have garnered substantial attention, MLLM-based embodied agents remain underexplored due to the lack of comprehensive evaluation frameworks. To bridge this gap, we introduce EmbodiedBench, an extensive benchmark designed to evaluate vision-driven embodied agents. EmbodiedBench features: a diverse set of 1,128 testing tasks across four environments, ranging from high-level semantic tasks (e.g., household) to low-level tasks involving atomic actions (e.g., navigation and manipulation); and six meticulously curated subsets evaluating essential agent capabilities like commonsense reasoning, complex instruction understanding, spatial awareness, visual perception, and long-term planning. Through extensive experiments, we evaluated 13 leading proprietary and open-source MLLMs within EmbodiedBench. Our findings reveal that MLLMs excel at high-level tasks but struggle with low-level manipulation, with the best model, GPT-4o, scoring only 28.9% on average. EmbodiedBench provides a multifaceted standardized evaluation platform that not only highlights existing challenges but also offers valuable insights to advance MLLM-based embodied agents. Our code is available at this https URL.
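To make the evaluation setup concrete, the sketch below shows how a benchmark of this kind might loop an agent through a task: the agent receives an image observation and an instruction, an MLLM proposes a discrete action, and the environment reports success. The DummyEnv and query_mllm stand-ins are illustrative assumptions, not EmbodiedBench’s actual API.

```python
# Hedged sketch of a vision-driven agent evaluation loop.
# DummyEnv and query_mllm are placeholders, not EmbodiedBench's interface.
import random
from typing import List, Tuple

class DummyEnv:
    """Toy environment: the task counts as solved when the agent picks 'pick_up'."""
    ACTIONS = ["move_forward", "turn_left", "turn_right", "pick_up"]

    def reset(self, task: str) -> bytes:
        self.done = False
        self.success = False
        return b"<image bytes>"          # stand-in for a rendered observation

    def step(self, action: str) -> Tuple[bytes, bool, bool]:
        if action == "pick_up":
            self.done, self.success = True, True
        return b"<image bytes>", self.done, self.success

def query_mllm(image: bytes, instruction: str, actions: List[str]) -> str:
    """Stand-in for an MLLM call; a real agent would prompt a model here."""
    return random.choice(actions)

def evaluate(env: DummyEnv, tasks: List[str], max_steps: int = 10) -> float:
    """Return the fraction of tasks completed within the step budget."""
    successes = 0
    for task in tasks:
        image = env.reset(task)
        for _ in range(max_steps):
            action = query_mllm(image, task, env.ACTIONS)
            image, done, success = env.step(action)
            if done:
                break
        successes += int(success)
    return successes / len(tasks)

print(evaluate(DummyEnv(), ["pick up the mug"] * 20))
```

In a real harness, query_mllm would prompt a model such as GPT-4o with the current image and the set of admissible actions, and success rates would be averaged per environment and per capability subset.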


If we want artificial “superintelligence,” it may need to feel pain

“It might be that to get superhuman intelligence, you do need some level of sentience. We can’t rule that out either; it’s entirely possible. Some people argue that that kind of real intelligence requires sentience and that sentience requires embodiment. Now, there is a view in philosophy, called computational functionalism, that [argues] sentience, sapience, and selfhood could just be the computations they perform rather than the body they’re situated in. And if that view is correct, then it’s entirely possible that by recreating the computations the brain performs in AI systems, we also thereby recreate the sentience as well.”

Birch is saying three things here. First, it’s reasonable to suggest that “superintelligence” requires sentience. Second, we could potentially recreate sentience in AI with certain computations. Therefore, if we want AI to reach “superintelligence,” we would need it to be sentient. We would need AI to feel things. ChatGPT needs to know pain. Gemini needs to experience euphoria.

The fact that underlies Birch’s book and our conversation is that intelligence is not some deus ex machina dropped from the sky. It is not some curious alien artifact uncovered in a long-lost tomb. It’s nested within an unfathomably long evolutionary chain. It’s the latest word in a long sentence. But the question Birch raises is: Where does AI fit in the book of evolved intelligence?