
An evolving robotics encyclopedia characterizes robots based on their performance

Over the past decades, roboticists have introduced a wide range of systems with distinct body structures and varying capabilities. As the number of robots continues to grow, an easy way to learn about these many systems, their unique characteristics, their differences, and their performance on specific tasks could prove highly valuable.

Researchers at the Technical University of Munich (TUM) recently created the “Tree of Robots,” a new encyclopedia that could make learning about existing robots and comparing them significantly easier. Their robot encyclopedia, introduced in a paper published in Nature Machine Intelligence, categorizes robots based on their fitness to perform various tasks.

“The aspiration for robots that can understand their environment as we humans do, and execute tasks independently, has existed for ages,” Robin Jeanne Kirschner, first author of the paper, told Tech Xplore.

China’s Baidu releases new AI model to compete with DeepSeek

Chinese internet search giant Baidu released a new artificial intelligence reasoning model Sunday and made its AI chatbot services free to consumers as ferocious competition grips the sector.

Technology companies in China have been scrambling to release improved AI platforms since start-up DeepSeek shocked its rivals with its highly cost-efficient model in January.

In a post on WeChat, Baidu announced the launch of its latest X1 reasoning model, which the company claims performs similarly to DeepSeek’s at a lower cost, as well as a new foundation model, Ernie 4.5.

Open-source AI matches top proprietary model in solving tough medical cases

Artificial intelligence can transform medicine in a myriad of ways, including its promise to act as a trusted diagnostic aide to busy clinicians.

Over the past two years, proprietary AI models, also known as closed-source models, have excelled at solving hard-to-crack medical cases that require complex clinical reasoning. Notably, these closed-source AI models have outperformed open-source ones, so-called because their source code is publicly available and can be tweaked and modified by anyone.

Has open-source AI caught up?

This Brain Organoid Shockingly Outperforms Our AI!

In this video, Dr. Ardavan (Ahmad) Borzou discusses an emerging technology for building bio-computers for AI tasks: Brainoware, which is made of brain organoids interfaced with electrode arrays.

Need help for your data science or math modeling project?
https://compu-flair.com/solution/

🚀 Join the CompuFlair Community! 🚀
📈 Sign up on our website to access exclusive Data Science Roadmap pages — a step-by-step guide to mastering the essential skills for a successful career.
💪As a member, you’ll receive emails on expert-engineered ChatGPT prompts to boost your data science tasks, be notified of our private problem-solving sessions, and get early access to news and updates.
👉 https://compu-flair.com/user/register.

Comprehensive Python Checklist (machine learning and more advanced libraries will be covered on a different page):
https://compu-flair.com/blogs/program…

00:00 — Introduction
02:16 — Von Neumann Bottleneck
03:54 — What is a brain organoid?
05:09 — Brainoware: reservoir computing for AI
06:29 — Computing properties of Brainoware: Nonlinearity & Short-Memory
09:27 — Speech recognition by Brainoware
12:25 — Predicting chaotic motion by Brainoware
13:39 — Summary of Brainoware research
14:35 — Can brain organoids surpass the human brain?
15:51 — Will humans evolve to a body-less stage in their evolution?
16:30 — What is the mathematical model of Brainoware? (a minimal sketch follows below)
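The last chapter asks what the mathematical model of Brainoware is. The chapter list frames Brainoware as reservoir computing: the organoid acts as a fixed, nonlinear dynamical system with short-term memory, and only a simple readout trained on its recorded activity does the learning. Below is a minimal echo-state-network sketch of that idea in Python. It is an illustrative stand-in under stated assumptions, not the actual Brainoware model: the random W_in and W matrices play the role of the organoid's (unknown) dynamics, and only the linear readout W_out is trained.

import numpy as np

# Minimal echo-state network (reservoir computing) sketch.
# The fixed random reservoir stands in for the organoid's dynamics
# (nonlinear, with fading short-term memory); only the readout is trained.

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input -> reservoir (fixed)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # recurrent weights (fixed)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1 for fading memory

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # nonlinear state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(0, 60, 0.1)
u = np.sin(t) + 0.05 * rng.standard_normal(t.size)
X = run_reservoir(u[:-1])
y = u[1:]
X, y = X[50:], y[50:]                          # discard an initial washout period

# Train only the linear readout with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))

In the Brainoware experiments the inputs are delivered as electrical stimulation patterns and the reservoir states are read out from the organoid's evoked activity through the electrode array; the sketch only mirrors the train-the-readout-only structure.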

How to Remote Control a Human Being | Misha Sra | TEDxBeaconStreetSalon

For over a century, galvanic vestibular stimulation (GVS) has been used as a way to stimulate the inner ear nerves by passing a small amount of current.

We use GVS in a two player escape the room style VR game set in a dark virtual world. The VR player is remote controlled like a robot by a non-VR player with GVS to alter the VR player’s walking trajectory. We also use GVS to induce the physical sensations of virtual motion and mitigate motion sickness in VR.
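As a rough illustration of the remote-control idea, the sketch below maps a non-VR player's steering command to a signed GVS current: current of one polarity across the electrodes biases the walker's sense of balance toward one side, the opposite polarity toward the other. The function name, the gain, and the ±1.5 mA clamp are illustrative assumptions, not values from the talk.

# Hypothetical sketch: turn a remote player's steering command into a
# galvanic vestibular stimulation (GVS) current. The sign of the current
# selects which side the walking VR player drifts toward; the magnitude
# sets how strongly. All constants are assumptions for illustration.

MAX_CURRENT_MA = 1.5   # assumed safety clamp
GAIN_MA = 1.0          # assumed steering gain

def steering_to_gvs_current(steer: float) -> float:
    """steer in [-1.0, 1.0]: -1 = full left, +1 = full right."""
    steer = max(-1.0, min(1.0, steer))                      # bound the command
    current = GAIN_MA * steer
    return max(-MAX_CURRENT_MA, min(MAX_CURRENT_MA, current))

if __name__ == "__main__":
    for cmd in (-1.0, -0.3, 0.0, 0.6, 2.0):
        print(f"steer {cmd:+.1f} -> {steering_to_gvs_current(cmd):+.2f} mA")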

Brain hacking has been a futurist fascination for decades. Turns out, we may be able to make it a reality as research explores the impact of GVS on everything from tactile sensation to memory.

Misha graduated in June 2018 from the MIT Media Lab where she worked in the Fluid Interfaces group with Prof Pattie Maes. Misha works in the area of human-computer interaction (HCI), specifically related to virtual, augmented and mixed reality. The goal of her work is to create systems that use the entire body for input and output and automatically adapt to each user’s unique state and context. Misha calls her concept perceptual engineering, i.e., immersive systems that alter the user’s perception (or more specifically the input signals to their perception) and influence or manipulate it in subtle ways. For example, they modify a user’s sense of balance or orientation, manipulate their visual attention and more, all without the user’s explicit awareness, and in order to assist or guide their interactive experience in an effortless way.

The systems Misha builds use the entire body for input and output, i.e., they can use movement, like walking, or a physiological signal, like breathing, as input, and can output signals that actuate the user’s vestibular system with electrical pulses, causing the individual to move or turn involuntarily. HCI up to now has relied upon deliberate, intentional usage, both for input (e.g., touch, voice, typing) and for output (interpreting what the system tells you, shows you, etc.). In contrast, Misha develops techniques and builds systems that do not require this deliberate, intentional user interface but are able to use the body as the interface for more implicit and natural interactions.

Misha’s perceptual engineering approach has been shown to increase the user’s sense of presence in VR/MR, provide novel ways to communicate between the user and the digital system using proprioception and other sensory modalities, and serve as a platform to question the boundaries of our sense of agency and trust.

Singularity Paradox | What If AI Becomes Too Powerful?

What happens when AI becomes infinitely smarter than us—constantly upgrading itself at a speed beyond human comprehension? This is the Singularity, a moment where AI surpasses all limits, leaving humanity at a crossroads.
Elon Musk predicts superintelligent AI by 2029, while Ray Kurzweil envisions the Singularity by 2045. But if AI reaches this point, will it be our greatest breakthrough or our greatest threat?
The answer might change everything we know about the future.

Chapters:

00:00 — 01:15 Intro
01:15 — 03:41 What Is Singularity Paradox?
03:41 — 06:19 How Will Singularity Happen?
06:19 — 09:05 What Will Singularity Look Like?
09:05 — 11:50 How Close Are We?
11:50 — 14:13 Challenges And Criticism

#AI #Singularity #ArtificialIntelligence #ElonMusk #RayKurzweil #FutureTech

The First AI that Runs on Human Brain Cells is Finally Here!

The future of AI is here—and it’s running on human brain cells! In a groundbreaking development, scientists have created the first AI system powered by biological neurons, blurring the line between technology and biology. But what does this mean for the future of artificial intelligence, and how does it work?

This revolutionary AI, known as “Brainoware,” uses lab-grown human brain cells to perform complex tasks like speech recognition and decision-making. By combining the adaptability of biological neurons with the precision of AI algorithms, researchers have unlocked a new frontier in computing. But with this innovation comes ethical questions and concerns about the implications of merging human biology with machines.

In this video, we’ll explore how Brainoware works, its potential applications, and the challenges it faces. Could this be the key to creating truly intelligent machines? Or does it raise red flags about the ethical boundaries of AI research?

What is Brainoware, and how does it work? What are the benefits and risks of AI powered by human brain cells? How will this technology shape the future of AI? This video answers all these questions and more. Don’t miss the full story—watch until the end!

#ai #artificialintelligence #ainews
