
Artificial intelligence can transform medicine in myriad ways, including by acting as a trusted diagnostic aide to busy clinicians.

Over the past two years, proprietary AI models, also known as closed-source models, have excelled at solving hard-to-crack medical cases that require complex clinical reasoning. Notably, these closed-source AI models have outperformed open-source ones, so called because their source code is publicly available and can be tweaked and modified by anyone.

Has open-source AI caught up?

In this video, Dr. Ardavan (Ahmad) Borzou discusses a rising technology for building bio-computers for AI tasks: Brainoware, which is made of brain organoids interfaced with electrode arrays.

Need help with your data science or math modeling project?
https://compu-flair.com/solution/

🚀 Join the CompuFlair Community! 🚀
📈 Sign up on our website to access exclusive Data Science Roadmap pages — a step-by-step guide to mastering the essential skills for a successful career.
💪 As a member, you’ll receive emails on expert-engineered ChatGPT prompts to boost your data science tasks, be notified of our private problem-solving sessions, and get early access to news and updates.
👉 https://compu-flair.com/user/register

Comprehensive Python Checklist (machine learning and more advanced libraries will be covered on a different page):
https://compu-flair.com/blogs/program

00:00 — Introduction
02:16 — Von Neumann Bottleneck
03:54 — What is a brain organoid?
05:09 — Brainoware: reservoir computing for AI
06:29 — Computing properties of Brainoware: Nonlinearity & Short-Term Memory
09:27 — Speech recognition by Brainoware
12:25 — Predicting chaotic motion by Brainoware
13:39 — Summary of Brainoware research
14:35 — Can brain organoids surpass the human brain?
15:51 — Will humans evolve to a body-less stage in their evolution?
16:30 — What is the mathematical model of Brainoware?
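
The chapter list above frames Brainoware as a reservoir computer: the organoid serves as a fixed, nonlinear dynamical system with short-term memory, and only a simple readout is trained on its activity. As a generic, hypothetical illustration of that idea (a standard echo state network in Python, not Brainoware's actual model or data), the sketch below drives a random recurrent reservoir with a signal and fits only the linear readout:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (a rough stand-in for the organoid's recurrent dynamics).
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # nonlinearity + short-term memory
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from its history.
t = np.linspace(0, 8 * np.pi, 800)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Only this linear readout is trained (ridge regression); the reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

In Brainoware, as discussed in the video, the electrode array plays both roles sketched here in software: it delivers inputs as stimulation patterns and records the organoid's evoked activity, which a trained readout then decodes.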

For over a century, galvanic vestibular stimulation (GVS) has been used to stimulate the inner-ear nerves by passing a small electrical current.

We use GVS in a two-player, escape-the-room-style VR game set in a dark virtual world. The VR player is remote-controlled like a robot by a non-VR player, who uses GVS to alter the VR player’s walking trajectory. We also use GVS to induce the physical sensations of virtual motion and to mitigate motion sickness in VR.

Brain hacking has been a futurist fascination for decades. Turns out, we may be able to make it a reality as research explores the impact of GVS on everything from tactile sensation to memory.

Misha graduated in June 2018 from the MIT Media Lab where she worked in the Fluid Interfaces group with Prof Pattie Maes. Misha works in the area of human-computer interaction (HCI), specifically related to virtual, augmented and mixed reality. The goal of her work is to create systems that use the entire body for input and output and automatically adapt to each user’s unique state and context. Misha calls her concept perceptual engineering, i.e., immersive systems that alter the user’s perception (or more specifically the input signals to their perception) and influence or manipulate it in subtle ways. For example, they modify a user’s sense of balance or orientation, manipulate their visual attention and more, all without the user’s explicit awareness, and in order to assist or guide their interactive experience in an effortless way.

The systems Misha builds use the entire body for input and output, i.e., they can use movement, like walking, or a physiological signal, like breathing, as input, and can output signals that actuate the user’s vestibular system with electrical pulses, causing the individual to move or turn involuntarily. HCI up to now has relied upon deliberate, intentional usage, both for input (e.g., touch, voice, typing) and for output (interpreting what the system tells you, shows you, etc.). In contrast, Misha develops techniques and builds systems that do not require this deliberate, intentional user interface but are able to use the body as the interface for more implicit and natural interactions.

Misha’s perceptual engineering approach has been shown to increase the user’s sense of presence in VR/MR, provide novel ways to communicate between the user and the digital system using proprioception and other sensory modalities, and serve as a platform to question the boundaries of our sense of agency and trust.

What happens when AI becomes infinitely smarter than us—constantly upgrading itself at a speed beyond human comprehension? This is the Singularity, a moment where AI surpasses all limits, leaving humanity at a crossroads.
Elon Musk predicts superintelligent AI by 2029, while Ray Kurzweil envisions the Singularity by 2045. But if AI reaches this point, will it be our greatest breakthrough or our greatest threat?
The answer might change everything we know about the future.

Chapters:

00:00 — 01:15 Intro
01:15 — 03:41 What Is the Singularity Paradox?
03:41 — 06:19 How Will the Singularity Happen?
06:19 — 09:05 What Will the Singularity Look Like?
09:05 — 11:50 How Close Are We?
11:50 — 14:13 Challenges and Criticism

#AI #Singularity #ArtificialIntelligence #ElonMusk #RayKurzweil #FutureTech

The future of AI is here—and it’s running on human brain cells! In a groundbreaking development, scientists have created the first AI system powered by biological neurons, blurring the line between technology and biology. But what does this mean for the future of artificial intelligence, and how does it work?

This revolutionary AI, known as “Brainoware,” uses lab-grown human brain cells to perform complex tasks like speech recognition and decision-making. By combining the adaptability of biological neurons with the precision of AI algorithms, researchers have unlocked a new frontier in computing. But with this innovation comes ethical questions and concerns about the implications of merging human biology with machines.

In this video, we’ll explore how Brainoware works, its potential applications, and the challenges it faces. Could this be the key to creating truly intelligent machines? Or does it raise red flags about the ethical boundaries of AI research?

What is Brainoware, and how does it work? What are the benefits and risks of AI powered by human brain cells? How will this technology shape the future of AI? This video answers all these questions and more. Don’t miss the full story—watch until the end!

#ai
#artificialintelligence
#ainews

******************

Master AI avatars, video automation, AI graphics, and monetization 👉👉🔗 https://www.skool.com/aicontentlab/about 🚀 New content added monthly!

Scientists have created a groundbreaking AI that uses living human brain cells instead of traditional silicon chips, allowing it to learn and adapt faster than any existing artificial intelligence. Developed by Cortical Labs, this new technology, called Synthetic Biological Intelligence (SBI), combines human neurons and electrodes to create a self-learning system that could revolutionize drug discovery, robotics, and computing. The CL1 AI unit, unveiled in March 2025, operates with minimal energy, doesn’t require an external computer, and is available through Wetware-as-a-Service (WaaS), enabling researchers to run experiments on biological neural networks from anywhere in the world.

🔍 KEY TOPICS
Scientists create an AI using living human brain cells, redefining intelligence and learning.
Cortical Labs’ CL1 unit combines neurons and electrodes for faster, more efficient AI.
Breakthrough in Synthetic Biological Intelligence (SBI) with real-world applications in medicine, robotics, and computing.

WHAT’S INCLUDED
How human neurons power AI, enabling it to learn and adapt faster than any chip.
The revolutionary CL1 system, a self-contained AI unit that doesn’t need an external computer.
The potential impact of biological AI on drug discovery, robotics, and future technology.

📊 WHY IT MATTERS
This video explores how AI built with human neurons could reshape computing, making systems smarter, more energy-efficient, and capable of human-like learning, raising new possibilities and ethical debates.


Global optimization-based approaches such as basin hopping28,29,30,31, evolutionary algorithms32 and random structure search33 offer principled approaches to comprehensively navigating the ambiguity of the active phase. However, these methods usually rely on skillful parameter adjustment and predefined conditions, and face challenges in exploring the entire configuration space and in dealing with amorphous structures. Graph theory-based algorithms34,35,36,37 can enumerate configurations for a specific adsorbate coverage on a surface with graph isomorphism algorithms, even on an asymmetric one. Nevertheless, these methods can only study the adsorbate coverage effect on the surface because the graph representation is insensitive to three-dimensional information, making it unable to consider subsurface and bulk structure sampling. Other geometry-based methods38,39 have also been developed for determining surface adsorption sites but still face difficulties when dealing with non-uniform materials or with embedding sites in the subsurface.

Topology, independent of metrics or coordinates, presents a novel approach that could potentially offer a comprehensive traversal of structural complexity. Persistent homology, an emerging technique in the field of topological data analysis, bridges topology and real geometry by capturing geometric structures over various spatial scales through filtration and persistence40. By embedding geometric information into topological invariants, which are the properties of topological spaces that remain unchanged under specific continuous deformations, it allows the monitoring of the "birth," "death," and "persistence" of isolated components, loops, and cavities across all geometric scales using topological measurements. Topological persistence is usually represented by persistence barcodes, where different horizontal line segments or bars denote homology generators41. Persistent homology has been successfully employed in feature representation for machine learning42,43, molecular science44,45, materials science46,47,48,49,50,51,52,53,54,55, and computational biology56,57. These successful applications motivate us to explore its potential as a sampling algorithm, given its capability of characterizing material structures multidimensionally.
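
To make the filtration-and-persistence idea concrete, here is a minimal Python sketch that computes persistence pairs for a random 3D point cloud with the gudhi library. The library choice, the point cloud, and the parameters are illustrative assumptions; this is not the PH-SA algorithm introduced in the next paragraph.

```python
import numpy as np
import gudhi  # topological data analysis library (assumed available)

# Toy 3D point cloud standing in for atomic coordinates; not real structural data.
points = np.random.default_rng(0).uniform(size=(50, 3))

# Vietoris-Rips filtration: simplices appear as the length scale grows.
rips = gudhi.RipsComplex(points=points, max_edge_length=1.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# Persistence pairs (dimension, (birth, death)); H0 tracks connected components,
# H1 tracks loops. Long-lived bars correspond to robust topological features.
for dim, (birth, death) in simplex_tree.persistence():
    print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")
```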

In this work, we introduce a topology-based automatic active-phase exploration framework that enables thorough configuration sampling and efficient computation via machine-learned force fields (MLFFs). The core of this framework is a sampling algorithm (PH-SA) in which persistent homology analysis is leveraged to detect possible adsorption/embedding sites in space via a bottom-up approach. PH-SA enables the exploration of interactions between surface, subsurface and even bulk phases with active species without being limited by morphology, and can thus be applied to periodic and amorphous structures. MLFFs are then trained through transfer learning to enable rapid structural optimization of the sampled configurations. Based on the energetic information, a Pourbaix diagram is constructed to describe the response of the active phase to external environmental conditions. We validated the effectiveness of the framework with two examples: the formation of Pd hydrides with slab models and the oxidation of Pt clusters under electrochemical conditions. The structure evolution process of these two systems was elucidated by screening 50,000 and 100,000 possible configurations, respectively. The predicted phase diagrams with varying external potentials and their intricate roles in shaping the mechanisms of CO2 electroreduction and the oxygen reduction reaction are discussed, demonstrating close alignment with experimental observations. Our algorithm can be easily applied to other heterogeneous catalytic structures of interest and paves the way for the realization of automatic active-phase analysis under realistic conditions.
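
As a loose illustration of the last step, building a Pourbaix-style stability map from per-configuration energies, the sketch below ranks hypothetical Pd-hydride configurations by a generic computational-hydrogen-electrode free energy as the applied potential varies. The energies, phases, and conventions are placeholder assumptions, not the paper's data or its exact formulation.

```python
import math

# Hypothetical configurations from a sampling run. dE is the formation energy (eV)
# relative to the clean slab and 1/2 H2 per absorbed H; values are placeholders.
configs = {
    "clean Pd":     {"dE": 0.00, "n_H": 0},
    "surface H":    {"dE": -0.30, "n_H": 1},
    "subsurface H": {"dE": -0.45, "n_H": 2},
    "bulk hydride": {"dE": -0.50, "n_H": 4},
}

def free_energy(cfg, U, pH=0.0, kT=0.02585):
    """Generic CHE-style estimate: each absorbed H exchanges (H+ + e-), whose
    chemical potential shifts by eU + kT*ln(10)*pH relative to 1/2 H2."""
    return cfg["dE"] + cfg["n_H"] * (U + kT * math.log(10) * pH)

# Scan the applied potential and report the lowest-free-energy phase at each point.
for U in [u / 10 for u in range(-4, 5)]:
    stable = min(configs, key=lambda name: free_energy(configs[name], U))
    print(f"U = {U:+.1f} V: most stable phase is {stable}")
```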

The electrically readable complex dynamics of robust and scalable magnetic tunnel junctions (MTJs) offer promising opportunities for advancing neuromorphic computing. In this work, we present an MTJ design with a free layer and two polarizers capable of computing the sigmoidal activation function and its gradient at the device level. This design enables both feedforward and backpropagation computations within a single device, extending neuromorphic computing frameworks previously explored in the literature by introducing the ability to perform backpropagation directly in hardware. Our algorithm implementation reveals three key findings: (i) the small discrepancies between the MTJ-generated curves and the exact software-generated curves have a negligible impact on the performance of the backpropagation algorithm, (ii) the device implementation is highly robust to inter-device variation and noise, and (iii) the proposed method effectively supports transfer learning and knowledge distillation. To demonstrate this, we evaluated the performance of an edge computing network using weights from a software-trained model implemented with our MTJ design. The results show a minimal loss of accuracy of only 0.4% for the Fashion MNIST dataset and 1.7% for the CIFAR-100 dataset compared to the original software implementation. These results highlight the potential of our MTJ design for compact, hardware-based neural networks in edge computing applications, particularly for transfer learning.
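
Since the abstract's key point is that a single MTJ supplies both the sigmoid and its gradient, the following plain-numpy sketch marks where those two device readouts would enter a training loop. The functions are ideal software stand-ins for the device, and the tiny network and data are invented for illustration; this is not the authors' hardware model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mtj_sigmoid(x):
    """Stand-in for the first device readout: the sigmoidal activation."""
    return 1.0 / (1.0 + np.exp(-x))

def mtj_sigmoid_grad(x):
    """Stand-in for the second device readout: the activation's gradient."""
    s = mtj_sigmoid(x)
    return s * (1.0 - s)

# Toy data and a one-hidden-layer network trained with backpropagation.
X = rng.normal(size=(64, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

lr = 0.5
for _ in range(200):
    z1 = X @ W1
    h = mtj_sigmoid(z1)                      # feedforward uses readout 1
    z2 = h @ W2
    out = mtj_sigmoid(z2)
    err = out - y                            # gradient of 0.5 * squared error
    d2 = err * mtj_sigmoid_grad(z2)          # backpropagation uses readout 2
    d1 = (d2 @ W2.T) * mtj_sigmoid_grad(z1)
    W2 -= lr * h.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```

The abstract's finding (i), that small deviations between the device curves and the exact functions barely affect training, could be probed in a sketch like this by adding noise to the two stand-in functions.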