
The future of AI is here—and it’s running on human brain cells! In a groundbreaking development, scientists have created the first AI system powered by biological neurons, blurring the line between technology and biology. But what does this mean for the future of artificial intelligence, and how does it work?

This revolutionary AI, known as “Brainoware,” uses lab-grown human brain cells to perform complex tasks like speech recognition and decision-making. By combining the adaptability of biological neurons with the precision of AI algorithms, researchers have unlocked a new frontier in computing. But with this innovation comes ethical questions and concerns about the implications of merging human biology with machines.

In this video, we’ll explore how Brainoware works, its potential applications, and the challenges it faces. Could this be the key to creating truly intelligent machines? Or does it raise red flags about the ethical boundaries of AI research?

What is Brainoware, and how does it work? What are the benefits and risks of AI powered by human brain cells? How will this technology shape the future of AI? This video answers all these questions and more. Don’t miss the full story—watch until the end!

#ai #artificialintelligence #ainews

******************


Scientists have created a groundbreaking AI that uses living human brain cells instead of traditional silicon chips, allowing it to learn and adapt faster than any existing artificial intelligence. Developed by Cortical Labs, this new technology, called Synthetic Biological Intelligence (SBI), combines human neurons and electrodes to create a self-learning system that could revolutionize drug discovery, robotics, and computing. The CL1 AI unit, unveiled in March 2025, operates with minimal energy, doesn’t require an external computer, and is available through Wetware-as-a-Service (WaaS), enabling researchers to run experiments on biological neural networks from anywhere in the world.

🔍 KEY TOPICS
Scientists create an AI using living human brain cells, redefining intelligence and learning.
Cortical Labs’ CL1 unit combines neurons and electrodes for faster, more efficient AI.
Breakthrough in Synthetic Biological Intelligence (SBI) with real-world applications in medicine, robotics, and computing.

🎥 WHAT’S INCLUDED
How human neurons power AI, enabling it to learn and adapt faster than any chip.
The revolutionary CL1 system, a self-contained AI unit that doesn’t need an external computer.
The potential impact of biological AI on drug discovery, robotics, and future technology.

📊 WHY IT MATTERS
This video explores how AI built with human neurons could reshape computing, making systems smarter, more energy-efficient, and capable of human-like learning, while opening up new possibilities and ethical debates.



Global optimization-based approaches such as basin hopping28,29,30,31, evolutionary algorithms32 and random structure search33 offer principled ways to comprehensively navigate the ambiguity of the active phase. However, these methods usually rely on skillful parameter adjustments and predefined conditions, and they face challenges in exploring the entire configuration space and in dealing with amorphous structures. Graph theory-based algorithms34,35,36,37 can enumerate configurations for a specific adsorbate coverage on a surface with graph isomorphism algorithms, even on an asymmetric one. Nevertheless, these methods can only study the adsorbate coverage effect on the surface, because the graph representation is insensitive to three-dimensional information and therefore cannot consider subsurface and bulk structure sampling. Other geometry-based methods38,39 have also been developed for determining surface adsorption sites but still face difficulties when dealing with non-uniform materials or with sites embedded in the subsurface.

Topology, independent of metrics or coordinates, presents a novel approach that could potentially offer a comprehensive traversal of structural complexity. Persistent homology, an emerging technique in the field of topological data analysis, bridges topology and real geometry by capturing geometric structures over various spatial scales through filtration and persistence40. By embedding geometric information into topological invariants, which are the properties of topological spaces that remain unchanged under specific continuous deformations, it allows the monitoring of the “birth,” “death,” and “persistence” of isolated components, loops, and cavities across all geometric scales using topological measurements. Topological persistence is usually represented by persistence barcodes, where different horizontal line segments or bars denote homology generators41. Persistent homology has been successfully employed for feature representation in machine learning42,43, molecular science44,45, materials science46,47,48,49,50,51,52,53,54,55, and computational biology56,57. These successful applications motivate us to explore its potential as a sampling algorithm, given its capability to characterize material structures multidimensionally.
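As an illustration of the persistence computation described above, the minimal sketch below builds a Vietoris-Rips filtration over a toy point cloud and reports the birth and death of connected components and loops. It assumes the open-source gudhi package and a randomly generated point set standing in for atomic coordinates; it is not the PH-SA algorithm itself, only a demonstration of the underlying persistent homology machinery.

    # Minimal persistent homology demo (assumes the `gudhi` package is installed).
    # Illustrative only; this is not the PH-SA sampling algorithm.
    import numpy as np
    import gudhi

    rng = np.random.default_rng(0)
    # Toy "structure": 40 random points in a 3D box standing in for atomic positions.
    points = rng.uniform(0.0, 5.0, size=(40, 3))

    # Build a Vietoris-Rips filtration up to a chosen edge length.
    rips = gudhi.RipsComplex(points=points, max_edge_length=3.0)
    simplex_tree = rips.create_simplex_tree(max_dimension=2)

    # Each entry is (dimension, (birth, death)): dimension 0 = components, 1 = loops.
    for dim, (birth, death) in simplex_tree.persistence():
        print(f"H{dim}: born at {birth:.2f}, dies at {death:.2f}")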

In this work, we introduce a topology-based automatic active phase exploration framework that enables thorough configuration sampling and efficient computation via machine-learned force fields (MLFFs). The core of this framework is a sampling algorithm (PH-SA) in which persistent homology analysis is leveraged to detect possible adsorption/embedding sites in space via a bottom-up approach. The PH-SA enables the exploration of interactions between surface, subsurface and even bulk phases with active species without being limited by morphology, and it can therefore be applied to periodic and amorphous structures. MLFFs are then trained through transfer learning to enable rapid structural optimization of the sampled configurations. Based on the energetic information, a Pourbaix diagram is constructed to describe the response of the active phase to external environmental conditions. We validated the effectiveness of the framework with two examples: the formation of Pd hydrides with slab models and the oxidation of Pt clusters under electrochemical conditions. The structural evolution of these two systems was elucidated by screening 50,000 and 100,000 possible configurations, respectively. The predicted phase diagrams under varying external potentials, and their intricate roles in shaping the mechanisms of CO2 electroreduction and the oxygen reduction reaction, were discussed and show close alignment with experimental observations. Our algorithm can be readily applied to other heterogeneous catalytic structures of interest and paves the way for automatic active phase analysis under realistic conditions.
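To make the final step concrete, here is a minimal sketch of how a Pourbaix-style stability map can be read off from energetic information: each candidate phase is assigned a grand-canonical free energy that shifts linearly with the applied potential, and the phase with the lowest value at each potential is selected. The phase labels, formation energies and electron counts below are invented placeholders, not values from the paper.

    # Toy Pourbaix-style analysis: pick the lowest grand-canonical free energy
    # phase as a function of applied potential U. All numbers are hypothetical.
    import numpy as np

    # (label, formation energy in eV, number of electrons transferred)
    phases = [
        ("clean surface", 0.00, 0),
        ("1/4 ML O*",     0.80, 2),
        ("1/2 ML O*",     2.10, 4),
    ]

    def free_energy(e_form, n_electrons, potential):
        # Computational-hydrogen-electrode-like linear shift: G(U) = E_form - n*e*U.
        return e_form - n_electrons * potential

    for U in np.linspace(0.0, 1.5, 7):  # potential in V
        stable = min(phases, key=lambda p: free_energy(p[1], p[2], U))
        print(f"U = {U:.2f} V -> most stable phase: {stable[0]}")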

The electrically readable complex dynamics of robust and scalable magnetic tunnel junctions (MTJs) offer promising opportunities for advancing neuromorphic computing. In this work, we present an MTJ design with a free layer and two polarizers capable of computing the sigmoidal activation function and its gradient at the device level. This design enables both feedforward and backpropagation computations within a single device, extending neuromorphic computing frameworks previously explored in the literature by introducing the ability to perform backpropagation directly in hardware. Our algorithm implementation reveals three key findings: (i) the small discrepancies between the MTJ-generated curves and the exact software-generated curves have a negligible impact on the performance of the backpropagation algorithm, (ii) the device implementation is highly robust to inter-device variation and noise, and (iii) the proposed method effectively supports transfer learning and knowledge distillation. To demonstrate this, we evaluated the performance of an edge computing network using weights from a software-trained model implemented with our MTJ design. The results show a minimal loss of accuracy of only 0.4% for the Fashion MNIST dataset and 1.7% for the CIFAR-100 dataset compared to the original software implementation. These results highlight the potential of our MTJ design for compact, hardware-based neural networks in edge computing applications, particularly for transfer learning.
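The device-level idea above rests on a standard identity: the sigmoid's gradient can be recovered from its own output, sigma'(x) = sigma(x)(1 - sigma(x)), so a device that produces the sigmoid curve can also supply the quantity needed for backpropagation. The NumPy sketch below trains a tiny one-layer classifier using that identity and adds a small perturbation to the activation to loosely mimic device discrepancies; the data, network size and noise level are invented for illustration and are not taken from the paper.

    # Tiny sigmoid classifier trained with backpropagation (NumPy only).
    # The additive noise on the activation loosely mimics device-level
    # discrepancies; all sizes and values are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))                       # toy inputs
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # toy binary labels

    w, b = np.zeros(4), 0.0
    lr, noise_scale = 0.5, 0.01

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(200):
        z = X @ w + b
        a = sigmoid(z) + rng.normal(scale=noise_scale, size=z.shape)  # "device" output
        grad_a = a - y                     # dLoss/da for a squared-error loss
        grad_z = grad_a * a * (1.0 - a)    # chain rule: multiply by sigma'(z) = a*(1 - a)
        w -= lr * X.T @ grad_z / len(y)
        b -= lr * grad_z.mean()

    accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
    print(f"training accuracy with noisy activation: {accuracy:.3f}")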

In just over three years since its launch, NASA’s James Webb Space Telescope (JWST) has generated significant and unprecedented insights into the far reaches of space, and a new study by a Kansas State University researcher provides one of the simplest and most puzzling observations of the deep universe yet.

In images of the deep universe taken by the James Webb Space Telescope Advanced Deep Extragalactic Survey (JADES), a clear majority of the galaxies rotate in the same direction, according to research by Lior Shamir, associate professor of computer science at the Carl R. Ice College of Engineering. About two thirds of the galaxies rotate clockwise, while only about a third rotate counterclockwise.

The study, published in Monthly Notices of the Royal Astronomical Society, was done with 263 galaxies in the JADES field that were clear enough to identify their direction of rotation.
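To illustrate why a roughly two-thirds/one-third split in a sample of 263 galaxies is considered so surprising, the short sketch below runs a two-sided binomial test against the 50/50 distribution of rotation directions one would naively expect. The exact count is assumed for illustration (roughly two thirds of 263) and is not taken from the paper.

    # How unlikely is a ~2/3 vs ~1/3 split among 263 galaxies if rotation
    # directions were really 50/50? (The count below is an assumed illustration.)
    from scipy.stats import binomtest

    n_total = 263
    n_clockwise = 175   # roughly two thirds, assumed for illustration

    result = binomtest(n_clockwise, n_total, p=0.5, alternative="two-sided")
    print(f"two-sided p-value: {result.pvalue:.2e}")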

A Kansas State University engineer recently published results from an observational study in support of a century-old theory that directly challenges the validity of the Big Bang theory.

Lior Shamir, associate professor of computer science, used imaging from three telescopes and more than 30,000 galaxies to measure the redshift of galaxies as a function of their distance from Earth. Redshift is the change in the frequency of the light waves a galaxy emits, which astronomers use to gauge the galaxy's speed.
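For readers unfamiliar with the quantity being measured, here is a small worked example using the standard definitions: redshift z = (lambda_observed - lambda_emitted) / lambda_emitted, and for small z the recession velocity is approximately v = c * z. The wavelengths below are arbitrary illustrative values, not measurements from the study.

    # Worked redshift example with the standard formulas; wavelengths are illustrative.
    C_KM_S = 299_792.458          # speed of light in km/s

    lambda_emitted = 656.3        # H-alpha rest wavelength in nm
    lambda_observed = 672.0       # hypothetical observed wavelength in nm

    z = (lambda_observed - lambda_emitted) / lambda_emitted
    v_approx = C_KM_S * z         # good approximation for small z
    print(f"z = {z:.4f}, recession velocity ~ {v_approx:.0f} km/s")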

Shamir’s findings lend support to the century-old “tired light” theory instead of the Big Bang. The findings are published in the journal Particles.

It is a deep question, from deep in our history: when did human language as we know it emerge? A new survey of genomic evidence suggests our unique language capacity was present at least 135,000 years ago. Subsequently, language might have entered social use 100,000 years ago.

Our species, Homo sapiens, is about 230,000 years old. Estimates of when language originated vary widely, based on different forms of evidence, from fossils to cultural artifacts. The authors of the new analysis took a different approach. They reasoned that since all human languages likely have a common origin, as the researchers strongly think, the key question is how far back in time regional groups began spreading around the world.

“The logic is very simple,” says Shigeru Miyagawa, an MIT professor and co-author of a new paper summarizing the results.

A team of researchers at quantum computer maker D-Wave, working with an international team of physicists and engineers, claims that its latest quantum processor has been used to run a quantum simulation faster than could be done with a classical computer.

In their paper published in the journal Science, the group describes how they ran a quantum version of a mathematical approximation of how matter behaves when it changes state, such as from a gas to a liquid, in a way that they claim would be nearly impossible to carry out on a traditional computer.

Over the past several years, D-Wave has been working on developing quantum annealers, which are a subtype of quantum computer created to solve very specific types of problems. Notably, landmark claims made by researchers at the company have at times been met with skepticism by others in the field.
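As background on the "very specific types of problems" quantum annealers target, the sketch below brute-forces the ground state of a tiny Ising model, the kind of energy-minimization problem an annealer is built to sample. The couplings and fields are invented toy values, and brute force only works for a handful of spins; this is background illustration, not a reconstruction of the simulation reported in Science.

    # Brute-force ground state of a tiny Ising model, the class of optimization
    # problem quantum annealers are designed for. All couplings are toy values.
    from itertools import product

    J = {(0, 1): -1.0, (1, 2): 0.5, (0, 2): -0.7}   # pairwise couplings (toy)
    h = {0: 0.2, 1: -0.1, 2: 0.0}                   # local fields (toy)

    def energy(spins):
        # E(s) = sum_ij J_ij * s_i * s_j + sum_i h_i * s_i, with s_i in {-1, +1}
        pair = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
        local = sum(hi * spins[i] for i, hi in h.items())
        return pair + local

    best = min(product([-1, 1], repeat=3), key=energy)
    print(f"ground state spins: {best}, energy: {energy(best):.2f}")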