
Scientists Overturn 20 Years of Textbook Biology With Stunning Discovery About Cell Division

Scientists have uncovered an unexpected function for a crucial protein involved in cell division. Reported in two consecutive publications by researchers at the Ruđer Bošković Institute (RBI) in Zagreb, the finding challenges long-accepted models and standard descriptions found in biology textbooks.

The Intelligence Foundation Model Could Be The Bridge To Human Level AI

Borui Cai and Yao Zhao from Deakin University (Australia) presented a concept that they believe will bridge the gap between modern chatbots and general-purpose AI. Their proposed “Intelligence Foundation Model” (IFM) shifts the focus of AI training from merely learning surface-level data patterns to mastering the universal mechanisms of intelligence itself. By utilizing a biologically inspired “State Neural Network” architecture and a “Neuron Output Prediction” learning objective, the framework is designed to mimic the collective dynamics of biological brains and internalize how information is processed over time. This approach aims to overcome the reasoning limitations of current Large Language Models, offering a scalable path toward true Artificial General Intelligence (AGI) and theoretically laying the groundwork for the future convergence of biological and digital minds.


The Intelligence Foundation Model represents a bold new proposal in the quest to build machines that can truly think. We currently live in an era dominated by Large Language Models like ChatGPT and Gemini. These systems are impressive feats of engineering that can write poetry, fix coding errors, and summarize history. However, despite their fluency, they often lack the fundamental spark of what we consider true intelligence.

They are brilliant mimics that predict statistical patterns in text but do not actually understand the world or learn from it in real-time. A new research paper suggests that to get to the next level, we need to stop modeling language and start modeling the brain itself.

Borui Cai and Yao Zhao have introduced a concept they believe will bridge the gap between today’s chatbots and Artificial General Intelligence. Published in a preprint on arXiv, their research argues that existing foundation models suffer from severe limitations because they specialize in specific domains like vision or text. While a chatbot can tell you what a bicycle is, it does not understand the physics of riding one in the way a human does.
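The paper's “State Neural Network” and “Neuron Output Prediction” are described only at a high level, so the following is a minimal toy sketch of the general idea, not the authors' implementation: a recurrent population of neurons evolves over time, and the learning objective is to predict each neuron's next output from the current population state. The network size, dynamics, and the linear readout used for prediction are all illustrative assumptions.

```python
import numpy as np

# Toy "state neural network": a recurrent population whose collective state
# evolves over time. The "neuron output prediction" objective is modeled as
# learning to predict every neuron's next output from the current state.
# All architectural choices here are illustrative assumptions.
rng = np.random.default_rng(0)
N = 32          # neurons in the population
T = 200         # time steps to simulate
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # recurrent weights

def step(state):
    """One tick of collective dynamics: tanh of the recurrent drive."""
    return np.tanh(W @ state)

# Generate a trajectory of population activity.
states = [rng.normal(size=N)]
for _ in range(T):
    states.append(step(states[-1]))
X = np.stack(states[:-1])   # inputs: population state at time t
Y = np.stack(states[1:])    # targets: every neuron's output at time t+1

# Fit a linear readout to predict next-step outputs (ridge regression).
lam = 1e-3
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)
mse = float(np.mean((X @ W_hat - Y) ** 2))
print(f"next-step neuron-output prediction MSE: {mse:.4f}")
```

The contrast with next-token prediction is the point: here the training signal is the future state of the whole population, so what gets internalized is the system's dynamics rather than surface statistics of a text corpus.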

Asimov Press (@asimovpress)

We just released a curated list of 125+ essays about biology and science. These articles cover pharmaceuticals, the history of molecular biology, timeless arguments and theories, and more. All of them inspired or taught or challenged us to think more deeply.

Check it out here on Substack or on our custom website: https://read.asimov.com

Michael Levin: Unconventional Embodiments: model systems… (ECSU OIST)

From the YouTube caption:

On 11 December 2025, Michael Levin gave a talk as part of the ECogS Conference hosted by the Embodied Cognitive Science Unit at OIST.

Title: Unconventional Embodiments: model systems and strategies for addressing mind-blindness.

Abstract: One of the most salient aspects of any agent’s environment is the question of how many, what kind, and what degree of agency exists in it. It is as relevant to biological organisms as to robots in human environments. It is also critical for scientists, philosophers, and engineers, as well as for human societies which will increasingly contain modified, synthetic, and hybrid beings of every possible description. In this talk I will argue that our evolutionary history has left us with significant mind-blindness, which makes it difficult for us to recognize minds of unfamiliar scales, problem spaces, or embodiments. I will describe our lab’s work to develop conceptual tools for recognizing and communicating with diverse intelligences. I will also present recent data from our new synthetic proto-organisms, in which we test those ideas by creating and studying the behavioral properties of beings who have not been specifically selected for them. I will conclude the talk with a speculative idea about the latent space from which novel intrinsic motivations ingress into physical, biological, and computational systems.

Youth with mental health conditions share strikingly similar brain changes, regardless of diagnosis

An international study—the largest of its kind—has uncovered similar structural changes in the brains of young people diagnosed with anxiety disorders, depression, ADHD and conduct disorder, offering new insights into the biological roots of mental health conditions in children and young people.

Led by Dr. Sophie Townend, a researcher in the Department of Psychology at the University of Bath, the study, published today in the journal Biological Psychiatry, analyzed data from almost 9,000 children and adolescents (around half with a diagnosed mental health condition) to identify both shared and disorder-specific alterations in brain structure across four of the most common psychiatric disorders in childhood and adolescence.

Among several key findings, the researchers identified common brain changes across all four disorders—notably, a reduced surface area in regions critical for processing emotions, responding to threats and maintaining awareness of bodily states.

Brain organoid pioneers fear inflated claims about biocomputing could backfire

For the brain organoids in Lena Smirnova’s lab at Johns Hopkins University, there comes a time in their short lives when they must graduate from the cozy bath of the bioreactor, leave the warm, salty broth behind, and be plopped onto a silicon chip laced with microelectrodes. From there, these tiny white spheres of human tissue can simultaneously send and receive electrical signals that, once decoded by a computer, will show how the cells inside them are communicating with each other as they respond to their new environments.
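As a rough illustration of the decoding step described above, turning raw microelectrode voltages into discrete neural events, here is a toy threshold-based spike detector. The sampling rate, noise level, spike amplitude, and threshold are all invented for the example and are not taken from the Smirnova lab's actual recording pipeline.

```python
import numpy as np

# Toy spike detection on a simulated microelectrode trace. Real organoid
# recordings are reduced to spike trains in a similar spirit: threshold
# each channel's voltage and time-stamp the crossings. All numbers here
# are invented for illustration.
rng = np.random.default_rng(42)
fs = 10_000                       # sampling rate (Hz), assumed
trace = rng.normal(0.0, 5.0, fs)  # one second of ~5 uV baseline noise
true_spikes = [1200, 4800, 7300]  # ground-truth spike onsets (samples)
for s in true_spikes:
    trace[s:s + 10] -= 100.0      # stereotyped negative deflection

threshold = -40.0                 # uV, well below the noise floor
# A spike onset is counted where the trace first drops below threshold.
onsets = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold)) + 1
print("detected spike onsets:", onsets.tolist())
```

Thresholding is only the first stage of real spike sorting; actual pipelines add filtering and waveform clustering to assign events to individual neurons before any claims about communication between cells can be made.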

More and more, it looks like these miniature lab-grown brain models are able to do things that resemble the biological building blocks of learning and memory. That’s what Smirnova and her colleagues reported earlier this year. It was a step toward establishing something she and her husband and collaborator, Thomas Hartung, are calling “organoid intelligence.”

One ambition is to leverage those functions to build biocomputers: organoid-machine hybrids that do the work of the systems powering today’s AI boom, but without all the environmental carnage. The idea is to harness some fraction of the human brain’s stunning information-processing efficiency in place of building more water-sucking, electricity-hogging, supercomputing data centers.

Despite widespread skepticism, it’s an idea that’s started to gain some traction. Both the National Science Foundation and DARPA have invested millions of dollars in organoid-based biocomputing in recent years. And there are a handful of companies claiming to have built cell-based systems already capable of some form of intelligence. But to the scientists who first forged the field of brain organoids to study psychiatric and neurodevelopmental disorders and find new ways to treat them, this has all come as a rather unwelcome development.

At a meeting last week at the Asilomar conference center in California, researchers, ethicists, and legal experts gathered to discuss the ethical and social issues surrounding human neural organoids, which fall outside of existing regulatory structures for research on humans or animals. Much of the conversation circled around how and where the field might set limits for itself, which often came back to the question of how to tell when lab-cultured cellular constructs have started to develop sentience, consciousness, or other higher-order properties widely regarded as carrying moral weight.

How to See the Dead

Looking at the bench and the readings, he concluded that the previous night’s firmware update had introduced a timing mismatch. The wires hadn’t burnt out, but the clock that told them when to fire had been off by a microsecond, so the expected voltage response never lined up. He suspected half the channels had dropped out, even though the hardware itself wasn’t damaged. Fifteen minutes and a simple firmware rollback later, everything worked perfectly.

Now, Lyre and I swapped the saline for neuron cultures to check whether the wires could trigger and record real biological activity. While we ran those confirmations, Aux fine-tuned his AI encoder and processed April’s data.

We were finally ready to test the integrated system, without yet risking its insertion into April’s brain. We built something we only half-jokingly called a “phantom cortex,” a benchtop stand-in: a synthetic cortical sheet of cultured neurons on a chip designed to act as April’s visual cortex. On one side, we put a lab-grown retinal implant that carried live sensory input. On the other, Aux’s playback device pushed reconstructed memories. The phantom cortex’s visual field was rendered on a lab monitor so that we could assess the pattern projections. The phantom cortex rig buzzed faintly in the background, gelled neuron sheets twitching under the microscope with each ripple of charge.

Speaking more than one language may help the brain stay younger

Speaking more than one language may slow the brain’s aging and lower the risk of accelerated biological decline.

In a new study, researchers analyzed the Biobehavioral Age Gap (BAG), the difference between a person’s biological age (estimated from health and lifestyle data) and their chronological age, in over 80,000 participants aged 51–90 across 27 European countries. They found that people who speak only one language are twice as likely to experience accelerated aging as multilingual individuals.

Researchers suggest that the protective effect might arise from the constant mental effort required to manage more than one language. The findings are published in Nature Aging.
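The age-gap logic can be sketched in a few lines: fit a model that predicts age from health and lifestyle measures, treat the prediction as “biological age,” and call its difference from chronological age the BAG. The synthetic data, the two made-up features, and the plain least-squares model below are all assumptions for illustration; the study’s actual pipeline is far richer.

```python
import numpy as np

# Illustrative Biobehavioral Age Gap (BAG) computation on synthetic data.
# biological age = a model's age prediction from health/lifestyle features;
# BAG = biological age - chronological age (positive = aging "faster").
rng = np.random.default_rng(1)
n = 1000
chrono = rng.uniform(51.0, 90.0, n)    # ages 51-90, as in the study cohort

# Two made-up health features that track age imperfectly.
features = np.column_stack([
    chrono + rng.normal(0.0, 5.0, n),  # e.g. a cardiovascular marker
    chrono + rng.normal(0.0, 8.0, n),  # e.g. a metabolic marker
])

# Least-squares fit of age from the features (with an intercept column).
A = np.column_stack([features, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, chrono, rcond=None)
bio_age = A @ coef
bag = bio_age - chrono
print(f"mean BAG: {bag.mean():+.3f} years, spread (sd): {bag.std():.2f}")
```

By construction the mean BAG is zero in the fitting sample; what the study compares is how individuals’ gaps relate to traits such as speaking one language versus several.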
