
Rethinking where language comes from: Framework reveals complex interplay of biology and culture

A new study challenges the idea that language stems from a single evolutionary root. Instead, it proposes that our ability to communicate evolved through the interaction of biology and culture, and involves multiple capacities, each with different evolutionary histories. The framework, published in Science, unites discoveries across disciplines to explain how the abilities to learn to speak, develop grammar, and share meaning converged to create complex communication.

For centuries, philosophers and scientists have wrestled with understanding how human language came about. Language defines us as a species, yet its origins have remained a mystery. In a remarkable international collaboration, 10 experts from different disciplines present a unified framework to address this enduring puzzle, harnessing powerful new methods and insights from their respective scientific domains.

“Crucially, our goal was not to come up with our own particular explanation of language evolution,” says first author Inbal Arnon. “Instead, we wanted to show how multifaceted and biocultural perspectives, combined with newly emerging sources of data, can shed new light on old questions.”

From light to logic: First complete logic gate achieved in soft material using light alone

Researchers from McMaster University and the University of Pittsburgh have created the first functionally complete logic gate—a NAND gate (short for “NOT AND”)—in a soft material using only beams of visible light. The discovery, published in Nature Communications, marks a significant advance in the field of materials that compute, in which materials themselves process information without traditional electronic circuitry.

“This project has been part of my scientific journey for over a decade,” said first author Fariha Mahmood, who began studying the gels as an undergraduate researcher at McMaster and is now pursuing postdoctoral research at the University of Cambridge. “To see these materials not only respond to light but also perform a logic operation feels like watching the material ‘think.’ It opens the door to soft systems making decisions on their own.”

Mahmood is joined by authors Anna C. Balazs, distinguished professor of chemical and petroleum engineering, and Victor V. Yashin, research assistant professor at Pitt’s Swanson School of Engineering; and corresponding author Kalaichelvi Saravanamuttu, professor of chemistry and chemical biology at McMaster.
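
For readers unfamiliar with the terminology, “functionally complete” means that every other Boolean operation can be composed from NAND gates alone. The short Python sketch below illustrates that property in the abstract; it models only the logic, not the photochemical mechanism of the light-responsive gels described in the paper.

```python
# Minimal illustration of NAND's functional completeness:
# NOT, AND, and OR built from NAND alone. This is abstract
# Boolean logic, not a model of the hydrogel device itself.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    # NOT a == a NAND a
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # AND is NAND followed by NOT
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # OR via De Morgan: a OR b == (NOT a) NAND (NOT b)
    return nand(not_(a), not_(b))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"a={a!s:5} b={b!s:5} "
                  f"NAND={nand(a, b)!s:5} AND={and_(a, b)!s:5} OR={or_(a, b)!s:5}")
```

Because any circuit can be reduced to NAND gates, demonstrating a NAND operation in the gel is what makes the material, in principle, capable of general-purpose logic.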

Scientists Overturn 20 Years of Textbook Biology With Stunning Discovery About Cell Division

Scientists have uncovered an unexpected function for a crucial protein involved in cell division. Reported in two consecutive publications, the finding challenges long-accepted models and standard descriptions found in biology textbooks. The work was carried out by researchers at the Ruđer Bošković Institute (RBI) in Zagreb.

The Intelligence Foundation Model Could Be The Bridge To Human Level AI

Borui Cai and Yao Zhao from Deakin University (Australia) presented a concept that they believe will bridge the gap between modern chatbots and general-purpose AI. Their proposed “Intelligence Foundation Model” (IFM) shifts the focus of AI training from merely learning surface-level data patterns to mastering the universal mechanisms of intelligence itself. By utilizing a biologically inspired “State Neural Network” architecture and a “Neuron Output Prediction” learning objective, the framework is designed to mimic the collective dynamics of biological brains and internalize how information is processed over time. This approach aims to overcome the reasoning limitations of current Large Language Models, offering a scalable path toward true Artificial General Intelligence (AGI) and theoretically laying the groundwork for the future convergence of biological and digital minds.


The Intelligence Foundation Model represents a bold new proposal in the quest to build machines that can truly think. We currently live in an era dominated by Large Language Models like ChatGPT and Gemini. These systems are incredibly impressive feats of engineering that can write poetry, solve coding errors, and summarize history. However, despite their fluency, they often lack the fundamental spark of what we consider true intelligence.

They are brilliant mimics that predict statistical patterns in text but do not actually understand the world or learn from it in real-time. A new research paper suggests that to get to the next level, we need to stop modeling language and start modeling the brain itself.

Borui Cai and Yao Zhao have introduced a concept they believe will bridge the gap between today’s chatbots and Artificial General Intelligence. Published in a preprint on arXiv, their research argues that existing foundation models suffer from severe limitations because they specialize in specific domains like vision or text. While a chatbot can tell you what a bicycle is, it does not understand the physics of riding one in the way a human does.
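
To make the proposal more concrete, here is a hedged, minimal Python sketch of what a recurrent “state” network trained with a neuron-output-prediction objective could look like. It illustrates the general idea only, not the authors’ architecture: the module names, shapes, hyperparameters, and training loop below are assumptions, and the “recorded” neural activity is a random placeholder.

```python
# Hedged sketch (not the authors' implementation): a recurrent state
# network whose self-supervised objective is to predict the next time
# step of neuron activity from an internal collective state.

import torch
import torch.nn as nn

class StateNetworkSketch(nn.Module):
    def __init__(self, n_neurons: int = 64, state_dim: int = 128):
        super().__init__()
        self.input_map = nn.Linear(n_neurons, state_dim)   # observed activity -> state update
        self.recurrent = nn.Linear(state_dim, state_dim)   # collective state dynamics
        self.readout = nn.Linear(state_dim, n_neurons)     # state -> predicted next outputs

    def step(self, state, activity):
        # Update the shared state from its own dynamics plus new observations.
        state = torch.tanh(self.recurrent(state) + self.input_map(activity))
        return state, self.readout(state)

model = StateNetworkSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "recorded" activity: 8 sequences, 50 time steps, 64 neurons.
activity = torch.randn(8, 50, 64)

for epoch in range(10):
    state = torch.zeros(8, 128)
    loss = 0.0
    for t in range(activity.shape[1] - 1):
        state, prediction = model.step(state, activity[:, t])
        # Neuron output prediction: match the next observed time step.
        loss = loss + nn.functional.mse_loss(prediction, activity[:, t + 1])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is the objective: rather than predicting the next token of text, the model is asked to predict how a population of neurons responds over time, which is what the paper argues lets a system internalize the mechanisms of information processing rather than surface statistics.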

Asimov Press (@asimovpress)

We just released a curated list of 125+ essays about biology and science. These articles cover pharmaceuticals, the history of molecular biology, timeless arguments and theories, and more. All of them inspired or taught or challenged us to think more deeply.

Check it out here on Substack or on our custom website: https://read.asimov.com

Michael Levin: Unconventional Embodiments: model systems… (ECSU OIST)

YouTube caption:

On 11 December 2025, Michael Levin gave a talk as part of the ECogS Conference hosted by the Embodied Cognitive Science Unit at OIST.

Title: Unconventional Embodiments: model systems and strategies for addressing mind-blindness.

Abstract: One of the most salient aspects of any agent’s environment is the question of how many, what kind, and what degree of agency exists in it. It is as relevant to biological organisms as to robots in human environments. It is also critical for scientists, philosophers, and engineers, as well as for human societies which will increasingly contain modified, synthetic, and hybrid beings of every possible description. In this talk I will argue that our evolutionary history has left us with significant mind-blindness, which makes it difficult for us to recognize minds of unfamiliar scales, problem spaces, or embodiments. I will describe our lab’s work to develop conceptual tools for recognizing and communicating with diverse intelligences. I will also present recent data from our new synthetic proto-organisms, in which we test those ideas by creating and studying the behavioral properties of beings who have not been specifically selected for them. I will conclude the talk with a speculative idea about the latent space from which novel intrinsic motivations ingress into physical, biological, and computational systems.

Youth with mental health conditions share strikingly similar brain changes, regardless of diagnosis

An international study—the largest of its kind—has uncovered similar structural changes in the brains of young people diagnosed with anxiety disorders, depression, ADHD and conduct disorder, offering new insights into the biological roots of mental health conditions in children and young people.

Led by Dr. Sophie Townend, a researcher in the Department of Psychology at the University of Bath, the study, published today in the journal Biological Psychiatry, analyzed data from almost 9,000 children and adolescents—around half with a diagnosed mental health condition—to identify both shared and disorder-specific alterations in brain structure across four of the most common psychiatric disorders in childhood and adolescence.

Among several key findings, the researchers identified common brain changes across all four disorders—notably, a reduced surface area in regions critical for processing emotions, responding to threats and maintaining awareness of bodily states.

Brain organoid pioneers fear inflated claims about biocomputing could backfire

For the brain organoids in Lena Smirnova’s lab at Johns Hopkins University, there comes a time in their short lives when they must graduate from the cozy bath of the bioreactor, leave the warm, salty broth behind, and be plopped onto a silicon chip laced with microelectrodes. From there, these tiny white spheres of human tissue can simultaneously send and receive electrical signals that, once decoded by a computer, will show how the cells inside them are communicating with each other as they respond to their new environments.

More and more, it looks like these miniature lab-grown brain models are able to do things that resemble the biological building blocks of learning and memory. That’s what Smirnova and her colleagues reported earlier this year. It was a step toward establishing something she and her husband and collaborator, Thomas Hartung, are calling “organoid intelligence.”

Another application would be to leverage those functions to build biocomputers — organoid-machine hybrids that do the work of the systems powering today’s AI boom, but without all the environmental carnage. The idea is to harness some fraction of the human brain’s stunning information-processing superefficiencies in place of building more water-sucking, electricity-hogging, supercomputing data centers.

Despite widespread skepticism, it’s an idea that’s started to gain some traction. Both the National Science Foundation and DARPA have invested millions of dollars in organoid-based biocomputing in recent years. And there are a handful of companies claiming to have built cell-based systems already capable of some form of intelligence. But to the scientists who first forged the field of brain organoids to study psychiatric and neurodevelopmental disorders and find new ways to treat them, this has all come as a rather unwelcome development.

At a meeting last week at the Asilomar conference center in California, researchers, ethicists, and legal experts gathered to discuss the ethical and social issues surrounding human neural organoids, which fall outside of existing regulatory structures for research on humans or animals. Much of the conversation circled around how and where the field might set limits for itself, which often came back to the question of how to tell when lab-cultured cellular constructs have started to develop sentience, consciousness, or other higher-order properties widely regarded as carrying moral weight.
