
EPFL researchers have discovered key “units” in large AI models that seem to be important for language, mirroring the brain’s language system. When these specific units were turned off, the models got much worse at language tasks.

Large language models (LLMs) are not only good at understanding and using language; they can also reason logically, solve problems, and some can even predict the thoughts, beliefs or emotions of the people they interact with.

Despite these impressive feats, we still don’t fully understand how LLMs work “under the hood,” particularly when it comes to how different units or modules perform different tasks. So, researchers in the NeuroAI Laboratory, part of both the School of Computer and Communication Sciences (IC) and the School of Life Sciences (SV), and the Natural Language Processing Laboratory (IC), wanted to find out whether LLMs have specialized units or modules that do specific jobs. This is inspired by networks that have been discovered in the human brain, such as the Language Network, the Multiple Demand Network and the Theory of Mind network.
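The core experimental move, turning specific units off and measuring how much performance drops, can be sketched in a few lines. This is a toy illustration of unit ablation, not the EPFL team's actual model or procedure; the network shape, unit indices, and variable names here are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one hidden layer mapping inputs to task outputs.
W_in = rng.normal(size=(8, 16))    # input -> 16 hidden units
W_out = rng.normal(size=(16, 4))   # hidden units -> task outputs

def forward(x, ablate=None):
    """Run the toy network, optionally zeroing ("lesioning") chosen hidden units."""
    h = np.tanh(x @ W_in)
    if ablate is not None:
        h = h.copy()
        h[..., ablate] = 0.0       # turn the selected units off
    return h @ W_out

x = rng.normal(size=(5, 8))        # a small batch of inputs
baseline = forward(x)
lesioned = forward(x, ablate=[2, 7, 11])

# The size of the change relative to baseline indicates how much
# the ablated units contributed, analogous to a lesion study.
effect = np.abs(baseline - lesioned).mean()
print(f"mean output change after ablation: {effect:.3f}")
```

In a real study the drop would be measured on held-out task performance (e.g. accuracy on language benchmarks) rather than raw output change, but the mechanism of zeroing a chosen set of units is the same.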

In the domain of artificial intelligence, human ingenuity has birthed entities capable of feats once relegated to science fiction. Yet within this triumph of creation resides a profound paradox: we have designed systems whose inner workings often elude our understanding. Like medieval alchemists who could transform substances without grasping the underlying chemistry, we stand before our algorithmic progeny with a similar mixture of wonder and bewilderment. This is the essence of the “black box” problem in AI — a philosophical and technical conundrum that cuts to the heart of our relationship with the machines we’ve created.

The term “black box” originates from systems theory, where it describes a device or system analyzed solely in terms of its inputs and outputs, with no knowledge of its internal workings. When applied to artificial intelligence, particularly to modern deep learning systems, the metaphor becomes startlingly apt. We feed these systems data, they produce results, but the transformative processes occurring between remain largely opaque. As Pedro Domingos (2015) eloquently states in his seminal work The Master Algorithm: “Machine learning is like farming. The machine learning expert is like a farmer who plants the seeds (the algorithm and the data), harvests the crop (the classifier), and sells it to consumers, without necessarily understanding the biological mechanisms of growth” (p. 78).

This agricultural metaphor points to a radical reconceptualization in how we create computational systems. Traditionally, software engineering has followed a constructivist approach — architects design systems by explicitly coding rules and behaviors. Yet modern AI systems, particularly neural networks, operate differently. Rather than being built piece by piece with predetermined functions, they develop their capabilities through exposure to data and feedback mechanisms. This observation led AI researcher Andrej Karpathy (2017) to assert that “neural networks are not ‘programmed’ in the traditional sense, but grown, trained, and evolved.”
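Karpathy's point, that capabilities emerge from data and feedback rather than explicit rules, can be made concrete with a minimal training loop. This is a generic gradient-descent sketch on a toy problem, assumed for illustration only; nothing about the slope or intercept is ever coded in, yet the parameters converge toward them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data the system is "exposed to": noisy points on the line y = 3x - 1.
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x - 1.0 + rng.normal(scale=0.05, size=200)

# No rule for the slope or intercept is written down anywhere;
# the parameters start random and are shaped purely by feedback.
w, b = rng.normal(), rng.normal()
lr = 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    w -= lr * (2 * err * x).mean()   # gradient of mean squared error w.r.t. w
    b -= lr * (2 * err).mean()       # gradient of mean squared error w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")
```

The same grow-by-feedback dynamic, scaled up from two parameters to billions, is what makes the resulting system's internals so hard to read off from its construction.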

Michael Levin is a scientist at Tufts University; his lab studies anatomical and behavioral decision-making at multiple scales of biological, artificial, and hybrid systems. He works at the intersection of developmental biology, artificial life, bioengineering, synthetic morphology, and cognitive science. Respective papers are linked below.

Round 1 Interview | What are Cognitive Light Cones?
Round 2 Interview | Agency, Attractors, & Observer-Dependent Computation in Biology & Beyond

Bioelectric Networks: The cognitive glue enabling evolutionary scaling from physiology to mind https://link.springer.com/article/10
Darwin’s Agential Materials: Evolutionary implications of multiscale competency in developmental biology https://link.springer.com/article/10
Biology, Buddhism, and AI: Care as the Driver of Intelligence https://www.mdpi.com/1099-4300/24/5/710


Engineers at RMIT University have invented a small “neuromorphic” device that detects hand movement, stores memories and processes information like a human brain, without the need for an external computer.

The findings are published in the journal Advanced Materials Technologies.

Team leader Professor Sumeet Walia said the innovation marked a step toward enabling instant visual processing in autonomous vehicles, advanced robotics and other next-generation applications.

We discuss Michael Levin’s paper “Self-Improvising Memory: A Perspective on Memories as Agential, Dynamically Reinterpreting Cognitive Glue.” Levin is a scientist at Tufts University; his lab studies anatomical and behavioral decision-making across biological, artificial, and hybrid systems. His work spans developmental biology, artificial life, bioengineering, synthetic morphology, and cognitive science.

❶ Memories as Agents
0:00 Introduction
1:40 2024 highlights from the Levin Lab
3:20 Stress sharing paper summary
6:15 Paradox of change: species persist, don’t evolve
7:20 Bow-tie architectures
10:00 Memories as messages from your past self
12:50 Polycomputing
16:45 Confabulation
17:55 What evidence supports the idea that memories are agential?
22:00 Thought experiment: entities from Earth’s core

❷ Information Patterns
31:30 Memory is not a filing cabinet
32:30 Are information patterns agential?
35:00 Caterpillar/butterfly and sea slug memory transfer
37:40 Bow-tie architectures are everywhere
43:20 Bottlenecks are “scary” for information

❸ Connections & Implications
45:30 Black holes/white holes as bow-ties (Lee Smolin)
47:20 What is confabulation? AI hallucinations
52:30 Gregg Henriques and self-justifying apes: all good agents are storytellers
54:20 Information telling stories: Joseph Campbell’s journey for a single cell
1:00:50 What comes next?

Works Cited
Self-Improvising Memory: A Perspective on Memories as Agential, Dynamically Reinterpreting Cognitive Glue https://www.mdpi.com/1099-4300/26/6/481
https://thoughtforms.life/suti-the-se…
Biohacking our way to health with robot cells | Michael Levin (Big Think 2023)
https://peregrinecr.com/

At the Artificiality Summit 2024, Michael Levin, distinguished professor of biology at Tufts University and associate at Harvard’s Wyss Institute, gave a lecture about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. This work has led to new approaches to regenerative medicine, cancer, and bioengineering, but also to new ways to understand evolution and embodied minds. He sketched out a space of possibilities, a freedom of embodiment, which facilitates imagining a hopeful future.