
Cell types: encoding the brain’s BIOS

Excellent Substack writeup by Patrick Mineault on how cell types may specify innate behaviors, and why mapping the brain regions specialized for steering innate behaviors (via many distinct cell types) could lead us to more aligned AI systems. Highly convincing and elegant arguments! [https://substack.com/home/post/p-189321289](https://substack.com/home/post/p-189321289)


Dwarkesh seemed very confused by this, asking a few different times: “Why would each reward function need a different cell type?” I empathize with Dwarkesh here! It is mysterious that a cell type could represent something as abstract as a reward. As a computational neuroscientist who worked mostly at the representation level during my PhD, I’ve historically leaned towards thinking of cell types as a mere “implementation detail”. But over conversations with Adam, Steve Byrnes, Paul Cisek, Tony Zador, and a few others, I’ve become convinced that cell types are a really useful lens through which to think about innate behaviors and rewards.

In this essay, I’ll unpack the conversation and answer the question: what do cell types have to do with reward functions? To answer it, we’ll need to understand what kind of information can be encoded in the genome, and how that information ultimately relates to connectomes and to cell types. I’ll connect the answer to the central claim of Adam: that these connections matter for AI, and AI safety in particular.

Andrew Barto and colleagues make the point that all primary rewards are internal, and must be genetically encoded. In reinforcement learning, which Barto co-developed along with Rich Sutton, an agent learns by receiving reward signals that indicate what is good and bad. The critical insight is that for biological organisms, all of these reward signals are internal: they are generated by the organism’s own nervous system. It is not a chunk of steak that gives reward: it is circuitry inside the brain that assigns positive valence to fat, salt, umami, heat, and texture. Things like money—secondary rewards—must be bootstrapped off of the pre-existing primary rewards.
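This bootstrapping of secondary rewards can be sketched with a toy temporal-difference (TD) update, the learning rule at the heart of RL: a neutral cue that reliably precedes a primary reward acquires value of its own. This is my own illustrative sketch, not anything from Barto's writing; the function name and all parameters are assumptions.

```python
def td_secondary_reward(episodes=200, alpha=0.1, gamma=0.9):
    """Toy TD learning: a neutral cue bootstraps value from a primary reward.

    Two-step episodes: cue (state 0) -> primary-reward state (state 1).
    """
    V = [0.0, 0.0]  # learned values for [cue, primary-reward state]
    for _ in range(episodes):
        # Step 1: cue leads to the reward state; no reward delivered yet.
        V[0] += alpha * (0.0 + gamma * V[1] - V[0])
        # Step 2: the primary reward arrives, an internally generated
        # signal (the brain's valuation of fat/salt/umami), not the steak.
        V[1] += alpha * (1.0 + gamma * 0.0 - V[1])
    return V

values = td_secondary_reward()
print(values)  # the cue's value converges toward gamma * 1.0 = 0.9
```

After training, the cue itself predicts reward: the same mechanism by which money, a stimulus that delivers no primary reward, becomes reinforcing.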

Scientists put forward a new theory of brain development

Your brain begins as a single cell. When all is said and done, it will house an incredibly complex and powerful network of some 170 billion cells. How does it organize itself along the way? Cold Spring Harbor Laboratory neuroscientists have come up with a surprisingly simple answer that could have far-reaching implications for biology and artificial intelligence.

Stan Kerstjens, a postdoc in Professor Anthony Zador’s lab, frames the question in terms of positional information. “The only thing a cell ‘sees’ is itself and its neighbors,” he explains. “But its fate depends on where it sits. A cell in the wrong place becomes the wrong thing, and the brain doesn’t develop right. So, every cell must solve two questions: Where am I? And who do I need to become?”

In a study published in Neuron, Kerstjens, Zador, and colleagues at Harvard University and ETH Zürich put forward a new theory for how the brain organizes itself during development.

AI threatens to eat business software—and it could change the way we work

In recent weeks, a range of large “software-as-a-service” companies, including Salesforce, ServiceNow and Oracle, have seen their share prices tumble.

Even if you’ve never used these companies’ software tools, there’s a good chance your employer has. These tools manage key data about customers, employees, suppliers and products, supporting everything from payroll and purchasing to customer service.

Now new “agentic” artificial intelligence (AI) tools for business are expected to reduce reliance on traditional software for everyday work. These include Anthropic’s Cowork, OpenAI’s Frontier and open-source agent platforms such as OpenClaw.

Encryption: A Key Guardian of Our Digital Future

By Chuck Brooks and Bill Bowers.


Every time you send a text, pay for groceries with your phone, or log in to a health portal, you are relying on encryption. It is an invisible shield that protects your data from prying eyes. Encryption is more than just a technological protection; it is the basis of digital trust.

Encryption is about more than safeguarding data; it is also about protecting people. It preserves privacy by shielding individuals from spying and exploitation, and it is widely adopted to secure digital transactions. For national security, it protects critical infrastructure and government communications. It also serves a human-rights function, giving citizens peace of mind that their personal information is safe. In places where surveillance is widespread, encryption can even defend free expression and dissent. It is a human right in this digital age.

In my book Inside Cyber: How AI, 5G, IoT, and Quantum Computing Will Transform Privacy and Security, I referred to encryption as the “linchpin of privacy and commerce in a connected society.” Without it, the digital economy would crumble under the strain of criminality, fraud, and spying.

Why I Quit ChatGPT and Switched to Claude

“AI will most likely lead to the end of the world, but in the meantime there will be great companies created.” — Sam Altman, OpenAI CEO

I used to think that was dark humor.

This week, I stopped laughing — and cancelled my ChatGPT subscription.

Not because of the technology. Because of the values.

On February 27, Anthropic refused to give the Pentagon unrestricted access to its AI for mass surveillance and autonomous killer weapons. Within hours, OpenAI’s Sam Altman swooped in and took the deal.

One company held the line. The other sprinted to cross it.

‘An AlphaFold 4’ — scientists marvel at DeepMind drug spin-off’s exclusive new AI

Isomorphic Labs has developed a drug-protein interaction model that surpasses previous state-of-the-art methods in this area. Yet the model is proprietary, so no one knows how it was designed and trained, or why it works so well!


Isomorphic Labs’ proprietary drug-discovery model is a major advance, but scientists developing open-source tools are left guessing how to achieve similar results.

Bioinspired robot eye adjusts its pupil to handle harsh lighting

Robot vision could soon get a boost thanks to the development of a bioinspired eye that can automatically adjust its pupil size in response to changing light levels. Robots, self-driving cars and drones often struggle with dynamic lighting. If a car enters a dark tunnel, its camera aperture needs to stay wide open to capture enough light to see, just like our pupils do when the lights go out. But when it exits into daylight, it can be instantly blinded by the glare.

In a study published in the journal Science Robotics, researchers detail how they have created a bioinspired vision system that not only mimics the way eyes see but also adapts to light conditions. The technology is designed to bridge the gap between how a standard camera sees and how living creatures view their surroundings.

Cameras may excel at capturing high-resolution images, but in dynamic environments, they lack the flexibility to adapt.
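The adaptive-pupil idea can be caricatured as a feedback loop that widens or narrows an aperture until the light reaching the sensor sits near a set point. This is a hedged sketch of the general control principle only, not the Science Robotics implementation; every name and constant below is an illustrative assumption.

```python
def adapt_pupil(scene_brightness, steps=50, gain=0.5,
                target=1.0, d_min=0.1, d_max=4.0):
    """Return the pupil diameter after adapting to a fixed scene brightness."""
    d = 1.0  # initial diameter (arbitrary units)
    for _ in range(steps):
        sensed = scene_brightness * d * d  # light gathered ~ aperture area
        error = target - sensed            # positive: too dim; negative: too bright
        d += gain * error * d              # multiplicative correction
        d = max(d_min, min(d_max, d))      # respect physical limits
    return d

# Entering a dark tunnel the pupil dilates; exiting into glare it constricts.
dark = adapt_pupil(0.2)
bright = adapt_pupil(5.0)
print(dark, bright)  # the dark scene yields the larger diameter
```

The loop settles at the diameter where gathered light equals the set point, which is exactly what a fixed-aperture camera cannot do on its own.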

Letting atomic simulations learn from phase diagrams

A new computational method allows modern atomic models to learn from experimental thermodynamic data, according to a University of Michigan Engineering and Université Paris-Saclay study published in Nature Communications. Leveraging a machine learning technique called score matching, the method expresses the thermodynamic free energy of atomic systems as a function of the underlying atomic interaction model, unlike standard schemes where the interaction model is fixed.

By returning thermodynamic predictions as functions rather than static numbers, the method, which is also over 10 times more efficient than previous approaches, makes its predictions easy to quantify and can help accelerate computational materials discovery by opening up new inverse design capabilities. The method is called “descriptor density of states,” abbreviated D-DOS.

“The D-DOS method provides a two-way connection between the latest generation of atomic simulations and the classical resource of phase diagrams, exposing these datasets to machine learning-driven computer models,” said Thomas Swinburne, an assistant professor of mechanical engineering at U-M and co-corresponding author of the study.
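Score matching itself, the general technique the study builds on, fits a model to data without ever computing a normalizing constant. A minimal one-dimensional sketch of Hyvarinen-style score matching (my illustration, not the D-DOS method; all names are assumptions): for a linear score model s(x) = a*x + b, minimizing the objective E[0.5*s(x)^2 + s'(x)] has a closed form that recovers the Gaussian score -(x - mean)/var.

```python
import random

def fit_linear_score(samples):
    """Closed-form score matching for a linear score model s(x) = a*x + b.

    Minimizing Hyvarinen's objective E[0.5*s(x)**2 + s'(x)] over (a, b)
    gives a = -1/Var[x] and b = Mean[x]/Var[x], i.e. the Gaussian score
    -(x - mean)/var, with no normalizing constant ever computed.
    """
    n = len(samples)
    m1 = sum(samples) / n
    m2 = sum(x * x for x in samples) / n
    var = m2 - m1 * m1
    return -1.0 / var, m1 / var

random.seed(0)
data = [random.gauss(2.0, 1.5) for _ in range(100_000)]
a, b = fit_linear_score(data)
print(a, b)  # approx -1/1.5**2 = -0.444 and 2/1.5**2 = 0.889
```

D-DOS applies this kind of objective to free energies of atomic systems rather than to a toy Gaussian, but the appeal is the same: the fit depends on the model only through quantities that are cheap to evaluate.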
