
Neurons receive precisely tailored teaching signals as we learn

How does the brain know which neurons to adjust during learning in order to optimize behavior? MIT researchers discovered that brains can use cell-by-cell error signals to do this — surprisingly similar to how AI systems are trained via backpropagation.


When we learn a new skill, the brain has to decide—cell by cell—what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.

The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A longstanding question has been whether the brain also uses that kind of individualized feedback. In a study published in the February 25 issue of the journal Nature, MIT researchers report evidence that it does.
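
The error-driven learning loop described above can be sketched for a single artificial neuron: compare the output to a target, compute the error, and nudge each weight by its own share of that error. A minimal illustrative sketch, not the researchers' model (the function name, learning rate, and data are all made up for this example):

```python
# One linear neuron trained with per-weight error feedback:
# each connection receives an individualized correction signal,
# the core idea that backpropagation generalizes to deep networks.

def train_neuron(inputs, targets, lr=0.1, epochs=200):
    """Train a single linear neuron by gradient descent on squared error."""
    w = [0.0] * len(inputs[0])
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))   # neuron output
            error = y - t                               # error signal
            # each weight gets its own correction, scaled by its input
            w = [wi - lr * error * xi for wi, xi in zip(w, x)]
    return w

# learn the mapping y = 2*x1 - 1*x2 from a few examples
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
xs, ts = zip(*data)
weights = train_neuron(list(xs), list(ts))  # approaches [2.0, -1.0]
```

The open question the MIT study addresses is whether biological circuits deliver anything like that per-connection `error` term to individual neurons.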

A research team led by Mark Harnett, a McGovern Institute investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.

Better reporting is better science: Community-defined minimal reporting requirements for light microscopy

Accessible minimal requirements for reproducible light microscopy. This viewpoint from Paula Montero Llopis, Chloë van Oostende-Triplet, the QUAREP-LiMi consortium and colleagues presents a community-endorsed checklist defining minimal light microscopy metadata to improve rigor, reproducibility, and transparency in research.



An agentic system for rare disease diagnosis with traceable reasoning

DeepRare—a multi-agent system for rare disease differential diagnosis decision support powered by large language models, integrating specialized tools and up-to-date knowledge sources—has the potential to reduce healthcare disparities in rare disease diagnosis.

5 Fermi Paradox Explanations I Love, 7 That Fall Flat

The Fermi Paradox asks why, in a vast and ancient universe, we see no signs of alien life. In this episode, we explore five explanations that make sense—and seven popular ones that, despite sounding good, fall flat under closer scrutiny.


Credits:
5 Fermi Paradox Explanations I Love, 7 That Fall Flat
Episode 502 / 721; May 31, 2025
Written, Produced & Narrated by: Isaac Arthur
Graphics: Ken York YD Visual
Select imagery/video supplied by Getty Images
Music courtesy of Epidemic Sound http://epidemicsound.com/creator

0:00 Intro
1:11 Space Is Too Big
4:46 Rare Earth
6:35 Alien Signals Just Can't Be Heard
10:09 Humans Are Boring
11:36 Dark Forest Theory
14:14 Interdiction Bubble Civilizations
16:49 Hermit Hypothesis
19:18 Aestivation Hypothesis and Extragalactic Migration
21:10 Transcendence, Ascension, and Extra-Universe Migration
22:33 Infinite Miniaturization
23:33 Berserkers & Zombie AI
24:53 Aliens Common But Unrecognizable
26:08 Simulation Hypothesis

Is artificial general intelligence already here? A new case that today’s LLMs meet key tests

Will artificial intelligence ever be able to reason, learn, and solve problems at levels comparable to humans? Experts at the University of California San Diego believe the answer is yes—and that such artificial general intelligence has already arrived. This debate is tackled by four faculty members spanning humanities, social sciences, and data science in a recently published Comment invited by Nature.

Computer scientist Alan Turing first posed this question in his landmark 1950 paper, though he didn’t use the term artificial general intelligence (AGI). His “imitation game,” now known as the Turing Test, asked whether a machine could pass as human in text-based conversation with humans. Seventy-five years later, that future is here.

Over the past year, Associate Professor of Philosophy Eddy Keming Chen, Professor of Artificial Intelligence, Data Science and Computer Science Mikhail Belkin, Associate Professor of Linguistics and Computer Science Leon Bergen, and Professor of Data Science, Philosophy and Policy David Danks engaged in extensive dialogue on this question. These discussions happened as another set of researchers at UC San Diego found in March 2025 that the large language model GPT-4.5 was judged to be human 73% of the time in a Turing test—much more often than actual humans.

Clinically informed AI outperforms foundation models in spinal cord disease prediction

Cervical spondylotic myelopathy (CSM) refers to spinal cord compression from arthritis in the neck and is the leading cause of spinal cord dysfunction in older adults. CSM is a chronic, progressive condition that can cause neck pain, muscle weakness, difficulty walking and other debilitating symptoms. While the diagnosis is sometimes clear, it can often take years because symptoms aren’t recognized until the later stages, when treatment options are limited.

A multidisciplinary team of surgeon-scientists, computer scientists and researchers at WashU developed an artificial intelligence (AI)-based approach that could help clinicians screen for and diagnose CSM up to 30 months earlier, opening new opportunities for earlier treatment. The findings are published in npj Digital Medicine.

Salim Yakdan, MD, a postdoctoral research fellow in the Taylor Family Department of Neurosurgery at WashU Medicine, and Ben Warner, a doctoral student in computer science and engineering at the McKelvey School of Engineering, co-first authors on the research, used seven different AI models to analyze large datasets containing electronic health record data of more than 2 million people with and without CSM. The models examined patterns of health-care interactions, such as tests and diagnoses, recorded in electronic health records to spot patients whose medical histories resemble those already diagnosed with CSM, helping to flag individuals who may be at higher risk.

Viewing Neural Networks Through a Statistical-Physics Lens

Statistical physics is shedding light on how network architecture and data structure shape the effectiveness of neural-network learning.

Machine-learning technologies have profoundly reshaped many technical fields, with sweeping applications in medical diagnosis, customer service, drug discovery, and beyond. Central to this transformation are neural networks (NNs), models that learn patterns from data by combining many simple computational units, or neurons, linked by weighted connections. Acting collectively, these neurons can process data to learn complex input–output relationships. Despite their practical success, the fundamental mechanisms by which NNs learn remain poorly understood at a theoretical level. Statistical physics offers a promising framework for exploring central questions in machine-learning theory, potentially clarifying how learning depends on the layout of the network—the NN architecture—and on statistics of the data—the data structure (Fig. 1).
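
As a concrete illustration of simple units acting collectively to learn input–output relationships: a single weighted unit can only draw a linear boundary, but two layers of threshold units can already compute XOR. A hand-wired toy example (weights chosen by hand for clarity, not learned; all names are illustrative):

```python
# A two-layer network of threshold "neurons" computing XOR,
# a mapping no single linear unit can represent on its own.

def step(z):
    """Threshold activation: fire (1.0) if the weighted input is positive."""
    return 1.0 if z > 0 else 0.0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # fires if at least one input is on
    h2 = step(x1 + x2 - 1.5)          # fires only if both inputs are on
    return step(h1 - 2.0 * h2 - 0.5)  # "at least one, but not both"

outputs = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs is [0.0, 1.0, 1.0, 0.0]
```

The theoretical questions in the collection concern exactly such compositions: how the arrangement of units (architecture) and the statistics of the inputs (data structure) determine what the network can learn and how fast.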

Three recent papers in a special Physical Review E collection (See Collection: Statistical Physics Meets Machine Learning — Machine Learning Meets Statistical Physics) provide significant insights into these questions. Francesca Mignacco of City University of New York and Princeton University and Francesco Mori of the University of Oxford in the UK derived analytical results on the optimal fraction of neurons that should be active at a given time [1]. Abdulkadir Canatar and SueYeon Chung of the Flatiron Institute in New York and New York University investigated the influence of the precision with which a network is “trained” on the amount of data the NN can reliably decode [2]. Francesco Cagnetta at the International School for Advanced Studies in Italy and colleagues showed that NNs whose structure mirrors that of the data learn faster [3].

HEART benchmark assesses ability of LLMs and humans to offer emotional support

Large language models (LLMs), artificial intelligence (AI) systems that can process human language and generate texts in response to specific user queries, are now used daily by a growing number of people worldwide. While initially these models were primarily used to quickly source information or produce texts for specific uses, some people have now also started approaching the models with personal issues or concerns.

This has given rise to various debates about the value and limitations of LLMs as tools for providing emotional support. For humans, offering emotional support in dialogue typically entails recognizing what the other person is feeling and adjusting one’s tone, words and communication style accordingly.

Researchers at Hippocratic AI, Stanford University, University of California San Diego and University of Texas at Austin recently developed a new structured method to evaluate the ability of both LLMs and humans to offer emotional support during dialogues marked by several back-and-forth exchanges. This framework, dubbed HEART, was introduced in a paper published on the arXiv preprint server.

How AI can improve the quality of peer review

A new AI coach for scientists has been shown to significantly improve the quality of peer reviews, making them clearer and more helpful for authors. Peer review is essential to ensuring the integrity of scientific publications, but many researchers are dissatisfied with the quality of the feedback they receive. Common complaints include vague, short, and unhelpful reviews. For example, in a survey of 11,800 researchers, only 55.4% of respondents reported being satisfied with the quality of the feedback. The problem is exacerbated by the sheer volume of papers, which has left reviewers feeling overwhelmed.

But help for stressed-out reviewers may be at hand. A team of researchers has developed the Review Feedback Agent, a system that uses five large language models to scan reviews and provide private feedback to reviewers before the authors see them. They trained their AI reviewer by carefully prompting existing large language models, as they explain in a paper published in Nature Machine Intelligence.

The researchers tested their system in the paper review cycle before ICLR 2025, a leading conference in deep learning and machine learning. They randomly assigned around 20,000 reviews to receive AI feedback shortly after they were written. These automated “reviews of the reviews” were then sent back to the human reviewers as private feedback. Another 20,000 were placed in a control group that received no feedback at all.

When light ‘thinks’ like the brain: The connection between photons and artificial memory

An international study has revealed a surprising connection between quantum physics and the theoretical models underlying artificial intelligence. The study results from a collaboration between the Institute of Nanotechnology of the National Research Council (Cnr-Nanotec), the Italian Institute of Technology (IIT), and Sapienza University of Rome, together with international research institutions. The research paper was published recently in the journal Physical Review Letters.

Italian researchers show that identical photons propagating within optical circuits spontaneously behave like a Hopfield Network, one of the best-known mathematical models used to describe the associative memory mechanisms of the human brain.
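
For readers unfamiliar with the model: a classical Hopfield network stores patterns as attractors via Hebbian weights, so a corrupted input relaxes back to the nearest stored memory. A minimal textbook sketch in Python of that mathematical model (not the photonic implementation; the patterns and sizes here are arbitrary):

```python
import numpy as np

# Classical Hopfield associative memory: store two patterns,
# corrupt one, and watch the dynamics recover it.

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian storage: W accumulates outer products of the stored patterns
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(state, steps=5):
    """Synchronously update all units until the state settles on an attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# flip one bit of the first memory, then let the network recover it
probe = patterns[0].copy()
probe[2] *= -1
recovered = recall(probe)  # equals patterns[0] again
```

In the photonic experiment, the claim is that quantum interference among identical photons plays the role of these recurrent update dynamics, with the photons themselves acting as the units of the associative memory.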

“Instead of using traditional electronic chips, we exploited quantum interference, the phenomenon that occurs in photonic chips when particles of light overlap and interact with one another to encode and retrieve information,” explains Marco Leonetti, coordinator and corresponding author of the study, senior researcher at Cnr-Nanotec and affiliated with the Center for Life Nano- and Neuro-Science at the Italian Institute of Technology (IIT) in Rome. “In this system, photons are not merely carriers of data, but themselves become the ‘neurons’ of an associative memory.”
