
Facebook Admits the Social Network Isn’t Social

Facebook admitted something that should have been front-page news.

In an FTC antitrust filing, Meta revealed that only 7% of time on Instagram and 17% on Facebook is spent actually socializing with friends and family.

The rest?

Algorithmically selected content. Short-form video. Engagement optimized by AI.

This wasn’t a philosophical confession. It was a legal one. But it quietly confirms what many of us have felt for years:

What we still call “social networks” are not social.

They are attention machines.

AI Now Has a Primitive Form of Metacognition

In this video I break down recent research exploring metacognition in large language model ensembles and the growing shift toward System 1 / System 2 style AI architectures.
Some researchers are no longer focusing on making single models bigger. Instead, they are building systems where multiple models interact, critique each other, and dynamically switch between fast heuristic reasoning and slower deliberate reasoning. In other words: AI systems that monitor and regulate their own thinking.
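The fast/slow switching described here can be sketched as a confidence-gated dispatcher. This is a toy illustration with hypothetical function names, not the architecture of any of the cited papers: a cheap heuristic path answers when it is confident, and the system escalates to a slower deliberate path when it is not.

```python
import re

def system1(question: str) -> tuple[str, float]:
    """Fast heuristic path: pattern-match simple addition, with a confidence score."""
    m = re.fullmatch(r"(\d+)\s*\+\s*(\d+)", question.strip())
    if m:
        return str(int(m.group(1)) + int(m.group(2))), 0.9
    return "", 0.0  # no heuristic applies

def system2(question: str) -> str:
    """Slow deliberate path: stand-in for multi-step reasoning or an LLM ensemble."""
    return f"[deliberate reasoning over: {question!r}]"

def metacognitive_dispatch(question: str, threshold: float = 0.8) -> str:
    """The 'thinking about thinking' step: monitor System 1's confidence
    and escalate to System 2 whenever it falls below a threshold."""
    answer, confidence = system1(question)
    if confidence >= threshold:
        return answer
    return system2(question)
```

Here the metacognitive component is just the confidence check; in the research, that monitoring role is played by models critiquing each other and regulating which reasoning mode runs.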

References from the video description:

- Artificial metacognition: Giving an AI the ability to ‘think’ about its ‘thinking’ — https://theconversation.com/artificia…
- From System 1 to System 2: A Survey of Reasoning Large Language Models — https://arxiv.org/abs/2502.17419
- The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity — https://dl.acm.org/doi/10.1145/374625…
- Emotions? Towards Quantifying Metacognition and Generalizing the Teacher-Student Model Using Ensembles of LLMs — https://arxiv.org/abs/2502.17419
- Metacognition — https://research.sethi.org/metacognit…
- Robot passes the mirror test by inner speech — https://www.sciencedirect.com/science…
- METIS: Metacognitive Evaluation for Intelligent Systems — https://research.sethi.org/metacognit…
- Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory? — https://academic.oup.com/book/6923/ch…


A protein ‘tape recorder’ enables scientists to measure and decode cellular processes at scale and over time

Unraveling the mysteries of how biological organisms function begins with understanding the molecular interactions within and across large cell populations. A revolutionary new tool, developed at the University of Michigan, acts as a sort of tape recorder produced and maintained by the cell itself, letting scientists rewind time and view these interactions at scale and over long periods.

Developed in the lab of Changyang Linghu, Ph.D., Assistant Professor of Cell and Developmental Biology and Biomedical Engineering and Principal Investigator in Michigan Neuroscience Institute, the so-called CytoTape is a flexible, thread-like intracellular protein fiber, designed with the help of AI to act as a tape recorder for large-scale measurement of cellular activities.

The research appears in the journal Nature.

Mapping cell development with mathematics-informed machine learning

The development of humans and other animals unfolds gradually over time, with cells taking on specific roles and functions via a process called cell fate determination. The fate of individual cells, or in other words, what type of cells they will become, is influenced both by predictable biological signals and random physiological fluctuations.

Over the past decades, medical researchers and neuroscientists have been able to study these processes in greater depth, using a technique known as single-cell RNA sequencing (scRNA-seq). This is an experimental tool that can be used to measure the gene activity of individual cells.

To better understand how cells develop over time, researchers also rely on mathematical models. One of these models, dubbed the drift-diffusion equation, describes the evolution of systems as the combination of predictable changes (i.e., drift) and randomness (i.e., diffusion).
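The drift-diffusion picture can be made concrete with a short numerical sketch. This is a generic Euler-Maruyama simulation of a drift-diffusion process with constant coefficients, chosen here purely for illustration; it is not the model used in the study:

```python
import random

def simulate_drift_diffusion(x0=0.0, drift=0.5, diffusion=0.2,
                             dt=0.01, steps=1000, seed=42):
    """Euler-Maruyama integration of dX = drift*dt + diffusion*dW:
    each step adds a predictable drift term plus a random Gaussian kick."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Brownian increment, variance dt
        x += drift * dt + diffusion * dw
        path.append(x)
    return path

path = simulate_drift_diffusion()
# Over t = steps*dt = 10 time units the mean position is drift*t = 5,
# with random spread around it contributed by the diffusion term.
```

In the cell-fate analogy, the drift term plays the role of predictable biological signals steering a cell toward a fate, while the diffusion term captures random physiological fluctuations.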

Software allows scientists to simulate nanodevices on a supercomputer

From computers to smartphones, from smart appliances to the internet itself, the technology we use every day exists only thanks to decades of improvements in the semiconductor industry that have allowed engineers to keep miniaturizing transistors and fitting more and more of them onto integrated circuits, or microchips. This is the famous Moore's law: the observation, rather than an actual physical law, that the number of transistors on an integrated circuit tends to double roughly every two years.
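The doubling rule is simple arithmetic. The sketch below is illustrative only; the baseline of about 2,300 transistors corresponds roughly to the Intel 4004 from 1971 and is an assumption chosen for the example, not a figure from the article:

```python
def transistors_after(years: float, start_count: int = 2_300,
                      doubling_period: float = 2.0) -> int:
    """Moore's observation as a formula: the transistor count
    doubles once per doubling_period (roughly two years)."""
    return round(start_count * 2 ** (years / doubling_period))
```

Fifty years of doubling every two years turns ~2,300 transistors into tens of billions, which is why transistors must now shrink to just a few nanometers.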

The current growth of artificial intelligence, robotics and cloud computing calls for more powerful chips made with even smaller transistors, which at this point means creating components that are only a few nanometers (or millionths of millimeters) in size. At that scale, classical physics is no longer enough to predict how the device will function, because, among other effects, electrons get so close to each other that quantum interactions between them can hugely affect the performance of the device.

AI makes quantum field theories computable

An old puzzle in particle physics has been solved: How can quantum field theories be best formulated on a lattice to optimally simulate them on a computer? The answer comes from AI.

Quantum field theories are the foundation of modern physics. They tell us how particles behave and how their interactions can be described. However, many complicated questions in particle physics cannot be answered simply with pen and paper, but only through extremely complex quantum field theory computer simulations.

This presents exceptionally complex problems: Quantum field theories can be formulated in different ways on a computer. In principle, all of them yield the same physical predictions—but in radically different ways. Some variants are computationally completely unusable, inaccurate, or inefficient, while others are surprisingly practical. For decades, researchers have been searching for the optimal way to embed quantum theories in computer simulations. Now, a team from TU Wien, together with teams from the U.S. and Switzerland, has shown that artificial intelligence can bring about tremendous progress in this area. Their paper is published in Physical Review Letters.

AI sheds light on mysterious dinosaur footprints

A new app, powered by artificial intelligence (AI), could help scientists and the public identify dinosaur footprints made millions of years ago, a study reveals.

For decades, paleontologists have puzzled over ancient dinosaur tracks, asking whether they were left by fierce carnivores, gentle plant-eaters or even early species of birds.

Now, researchers and dinosaur enthusiasts alike can upload an image or sketch of a dinosaur footprint from their mobile phone to the DinoTracker app and receive an instant analysis.
