
Physicists challenge a 200-year-old law of thermodynamics at the atomic scale

A long-standing law of thermodynamics turns out to have a loophole at the smallest scales. Researchers have shown that quantum engines made of correlated particles can exceed the traditional efficiency limit set by Carnot nearly 200 years ago. By tapping into quantum correlations, these engines can produce extra work beyond what heat alone allows. This could reshape how scientists design future nanoscale machines.

Two physicists at the University of Stuttgart have demonstrated that the Carnot principle, a foundational rule of thermodynamics, does not fully apply at the atomic scale when particles are physically linked (so-called correlated objects). Their findings suggest that this long-standing limit on efficiency breaks down for tiny systems governed by quantum effects. The work could help accelerate progress toward extremely small and energy-efficient quantum motors. The team published its mathematical proof in the journal Science Advances.

Traditional heat engines, such as internal combustion engines and steam turbines, operate by converting thermal energy into mechanical motion, that is, turning heat into movement. Over the past several years, advances in quantum mechanics have allowed researchers to shrink heat engines to microscopic dimensions.
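For reference (the article does not spell it out), the classical Carnot bound says that no heat engine running between a hot reservoir at temperature T_h and a cold reservoir at T_c can exceed the efficiency

    \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}

The Stuttgart result amounts to showing that quantum correlations provide a work resource this heat-only bound does not account for, so suitably correlated microscopic engines can outperform it.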

AI Discovers Geophysical Turbulence Model

One of the biggest challenges in climate science and weather forecasting is predicting the effects of turbulence at spatial scales smaller than the resolution of atmospheric and oceanic models. Simplified sets of equations known as closure models can predict the statistics of this “subgrid” turbulence, but existing closure models are prone to dynamic instabilities or fail to account for rare, high-energy events. Now Karan Jakhar at the University of Chicago and his colleagues have applied an artificial-intelligence (AI) tool to data generated by numerical simulations to uncover an improved closure model [1]. The finding, which the researchers subsequently verified with a mathematical derivation, offers insights into the multiscale dynamics of atmospheric and oceanic turbulence. It also illustrates that AI-generated prediction models need not be “black boxes,” but can be transparent and understandable.

The team trained their AI—a so-called equation-discovery tool—on “ground-truth” data that they generated by performing computationally costly, high-resolution numerical simulations of several 2D turbulent flows. The AI selected the smallest number of mathematical functions (from a library of 930 possibilities) that, in combination, could reproduce the statistical properties of the dataset. Previously, researchers have used this approach to reproduce only the spatial structure of small-scale turbulent flows. The tool used by Jakhar and collaborators filtered for functions that correctly represented not only the structure but also energy transfer between spatial scales.
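The article does not describe the tool's internals, but library-based equation discovery is commonly implemented as sparse regression, as in the SINDy family of methods. The Python sketch below illustrates the core idea with sequentially thresholded least squares; the candidate functions, synthetic data, and threshold are invented for the example and are not the authors' 930-term library.

    # Illustrative sketch of library-based equation discovery via sparse
    # regression (SINDy-style). All terms and data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in flow variables and a library of candidate functions.
    x = rng.uniform(-2, 2, size=(2000, 3))
    library = np.column_stack([
        np.ones(len(x)),     # constant
        x[:, 0],             # u
        x[:, 1],             # v
        x[:, 0] * x[:, 1],   # u*v
        x[:, 0] ** 2,        # u^2
        np.sin(x[:, 2]),     # sin(w)
    ])

    # "Ground truth" that truly depends on only two of the candidates.
    true_coeffs = np.array([0.0, 0.0, 0.0, 1.5, 0.0, -0.8])
    y = library @ true_coeffs + 0.01 * rng.standard_normal(len(x))

    # Sequentially thresholded least squares: fit, zero out small
    # coefficients, refit on the surviving terms, repeat.
    coeffs, *_ = np.linalg.lstsq(library, y, rcond=None)
    for _ in range(10):
        coeffs[np.abs(coeffs) < 0.1] = 0.0
        active = coeffs != 0.0
        sol, *_ = np.linalg.lstsq(library[:, active], y, rcond=None)
        coeffs[active] = sol

    print("recovered coefficients:", np.round(coeffs, 3))
    # Only the u*v and sin(w) terms survive: the sparse model is found.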

They tested the performance of the resulting closure model by applying it to a computationally practical, low-resolution version of the dataset. The model accurately captured the detailed flow structures and energy transfers that appeared in the high-resolution ground-truth data. It also predicted statistically rare conditions corresponding to extreme-weather events, which have challenged previous models.

Michael Levin: Novel Embodiments of Mind: Natural, Bioengineered, and Hybrid Interfaces

This is an invited talk in BAMΞ’s Mathematical Phenomenology Sprint.
Cf. https://bamxi.org/research-activities/mathematical-phenomenology-sprint/

Organizing Institutions:
Bamberg Mathematical Consciousness Science Initiative (BAMΞ) https://bamxi.org.
& Association for Mathematical Consciousness Science (AMCS) https://amcs-community.org

Seeing the Quantum Butterfly Effect

A combined experimental and theoretical study reveals the emergence of quantum chaos in a complex system, suggesting that it can be described with a universal theoretical framework.

Consider the following thought experiment: Take all the air molecules in a thunderstorm and evolve them backward in time for an hour, effectively rewinding a molecular movie. Then slightly perturb the velocity directions of a few molecules and evolve the system forward again to the current moment. Because such systems are chaotic, microscopic perturbations in the past will lead to dramatically different futures. This “butterfly effect” also occurs in quantum systems. To observe it, researchers measure a mathematical entity called the out-of-time-ordered correlator (OTOC). Loosely speaking, the OTOC measures how quickly a system “forgets” its initial state. Unfortunately, the OTOC is notoriously difficult to measure because it typically requires experimental protocols that implement an effective many-body time reversal.
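For readers who want the definition the article alludes to: for two operators W and V that commute at time zero, with Heisenberg evolution W(t) = e^{iHt} W e^{-iHt}, the OTOC is commonly written as

    C(t) = \langle\, [W(t), V]^{\dagger} \, [W(t), V] \,\rangle

Its growth measures how badly the evolved W(t) fails to commute with V, i.e., how fast the system scrambles information, and the backward evolution e^{+iHt} buried inside W(t) is exactly the effective many-body time reversal that makes the measurement so difficult.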

Leading AI models struggle to solve original math problems

Mathematics, like many other scientific endeavors, is increasingly using artificial intelligence. Of course, math is the backbone of AI, but mathematicians are also turning to these tools for tasks like literature searches and checking manuscripts for errors. But how well can AI perform when it comes to solving genuine, high-level research problems?

To date, there is no widely accepted, realistic methodology for assessing AI's ability to solve mathematics at this level. So a group of mathematicians decided to put the machines to the test, as they detail in a study available on the arXiv preprint server.

Previous attempts at testing AI have used math contest problems and questions already found in textbooks. What makes this study different is that the questions the programs faced were drawn from the mathematicians' own research. They had never been posted or published online, which means the AI couldn't have memorized answers from its training data.

Seeing the whole from a part: Revealing hidden turbulent structures from limited observations and equations

The irregular, swirling motion of fluids we call turbulence can be found everywhere, from stirring in a teacup to currents in the planetary atmosphere. This phenomenon is governed by the Navier-Stokes equations—a set of mathematical equations that describe how fluids move.
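For reference, the incompressible form of these equations, with velocity field u, pressure p, constant density ρ, and kinematic viscosity ν, reads

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^{2} \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0

The nonlinear term (u · ∇)u couples motion across scales, which is what makes turbulent solutions so sensitive to small uncertainties.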

Despite being known for nearly two centuries, these equations still pose major challenges when it comes to making predictions. Turbulent flows are inherently chaotic, and tiny uncertainties can grow quickly over time.

In real-world situations, scientists can only observe part of a turbulent flow, usually its largest and slowest moving features. Thus, a long-standing question in fluid physics has been whether these partial observations are enough to reconstruct the full motion of the fluid.
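The article does not name a specific reconstruction method, but one standard way to study this question is "nudging" (continuous data assimilation): run a model copy alongside the data and continuously push its observed components toward the observations, letting the equations fill in the unobserved ones. The Python sketch below demonstrates the idea on the chaotic Lorenz-63 toy system rather than the Navier-Stokes equations; all parameter values are illustrative.

    # Nudging demo: observe only x(t) of the Lorenz-63 system and
    # reconstruct the hidden y(t) and z(t). Illustrative, not the
    # authors' method.
    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt, steps, mu = 0.001, 50_000, 50.0   # mu = nudging strength
    truth = np.array([1.0, 1.0, 1.0])
    guess = np.array([10.0, -5.0, 30.0])  # deliberately wrong start

    for _ in range(steps):
        obs = truth[0]                    # only x is observed
        nudge = np.array([mu * (obs - guess[0]), 0.0, 0.0])
        truth = truth + dt * lorenz(truth)
        guess = guess + dt * (lorenz(guess) + nudge)

    print("truth:         ", np.round(truth, 3))
    print("reconstruction:", np.round(guess, 3))

Even though only x was ever observed, the reconstructed y and z converge to the true trajectory, a toy analogue of recovering hidden turbulent structure from partial, large-scale measurements.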

Mathematics for Computer Science

This course covers elementary discrete mathematics for computer science and engineering. It emphasizes mathematical definitions and proofs as well as applicable methods. Topics include formal logic notation, proof methods; induction, well-ordering; sets, relations; elementary graph theory; integer congruences; asymptotic notation and growth of functions; permutations and combinations, counting principles; discrete probability. Further selected topics may also be covered, such as recursive definition and structural induction; state machines and invariants; recurrences; generating functions.
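As a flavor of the proof methods listed (an illustrative example, not taken from the course materials), here is the classic induction argument for the triangular-number formula:

    Claim: \sum_{k=1}^{n} k = \frac{n(n+1)}{2} for all n \geq 1.
    Base case: for n = 1, both sides equal 1.
    Inductive step: assuming the claim for n,
        \sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
    which is exactly the claim for n + 1.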

When Models Manipulate Manifolds: The Geometry of a Counting Task (Wes Gurnee and 6 other authors)

When you look at text, you subconsciously track how much space remains on each line. If you’re writing “Happy Birthday” and “Birthday” won’t fit, your brain automatically moves it to the next line. You don’t calculate this—you *see* it. But AI models don’t have eyes. They receive only sequences of numbers (tokens) and must somehow develop a sense of visual space from scratch.

Inside your brain, “place cells” help you navigate physical space by firing when you’re in specific locations. Remarkably, Claude develops something strikingly similar. The researchers found that the model represents character counts using low-dimensional curved manifolds—mathematical shapes that are discretized by sparse feature families, much like how biological place cells divide space into discrete firing zones.
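As a loose illustration of that analogy (invented numbers, not the paper's actual features), a scalar character count can be encoded by a bank of overlapping tuning curves, each "feature" firing only when the count is near its preferred value:

    # Place-cell-style encoding of a character count. Purely illustrative.
    import numpy as np

    def place_code(count, centers, width=8.0):
        # Each feature responds with a Gaussian bump around its center.
        return np.exp(-((count - centers) ** 2) / (2 * width ** 2))

    centers = np.linspace(0, 120, 16)   # 16 features tiling 0..120 chars
    for c in (12, 60, 113):
        acts = place_code(c, centers)
        print(c, "-> most active feature centered at", centers[np.argmax(acts)])

Each count lights up only a handful of nearby features, giving the sparse, discretized coverage the place-cell comparison points at.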

The researchers validated their findings through causal interventions—essentially “knocking out” specific neurons to see if the model’s counting ability broke in predictable ways. They even discovered visual illusions—carefully crafted character sequences that trick the model’s counting mechanism, much like optical illusions fool human vision.

Attention mechanisms are geometric engines: the "attention heads" that power modern AI don't just connect related words; they perform sophisticated geometric transformations on internal representations.

What other "sensory" capabilities have models developed implicitly? Can AI develop senses we don't have names for?


Language models can perceive visual properties of text despite receiving only sequences of tokens. We mechanistically investigate how Claude 3.5 Haiku accomplishes one such task: linebreaking in fixed-width text. We find that character counts are represented on low-dimensional curved manifolds discretized by sparse feature families, analogous to biological place cells. Accurate predictions emerge from a sequence of geometric transformations: token lengths are accumulated into character count manifolds, attention heads twist these manifolds to estimate distance to the line boundary, and the decision to break the line is enabled by arranging estimates orthogonally to create a linear decision boundary. We validate our findings through causal interventions and discover visual illusions: character sequences that hijack the counting mechanism.
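The final step in that abstract, placing two estimates on orthogonal directions so that "break the line" becomes a linear readout, can be mocked up in a few lines. Everything below (the dimensions, directions, and the 80-character width) is an assumption for illustration, not a value from the paper.

    # Toy model of a linear linebreak decision over orthogonal estimates.
    # All quantities are invented for illustration.
    import numpy as np

    d = 64
    rng = np.random.default_rng(1)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                 # "chars so far" direction
    v = rng.standard_normal(d)
    v -= (v @ u) * u
    v /= np.linalg.norm(v)                 # orthogonal "room left" direction

    def residual(chars_so_far, next_word_len, line_width=80):
        # Embed the two scalar estimates along orthogonal directions.
        return chars_so_far * u + (line_width - next_word_len) * v

    # Linear readout: (u - v) . residual > 0 exactly when the next word
    # would overflow the line width.
    w = u - v
    for chars, word in [(70, 8), (75, 8), (79, 3)]:
        print(chars, word, "break" if residual(chars, word) @ w > 0 else "fit")

Because u and v are orthonormal, the dot product reduces to chars_so_far - (line_width - next_word_len), so a single linear boundary separates "fit" from "break".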
