
Quantum-informed machine learning for predicting spatiotemporal chaos with practical quantum advantage

Ultimately, quantum-informed machine learning (QIML) shows that we don't need a fully fault-tolerant quantum computer to see results. By using quantum processors to learn the complex "rules" of chaos, we can give classical computers the boost they need to make reliable, long-term predictions about the most turbulent environments in the natural world.


Modeling high-dimensional dynamical systems remains one of the most persistent challenges in computational science. Partial differential equations (PDEs) provide the mathematical backbone for describing a wide range of nonlinear, spatiotemporal processes across scientific and engineering domains (1–3). However, high-dimensional systems are notoriously sensitive both to initial conditions and to the finite-precision floating-point arithmetic used to compute them (4–7), making it highly challenging to extract stable, predictive models from data. Modern machine learning (ML) techniques often struggle in this regime: While they may fit short-term trajectories, they fail to learn the invariant statistical properties that govern long-term system behavior. These challenges are compounded in high-dimensional settings, where data are highly nonlinear and contain complex multiscale spatiotemporal correlations.
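The sensitivity described above can be seen in even a three-variable toy system. As a minimal sketch (not taken from the paper), the following integrates two Lorenz-63 trajectories that start 10^-10 apart and records how far they drift; the exponential divergence is why long-horizon pointwise prediction fails and why one targets invariant statistics instead.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 state one step with classical RK4."""
    def f(v):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])   # perturb one coordinate by 1e-10

max_sep = 0.0
for _ in range(4000):                 # integrate both copies to t = 40
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, float(np.linalg.norm(a - b)))

print(max_sep)  # grows from 1e-10 to the scale of the attractor itself
```

The two trajectories remain pointwise unpredictable, yet both sample the same invariant measure on the attractor, which is exactly the quantity the statistical approaches discussed below try to learn.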

ML has seen transformative success in domains such as large language models (8, 9), computer vision (10, 11), and weather forecasting (12–15), and it is increasingly being adopted in scientific disciplines under the umbrella of scientific ML (16). In fluid mechanics, in particular, ML has been used to model complex flow phenomena, including wall modeling (17, 18), subgrid-scale turbulence (19, 20), and direct flow field generation (21, 22). Physics-informed neural networks (23, 24) attempt to inject domain knowledge into the learning process, yet even these models struggle with the long-term stability and generalization issues that high-dimensional dynamical systems demand. To address this, generative models such as generative adversarial networks (25) and operator-learning architectures such as DeepONet (26) and Fourier neural operators (FNO) (27) have been proposed. While neural operators offer discretization invariance and strong representational power for PDE-based systems, they still suffer from error accumulation and prediction divergence over long horizons, particularly in turbulent and other chaotic regimes (28, 29). Recent work, such as DySLIM (30), enhances stability by leveraging invariant statistical measures. However, these methods depend on estimating such measures from trajectory samples, which can be computationally intensive and inaccurate for chaotic systems, especially in high-dimensional cases. These limitations have prompted exploration into alternative computational paradigms. Quantum machine learning (QML) has emerged as a possible candidate due to its ability to represent and manipulate high-dimensional probability distributions in Hilbert space (31).
Quantum circuits can exploit entanglement and interference to express rich, nonlocal statistical dependencies using fewer parameters than their classical counterparts, which makes them well suited for capturing invariant measures in high-dimensional dynamical systems, where long-range correlations and multimodal distributions frequently arise (32). QML and quantum-inspired ML have already demonstrated potential in fields such as quantum chemistry (33, 34), combinatorial optimization (35, 36), and generative modeling (37, 38). However, the field is constrained on two fronts: Fully quantum approaches are limited by noisy intermediate-scale quantum (NISQ) hardware noise and scalability (39), while quantum-inspired algorithms, being classical simulations, cannot natively leverage crucial quantum effects such as entanglement to efficiently represent the complex, nonlocal correlations found in such systems. These challenges limit the standalone utility of QML in scientific applications today. Instead, hybrid quantum-classical models provide a promising compromise, where quantum submodules work together with classical learning pipelines to improve expressivity, data efficiency, and physical fidelity. In quantum chemistry, this hybrid paradigm has proven feasible, notably through quantum mechanical/molecular mechanical coupling (40, 41), where classical force fields are augmented with quantum corrections. Within such frameworks, techniques such as quantum-selected configuration interaction (42) have been used to enhance accuracy while keeping the quantum resource requirements tractable. In the broader landscape of quantum computational fluid dynamics, progress has been made toward developing full quantum solvers for nonlinear PDEs. Recent works by Liu et al. (43) and Sanavio et al. (44, 45) have successfully applied Carleman linearization to the lattice Boltzmann equation, offering a promising pathway for simulating fluid flows at moderate Reynolds numbers.
These approaches, typically using algorithms such as Harrow-Hassidim-Lloyd (HHL) (46), promise exponential speedups but generally necessitate deep circuits and fault-tolerant hardware.
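Carleman linearization itself can be illustrated on a scalar toy ODE rather than the lattice Boltzmann equation. The sketch below (a deliberately simplified example, not the method of the cited works) embeds dx/dt = ax + bx^2 into a truncated linear system over the monomials y_k = x^k, since dy_k/dt = a·k·y_k + b·k·y_{k+1}; the resulting linear ODE is the kind of system that quantum linear-system algorithms such as HHL target.

```python
import numpy as np

a, b, N = -1.0, 0.5, 8        # ODE coefficients and Carleman truncation order

# Carleman matrix: d/dt y_k = a*k*y_k + b*k*y_{k+1}, truncated at k = N.
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = a * k
    if k < N:
        A[k - 1, k] = b * k

x0, T, steps = 0.5, 2.0, 20000
dt = T / steps
y = np.array([x0 ** k for k in range(1, N + 1)])  # initial monomials x0^k

x = x0   # reference: integrate the nonlinear ODE directly with the same step
for _ in range(steps):
    y = y + dt * (A @ y)                 # truncated linear (Carleman) system
    x = x + dt * (a * x + b * x * x)     # original nonlinear equation

print(y[0], x)   # the first Carleman variable tracks x(T) closely
```

The dissipative choice a < 0 keeps |x| < 1, so the neglected monomials above order N stay small; the truncation error grows with kick strength of the nonlinearity, which is one reason these quantum solvers are quoted for moderate Reynolds numbers.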

Quantum-enhanced machine learning (QEML) combines the representational richness of quantum models with the scalability of classical learning. By leveraging uniquely quantum properties such as superposition and entanglement, QEML can explore richer feature spaces and capture complex correlations that are challenging for purely classical models. Recent successes in quantum-enhanced drug discovery (37), where hybrid quantum-classical generative models have produced experimentally validated candidates rivaling state-of-the-art classical methods, demonstrate the practical potential of QEML even before full quantum advantage is achieved. Despite these strengths, practical barriers remain. QEML pipelines require repeated quantum-classical communication during training and rely on costly quantum data-embedding and measurement steps, which slow computation and limit accessibility across research institutions.
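As a minimal illustration of the entanglement-derived correlations invoked above (a generic textbook construction, not a QEML model from the text), a two-qubit state vector built from a Hadamard and a CNOT yields a perfectly correlated measurement distribution that no unentangled product state can reproduce.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0, 0.0]])

psi0 = np.array([1.0, 0.0, 0.0, 0.0])   # |00>
psi = CNOT @ np.kron(H, I2) @ psi0      # Bell state (|00> + |11>)/sqrt(2)
probs = psi ** 2                        # Born rule (amplitudes are real here)

# All probability sits on the correlated outcomes 00 and 11.
print({k: float(p) for k, p in zip(["00", "01", "10", "11"], probs)})

# Any product state (a|0>+b|1>) (x) (c|0>+d|1>) satisfies
# P(00)*P(11) = P(01)*P(10), so it cannot give P(00)=P(11)=1/2
# with P(01)=P(10)=0: this correlation requires entanglement.
```

This is the simplest instance of the nonlocal statistical dependence that quantum submodules contribute within a hybrid pipeline; a classical model must spend extra parameters to encode the same joint distribution.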

New study bridges the worlds of classical and quantum physics

When you throw a ball in the air, the equations of classical physics will tell you exactly what path the ball will take as it falls, and when and where it will land. But if you were to squeeze that same ball down to the size of an atom or smaller, it would behave in ways beyond anything that classical physics can predict.

Or so we’ve thought.

MIT scientists have now shown that certain mathematical ideas from everyday classical physics can be used to describe the often weird and nonintuitive behavior that occurs at the quantum, subatomic scale.

Researchers use statistics and math to understand how the brain works

Nothing rivals the human brain's complexity. Its 86 billion neurons and 85 billion other cells make an estimated 100 trillion connections. If the brain were a computer, it would perform an exaflop (a billion billion mathematical calculations) every second while using the equivalent of only 20 watts of power. As impressive as the brain is, neurologists can't fully explain how neurons work together.
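The article's headline figures imply, for instance, roughly a thousand connections per neuron; a quick back-of-the-envelope check of the quoted numbers:

```python
# Sanity-check the article's numbers (illustrative arithmetic only).
neurons = 86e9            # neurons in the human brain
connections = 100e12      # estimated connections
per_neuron = connections / neurons
print(per_neuron)         # on the order of a thousand connections per neuron

flops = 1e18              # "an exaflop ... every second"
watts = 20.0              # quoted power budget
print(flops / watts)      # ~5e16 operations per joule of energy
```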

To help find answers, researchers at the Institute for Neuroscience, Neurotechnology, and Society (INNS) at Georgia Tech are using math, data, and AI to unlock the secrets of thought. Together they are helping turn the brain’s raw electrical “noise” into real insights about how people think, move, and perceive the world.

Fair warning: Prepare your neurons for the complexity of this brain research ahead.

Large brain mapping dataset expands with new set of cognitive tasks

The Individual Brain Charting (IBC) project has released its fifth and largest update of high-resolution fMRI data, adding a new set of cognitive tasks to one of the most detailed brain-mapping datasets available today. The dataset, which is openly accessible through EBRAINS, is described in a new publication in Nature Scientific Data.

The new release expands the dataset with 18 tasks collected from 11 participants under tightly controlled, standardised conditions – bringing many of them close to 40 hours of scanned data each.

The IBC project launched in 2014 and was funded by the Human Brain Project. It aims to map how individual brains respond across a wide range of cognitive functions. By repeatedly scanning the same participants with diverse tasks – from mathematics and spatial navigation to emotion recognition, reward processing, and working memory – the team is building an exceptionally rich resource for studying individual variability in brain organization.


Our Universe Might Be a Giant Brain, According to New Theories

There’s something quietly unsettling about placing a photograph of a human neuron next to a simulated image of the large-scale cosmic web. The two look almost identical: delicate, branching filaments connecting dense clusters, with vast open spaces in between. One fits inside your skull. The other stretches across billions of light-years. The resemblance is hard to dismiss, and for a growing number of researchers, it’s far more than a visual coincidence.

What started as a striking observation in cosmology and neuroscience has evolved into a serious theoretical question. Could the universe, at its most fundamental level, operate the way a brain does? The ideas being put forward aren’t purely philosophical. Some of them come with testable mathematics, published peer-reviewed papers, and the names of well-regarded physicists attached. What follows is an honest look at where the science currently stands.

The estimated 200 billion detectable galaxies aren’t distributed randomly, but are lumped together by gravity into clusters that form even larger clusters, which are connected to one another by “galactic filaments,” long thin threads of galaxies. This vast architecture is what scientists call the cosmic web. When you zoom far enough out, the structure of the entire observable universe begins to take on a shape that looks startlingly familiar.

Quantum gas resists heating under periodic kicks, revealing many-body localization mechanism

A joint theoretical study by the University of Innsbruck and Zhejiang University has uncovered the microscopic origin of a striking quantum phenomenon: a periodically driven gas of ultracold atoms that simply refuses to heat up, defying classical expectations.

Push a swing repeatedly in rhythm, and it swings higher and higher, absorbing more and more energy. A quantum gas, however, can behave very differently. Under periodic kicks, quantum interference can freeze energy absorption entirely, a phenomenon known as dynamical localization. Whether this survives when particles interact with each other has been a long-standing open question. A 2025 experiment by the research group of Hanns-Christoph Nägerl at the Department of Experimental Physics confirmed that it can. But the microscopic reasons had remained unclear until now.
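The article names no specific model, but the standard minimal setting for dynamical localization is the quantum kicked rotor. The split-step simulation below (illustrative parameters, not the Innsbruck experiment) shows its kinetic energy settling onto a plateau under periodic kicks instead of growing without bound.

```python
import numpy as np

M = 1024                                  # size of the momentum grid
n = np.fft.fftfreq(M, d=1.0 / M)          # integer momenta in FFT ordering
theta = 2 * np.pi * np.arange(M) / M      # angle grid on [0, 2*pi)
hbar, K = 2.85, 5.0                       # effective Planck constant, kick strength

kick = np.exp(-1j * (K / hbar) * np.cos(theta))  # kick: diagonal in angle
free = np.exp(-1j * hbar * n ** 2 / 2.0)         # free rotation: diagonal in momentum

psi_n = np.zeros(M, dtype=complex)
psi_n[0] = 1.0                            # start in the zero-momentum state

energies = []
for _ in range(400):
    psi_n = np.fft.fft(kick * np.fft.ifft(psi_n))  # apply the kick in angle space
    psi_n *= free                                  # then evolve freely
    energies.append(float(np.sum(np.abs(psi_n) ** 2 * (hbar * n) ** 2) / 2.0))

# A classical rotor at this kick strength heats diffusively (~K^2/4 per kick);
# here interference pins the energy at a small, fluctuating plateau.
print(np.mean(energies[:50]), np.mean(energies[-50:]))
```

Adding interparticle interactions on top of this single-particle picture, as the study does, is what reshapes and eventually destroys the plateau.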

A new theoretical study by Prof. Lei Ying's team at Zhejiang University, in collaboration with Prof. Hanns-Christoph Nägerl's group at the University of Innsbruck, published in Physical Review Letters, provides the missing explanation. The team developed a mathematical framework that transforms the complex driven many-body problem into a tractable lattice model. This reveals that interactions introduce a universal power-law structure that reshapes localization and ultimately drives its breakdown at intermediate interaction strengths.
