
Understanding quantum computing’s most troubling problem—the barren plateau

For the past six years, Los Alamos National Laboratory has led the world in trying to understand one of the most frustrating barriers that faces variational quantum computing: the barren plateau.

“Imagine a landscape of peaks and valleys,” said Marco Cerezo, the Los Alamos team’s lead scientist. “When optimizing a variational, or parameterized, quantum algorithm, one needs to tune a series of knobs that control the solution quality and move you through the landscape. Here, a peak represents a bad solution and a valley represents a good solution. But when researchers develop algorithms, they sometimes find their model has stalled and can neither climb nor descend. It’s stuck in this space we call a barren plateau.”

For these quantum computing methods, barren plateaus can be mathematical dead ends that prevent their use on large-scale, realistic problems. Scientists have spent considerable time and resources developing quantum algorithms, only to find that they sometimes inexplicably stall. Understanding when and why barren plateaus arise is a problem that has taken the community years to solve.
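The "stuck, can neither climb nor descend" picture can be illustrated with a deliberately simple toy, not an actual quantum circuit: a one-dimensional cost landscape with a single valley surrounded by a nearly flat region. Gradient descent started near the valley converges, while the same optimizer started on the plateau barely moves because the gradient there is vanishingly small. All function and variable names below are illustrative.

```python
import math

def cost(theta, sigma=1.0):
    # Toy cost landscape: one valley at theta = 0 surrounded by an
    # almost perfectly flat region (the "plateau").
    return 1.0 - math.exp(-theta**2 / (2 * sigma**2))

def grad(theta, sigma=1.0):
    # Analytic gradient of the toy cost.
    return (theta / sigma**2) * math.exp(-theta**2 / (2 * sigma**2))

def descend(theta, lr=0.1, steps=200):
    # Plain gradient descent on the toy landscape.
    for _ in range(steps):
        theta -= lr * grad(theta)
    return theta

near = descend(1.0)  # starts near the valley: converges toward 0
far = descend(5.0)   # starts on the plateau: gradient ~2e-5, barely moves
print(near, far)
```

In real variational quantum algorithms the effect is far more severe: gradients can shrink exponentially in the number of qubits, so even knowing the direction of descent does not help.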

Training robots without robots: Smart glasses capture first-person task demos

Over the past few decades, robots have gradually started making their way into various real-world settings, including some malls, airports and hospitals, as well as a few offices and households.

For robots to be deployed on a larger scale, serving as reliable everyday assistants, they should be able to complete a wide range of common manual tasks and chores, such as cleaning, washing the dishes, cooking and doing the laundry.

Training machine learning algorithms that allow robots to successfully complete these tasks can be challenging, as it often requires extensive annotated data and/or demonstration videos showing humans performing the tasks. Devising more effective methods to collect training data could thus be highly advantageous, as it could help to further broaden the capabilities of robots.

Animation technique simulates the motion of squishy objects

The technique simulates elastic objects for animation and other applications more reliably than existing methods, many of which produce elastic animations that become erratic or sluggish, or that break down entirely.

To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.
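To make the role of convexity concrete, here is a minimal sketch, not the MIT method itself: an implicit time step for a chain of masses joined by linear springs, written as minimization of an incremental potential. Because that potential is convex (here, quadratic), it has a unique minimizer, and a single Newton step (one linear solve) finds it exactly, so the simulation cannot wander off or blow up. All parameter values are illustrative.

```python
import numpy as np

# One implicit-Euler step for a mass-spring chain, posed as minimizing a
# convex (quadratic) incremental potential:
#   E(x) = (m / (2 h^2)) ||x - x_hat||^2 + (k/2) sum_i (x[i+1] - x[i] - L)^2
def implicit_step(x, v, m=1.0, k=100.0, L=1.0, h=0.01):
    n = len(x)
    x_hat = x + h * v                      # inertial prediction (no forces)
    D = np.zeros((n - 1, n))               # finite-difference operator
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    H = (m / h**2) * np.eye(n) + k * D.T @ D   # Hessian of E (constant, SPD)
    g = k * D.T @ (D @ x_hat - L)              # gradient of E at x_hat
    x_new = x_hat - np.linalg.solve(H, g)      # one Newton step: exact minimizer
    v_new = (x_new - x) / h
    return x_new, v_new

# Stretch a 4-mass chain (rest spacing L = 1.0) and let it relax.
x = np.array([0.0, 1.5, 3.0, 4.5])
v = np.zeros(4)
for _ in range(2000):
    x, v = implicit_step(x, v)
print(np.diff(x))  # spacings settle toward the rest length
```

For nonlinear elastic energies the potential is generally not convex everywhere, which is exactly why uncovering a hidden convex structure, as the researchers describe, is valuable: it restores this kind of guaranteed, well-behaved minimization.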

Quantum mechanics provides truly random numbers on demand

Randomness is incredibly useful. People often draw straws, throw dice or flip coins to make fair choices. Random numbers can enable auditors to make completely unbiased selections. Randomness is also key in security; if a password or code is an unguessable string of numbers, it’s harder to crack. Many of our cryptographic systems today use random number generators to produce secure keys.

But how do you know that a random number is truly random?

Classical computer algorithms can only create pseudorandom numbers, and someone with enough knowledge of the algorithm or the system could manipulate it or predict the next number. An expert in sleight of hand could rig a coin flip to guarantee a heads or tails result. Even the most careful coin flips can have bias; with enough study, their outcomes could be predicted.
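The predictability of pseudorandom numbers is easy to demonstrate: a classical generator is a deterministic algorithm, so anyone who learns its internal state (here, the seed) can reproduce every draw exactly.

```python
import random

# Two generators seeded identically produce the very same "random" stream:
# whoever knows the seed can predict every number in advance.
a = random.Random(42)
b = random.Random(42)

stream_a = [a.randint(0, 9) for _ in range(10)]
stream_b = [b.randint(0, 9) for _ in range(10)]
print(stream_a == stream_b)  # True: fully predictable given the seed
```

Quantum random number generators avoid this weakness because the outcomes of quantum measurements are not determined by any hidden internal state that an attacker could learn.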

‘Optical neural engine’ can solve partial differential equations

Partial differential equations (PDEs) are a class of mathematical problems that represent the interplay of multiple variables, and therefore have predictive power when it comes to complex physical systems. Solving these equations is a perpetual challenge, however, and current computational techniques for doing so are time-consuming and expensive.

Now, research from the University of Utah’s John and Marcia Price College of Engineering is showing a way to speed up this process: encoding those equations in light and feeding them into their newly designed “optical neural engine,” or ONE.

The researchers’ ONE combines diffractive optical neural networks and optical matrix multipliers. Rather than representing PDEs digitally, the researchers represented them optically, with variables represented by the various properties of a light wave, such as its intensity and phase. As a wave passes through the ONE’s series of optical components, those properties gradually shift and change, until they ultimately represent the solution to the given PDE.
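The idea of a wave acquiring the PDE's solution as it passes through a cascade of fixed components has a loose digital analogue: time-stepping a PDE by repeatedly applying one fixed linear operator. The toy below, which is not the Utah hardware, steps the 1-D heat equation this way; each matrix multiplication plays the role of one "layer." Grid size and step parameters are illustrative.

```python
import numpy as np

# 1-D heat equation u_t = alpha * u_xx on a periodic domain, advanced by
# repeatedly applying a single fixed linear operator A.
n, alpha, dx, dt = 64, 1.0, 1.0, 0.2   # dt respects the stability limit dx^2/(2*alpha)
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 1 - 2 * alpha * dt / dx**2
    A[i, (i - 1) % n] = alpha * dt / dx**2
    A[i, (i + 1) % n] = alpha * dt / dx**2

u = np.zeros(n)
u[n // 2] = 1.0        # initial hot spot
for _ in range(200):   # 200 "layers", each a fixed linear transform
    u = A @ u
print(u.max())         # the spike has diffused into a low, smooth bump
```

The optical appeal is that a physical medium applies such a transform at the speed of light and essentially for free, rather than as an O(n²) digital multiplication.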

AI-enabled control system helps autonomous drones stay on target in uncertain environments

An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances in flight presents an enormous challenge for the drone’s flight control system.

To help such a drone stay on target, MIT researchers developed a new machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.

The study is published on the arXiv preprint server.
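The flavor of adaptive control can be sketched with a deliberately simple toy, which is not the MIT algorithm: a one-dimensional "drone" pushed by an unknown constant wind. The controller estimates the wind online and cancels it, so the tracking error is driven to zero even though the disturbance was never known in advance. All gains and names are illustrative.

```python
# Toy adaptive disturbance rejection for x' = u + d, with d an unknown wind.
#   u      = -k * x - d_hat    (feedback plus learned compensation)
#   d_hat' = gamma * x         (adaptation law: nonzero error drives learning)
def simulate(d=1.5, k=2.0, gamma=1.0, dt=0.01, steps=2000):
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = -k * x - d_hat
        x += dt * (u + d)          # plant: position error pushed by the wind
        d_hat += dt * gamma * x    # wind estimate updated from the error
    return x, d_hat

x, d_hat = simulate()
print(x, d_hat)  # x driven near 0; d_hat converges toward the true wind
```

Real adaptive flight controllers face far harder versions of this problem, including time-varying, nonlinear disturbances and full 3-D dynamics, which is where the learning-based components come in.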

Researchers map connections between the brain’s structure and function

Using an algorithm they call the Krakencoder, researchers at Weill Cornell Medicine are a step closer to unraveling how the brain’s wiring supports the way we think and act. The study, published June 5 in Nature Methods, used imaging data from the Human Connectome Project to align neural activity with its underlying circuitry.

Mapping how the brain’s anatomical connections and activity patterns relate to behavior is crucial not only for understanding how the brain works generally but also for identifying biomarkers of disease, predicting outcomes in neurological disorders and designing personalized interventions.

The brain consists of a complex network of interconnected neurons whose collective activity drives our behavior. The structural connectome represents the physical wiring of the brain, the map of how different regions are anatomically connected.
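In practice a connectome, structural or functional, is stored as a symmetric matrix whose entry (i, j) holds the connection strength between brain regions i and j. The toy below, a crude stand-in for the kind of structure-function alignment the Krakencoder learns, compares a synthetic structural matrix with a functional one by correlating their off-diagonal entries. All data are fabricated for illustration.

```python
import numpy as np

# Toy structural connectome (wiring) and functional connectome (activity
# correlations), compared across unique region pairs.
rng = np.random.default_rng(0)
n = 10                                      # brain regions
S = rng.random((n, n)); S = (S + S.T) / 2   # structural connectome (symmetric)
F = 0.8 * S + 0.2 * rng.random((n, n))      # functional one, partly shaped by S
F = (F + F.T) / 2

iu = np.triu_indices(n, k=1)                # indices of unique region pairs
r = np.corrcoef(S[iu], F[iu])[0, 1]         # structure-function coupling
print(r)
```

Methods like the Krakencoder go well beyond such pairwise correlation, learning a shared latent representation into which many structural and functional connectivity measures can be jointly embedded.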

IBM’s Starling quantum computer: 20,000X faster than today’s quantum computers

IBM has just unveiled its boldest quantum computing roadmap yet: Starling, the first large-scale, fault-tolerant quantum computer—coming in 2029. Capable of running 20,000X more operations than today’s quantum machines, Starling could unlock breakthroughs in chemistry, materials science, and optimization.

According to IBM, this is not just a pie-in-the-sky roadmap: they actually have the ability to make Starling happen.

In this exclusive conversation, I speak with Jerry Chow, IBM Fellow and Director of Quantum Systems, about the engineering breakthroughs that are making this possible… especially a radically more efficient error correction code and new multi-layered qubit architectures.

We cover:
- The shift from millions of physical qubits to manageable logical qubits.
- Why IBM is using quantum low-density parity check (qLDPC) codes.
- How modular quantum systems (like Kookaburra and Cockatoo) will scale the technology.
- Real-world quantum-classical hybrid applications already happening today.
- Why now is the time for developers to start building quantum-native algorithms.
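A full qLDPC example is beyond a short snippet, but the underlying parity-check idea can be shown with its classical ancestor, the [7,4] Hamming code: each row of the check matrix H is one parity constraint, and a single bit flip leaves a nonzero "syndrome" that pinpoints the flipped position. Quantum LDPC codes apply the same principle with sparse checks acting on qubits.

```python
import numpy as np

# Classical analogy for parity-check error correction: the [7,4] Hamming code.
# Column j of H is the binary representation of j (LSB in row 0).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codeword = np.array([1, 1, 0, 0, 1, 1, 0])    # valid: every check passes
assert not (H @ codeword % 2).any()

received = codeword.copy()
received[4] ^= 1                              # corrupt bit at position 5
syndrome = H @ received % 2                   # nonzero: an error occurred
flipped = int("".join(map(str, syndrome[::-1])), 2)  # syndrome names the position
received[flipped - 1] ^= 1                    # flip it back to correct
print(flipped)
```

The "low-density" in qLDPC means each check touches only a few qubits and each qubit appears in only a few checks, which is what makes the hardware overhead per logical qubit so much smaller than in earlier surface-code roadmaps.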

00:00 Introduction to the Future of Computing.
01:04 IBM’s Jerry Chow.
01:49 Quantum Supremacy.
02:47 IBM’s Quantum Roadmap.
04:03 Technological Innovations in Quantum Computing.
05:59 Challenges and Solutions in Quantum Computing.
09:40 Quantum Processor Development.
14:04 Quantum Computing Applications and Future Prospects.
20:41 Personal Journey in Quantum Computing.
24:03 Conclusion and Final Thoughts.

Out of the string theory swampland: New models may resolve problem that conflicts with dark energy

String theory has long been touted as physicists’ best candidate for describing the fundamental nature of the universe, with elementary particles and forces described as vibrations of tiny threads of energy. But in the early 21st century, it was realized that most of the versions of reality described by string theory’s equations cannot match up with observations of our own universe.

In particular, conventional string theory’s predictions are incompatible with the observation of dark energy, which appears to be causing our universe’s expansion to speed up, and with viable theories of quantum gravity; instead, the theory predicts a vast ‘swampland’ of impossible universes.

Now, a new analysis by FQxI physicist Eduardo Guendelman, of Ben-Gurion University of the Negev, in Israel, shows that an exotic subset of string models, in which the tension of strings is generated dynamically, could provide an escape route out of the string theory swampland.
