Compact flat-lens system can generate nondiffracting bottle beams

Most laser sources produce Gaussian beams that diverge as they propagate. This natural spreading limits their effectiveness in applications that require light to remain concentrated over long distances. To overcome this challenge, structured light beams have been developed, whose amplitude, phase, and polarization can be carefully controlled.
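The divergence described above follows the standard Gaussian beam-radius formula, w(z) = w0 · sqrt(1 + (z/zR)²). The sketch below is illustrative only; the 1 mm waist and 633 nm wavelength are assumed values, not parameters from the Chiba study:

```python
import math

def beam_radius(z, w0=1e-3, lam=633e-9):
    """Gaussian beam radius w(z) at distance z from the waist.

    w0: waist radius (1 mm assumed), lam: wavelength (633 nm HeNe assumed).
    """
    zR = math.pi * w0 ** 2 / lam          # Rayleigh range: distance over which the beam stays collimated
    return w0 * math.sqrt(1 + (z / zR) ** 2)
```

At one Rayleigh range (about 5 m for these values) the radius has already grown by a factor of sqrt(2), and far beyond it the beam spreads linearly with a divergence angle of lam / (pi · w0); this is exactly the spreading that nondiffracting beams are designed to avoid.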

Among these are Bessel beams, which are generated by the self-interference of laser beams as they propagate through space. However, ideal Bessel beams possess complex ring structures that complicate their practical use. Additionally, existing methods for generating advanced beam shapes, such as optical bottle beams, often involve complex and expensive setups that necessitate precise alignment.

Now, researchers at Chiba University, Japan, have developed a simple and compact method to generate a laser chain beam that remains nondiffracting during free-space propagation.

Novel approach to quantum error correction portends a scalable future for quantum computing

A University of Sydney quantum physicist has developed a new approach to quantum error correction that could significantly reduce the number of physical qubits required to build large-scale, fault-tolerant quantum computers. The study, co-authored by Dr. Dominic Williamson from the School of Physics, is titled “Low-overhead fault-tolerant quantum computation by gauging logical operators” and published in Nature Physics.

The work was done while Dr. Williamson was on sabbatical at global technology firm IBM in California. Elements of the new design have been integrated into IBM’s plan to build large-scale quantum computers.

“We’re at a point where theory and experiment are beginning to align,” Dr. Williamson said. “The big question now is how to design quantum computers that can be scaled efficiently to solve useful problems. Our work provides a promising blueprint.”

Quantum coherence could be preserved at large scales in realistic environments

Quantum states are notoriously fragile, and can be destroyed simply through interactions, measurements, and exposure to their surrounding environments. In a new theoretical study published in Physical Review X, Rohan Mittal and colleagues at the University of Cologne have discovered a new way to protect quantum behavior on large scales within systems driven far from equilibrium. Their results could have promising implications for the design of more robust quantum devices.

When quantum many-body systems are driven out of equilibrium, they undergo decoherence, causing quantum correlations and superpositions to break down. Even when such a system is built from entirely quantum components, the effect can cause its behavior to become indistinguishable from that of a classical system on larger scales, making it unsuitable for technologies such as quantum computing or sensing.

So far, researchers have attempted to solve the decoherence problem by fine-tuning two independent parameters: one to push the system to the boundary between two distinct quantum phases, and another to ensure that quantum coherence is maintained at this boundary. In practice, however, the need to account for two parameters simultaneously has made this approach both fragile and experimentally daunting.

The secrets of black holes and the Higgs mass could be hidden in a 7-dimensional geometry

One of the greatest mysteries of modern physics, the “black hole information paradox,” might have finally found an elegant solution, and the answer could also reveal the origins of the mass of fundamental particles.

In the 1970s, Stephen Hawking demonstrated, through semi-classical calculations, that black holes are not truly black, but emit a weak radiation that causes them to gradually shrink until they disappear.

This process, however, brings with it a massive problem: it seems to cause an irreversible loss of information, violating the unitarity principle of quantum mechanics. In other words, the laws of quantum physics state that information cannot be destroyed, but the evaporation of a black hole suggests otherwise.

A tiny detector for microwave photons could advance quantum tech

Detecting a single particle of light is hard; detecting a single microwave photon is even harder. Microwave photons, the tiny packets of electromagnetic radiation used in current technologies like Wi-Fi and radar, carry about 100,000 times less energy than optical photons.

Many existing quantum technologies depend on detecting individual photons with high reliability. For visible light, this is well established using devices that convert incoming light directly into electrical signals. But at microwave frequencies (0.3–30 GHz), this fails because each individual photon doesn’t carry enough energy to release an electric charge into a material. This means that detecting single microwave photons requires a completely different strategy.
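The 100,000-fold energy gap follows directly from Planck's relation E = h·f. A back-of-envelope check (the specific frequencies, 5 GHz and 500 THz, are illustrative choices, not values from the EPFL work):

```python
h = 6.626e-34                    # Planck constant, J*s
f_microwave = 5e9                # 5 GHz, within the 0.3-30 GHz microwave band
f_optical = 5e14                 # 500 THz, roughly green visible light

E_microwave = h * f_microwave    # ~3.3e-24 J per microwave photon
E_optical = h * f_optical        # ~3.3e-19 J per optical photon
ratio = E_optical / E_microwave  # ~1e5, i.e. about 100,000x

# For comparison, freeing a charge carrier in a typical detector material
# takes on the order of 1 eV ~ 1.6e-19 J, far above E_microwave, which is
# why photoelectric-style detection fails at microwave frequencies.
```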

A long-standing goal has been to realize a simple device capable of continuously detecting microwave photons. Now, scientists at EPFL, led by Pasquale Scarlino, have developed a semiconductor-based detector that takes an important step in that direction.

The most pristine star yet found in the known universe

An unusual team of astronomers used Sloan Digital Sky Survey-V (SDSS-V) data and observations on the Magellan telescopes at Carnegie Science’s Las Campanas Observatory in Chile to discover the most pristine star in the known universe, called SDSS J0715-7334. Their work is published in Nature Astronomy.

Led by the University of Chicago’s Alexander Ji—a former Carnegie Observatories postdoctoral fellow—and including Carnegie astrophysicist Juna Kollmeier—who leads SDSS, now in its fifth generation—the research team identified a star from just the second generation of celestial objects in the cosmos, which formed only a few hundred million years after the universe began.

“These pristine stars are windows into the dawn of stars and galaxies in the universe,” Ji explained. Several of his and Kollmeier’s co-authors on the paper are undergraduate students from UChicago, whom Ji brought to Las Campanas on an observing trip for spring break last year. “My first visit to LCO is where I really fell in love with astronomy, and it was special to share such a formative experience with my students.”

Small quantum system outperforms large classical networks in real-world forecasting

Can a handful of atoms outperform a much larger digital neural network on a real-world task? The answer may be yes. In a study published in Physical Review Letters, a team led by Prof. Peng Xinhua and Assoc. Prof. Li Zhaokai from the University of Science and Technology of China (part of the Chinese Academy of Sciences) demonstrated that a quantum processor comprising just nine interacting spins outperforms classical networks with thousands of nodes in realistic weather forecasting tasks.

By exploiting unique quantum features such as superposition and entanglement, quantum devices offer new ways to represent and process information.

Recent experiments have shown their advantages in specialized benchmark tasks, but extending these gains to real-world applications remains a challenge. In particular, many quantum approaches rely on complex circuits that are difficult to implement accurately on today’s noisy hardware.

Research moves closer to ‘smart’ sensors in knee replacements

If you have a knee replacement, imagine pointing your phone at your knee and pulling up an app that tells you how much stress the artificial joint is experiencing. Knowing the activities that cause the biggest problems—which can lead to a second replacement surgery—would be invaluable. Research led by Binghamton University is closer to making this technology a reality.

Professor Shahrzad “Sherry” Towfighian—a faculty member from the Thomas J. Watson College of Engineering and Applied Science’s Department of Mechanical Engineering—has worked toward “smart-knee” tech over the past decade.

According to the American College of Rheumatology, nearly 800,000 total knee replacements are done every year in the U.S., and that number is expected to rise sharply by 2030 as the population ages and sports injuries become more common.

How the human brain builds our sense of time

How does Jannik Sinner manage to hit the ball at exactly the right moment, with remarkable precision? And how do we, in everyday life, perceive the duration of events around us? The answer lies in how the brain constructs the perception of time, as shown by research published in PLOS Biology by Valeria Centanino, Gianfranco Fortunato, and Domenica Bueti. Starting from what we see—such as an approaching ball—temporal information is processed by the brain through progressively more complex stages: from the occipital visual cortex, to parietal and premotor areas, and finally to frontal regions.

Using high-field functional magnetic resonance imaging (fMRI) and measuring time perception in healthy volunteers, the researchers shed light on what happens in the brain when we estimate the duration of a visual stimulus. “Our results show that time perception is not a unitary process, but the outcome of multiple processing stages distributed across the cerebral cortex,” the authors explain. “Each stage contributes differently, from encoding physical duration to constructing the subjective experience of time.”

In an initial stage, occipital visual areas encode duration through gradual (monotonic) neural responses: the longer the stimulus, the stronger the neural response. This information is then transformed in parietal and premotor regions into selective (unimodal) representations, where distinct neural populations respond preferentially to specific durations, enabling the “readout” of time. Finally, higher-order regions, including the frontal cortex and anterior insula, are involved in the subjective categorization of duration, shaping how time is perceived.
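The two coding schemes above, monotonic and duration-tuned (unimodal), can be sketched as a toy model. All numbers here (preferred durations, tuning widths, gains) are invented for illustration and are not taken from the study:

```python
import math

# Stage 1 (occipital, monotonic): response grows with stimulus duration
def monotonic_response(duration, gain=1.0):
    return gain * duration

# Stage 2 (parietal/premotor, unimodal): each population responds
# preferentially to durations near its preferred value
def tuned_response(duration, preferred, width=0.15):
    return math.exp(-((duration - preferred) ** 2) / (2 * width ** 2))

preferred_durations = [0.2, 0.4, 0.6, 0.8, 1.0]  # seconds, illustrative

def read_out(duration):
    """'Read out' time as the activity-weighted average preferred duration."""
    acts = [tuned_response(duration, p) for p in preferred_durations]
    return sum(a * p for a, p in zip(acts, preferred_durations)) / sum(acts)
```

A stimulus in the middle of the tuned range is recovered accurately by the population readout, while stimuli near the edges are pulled slightly inward, a bias-like effect this kind of tuned-population code naturally produces.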

New memristor design uses built-in oxygen gradient to bring stability to reinforcement learning

In a recent study published in Nature Communications, researchers created a memristor that uses a built-in oxygen gradient to produce slow, stable conductance changes, enabling a reinforcement learning (RL) algorithm to learn faster and more stably than conventional approaches.

Reinforcement learning stands as one of the most promising ways to achieve continual learning in AI. The idea is to replicate how biological systems acquire and adapt knowledge slowly over time. The brain achieves this via ion gradients that regulate slow, directional signaling across cell membranes. Replicating this in hardware is a key goal of neuromorphic computing.

With their ability to mimic synaptic behavior, memristors have long been considered strong candidates for this. However, most existing devices suffer from unpredictable, abrupt conductance changes, making sustained and stable learning difficult.
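The contrast between slow, stable conductance changes and abrupt, unpredictable ones can be illustrated with a toy value-tracking loop. This is not the paper's algorithm or a device model; the update sizes, reward signal, and noise level are all invented for illustration:

```python
import random
import statistics

random.seed(0)
true_reward = 0.7  # assumed reward level the learner should settle on

def run(update_sizes, steps=1000):
    """Track a noisy reward with conductance-like weight updates."""
    w, history = 0.0, []
    for _ in range(steps):
        reward = true_reward + random.gauss(0, 0.1)   # noisy reward signal
        w += random.choice(update_sizes) * (reward - w)
        history.append(w)
    return history

gradual = run([0.01])       # small, consistent steps (gradient-stabilized device)
abrupt = run([0.0, 0.9])    # unpredictable large jumps (conventional device)

spread_gradual = statistics.stdev(gradual[-200:])
spread_abrupt = statistics.stdev(abrupt[-200:])
```

With small consistent steps the estimate converges near the true reward and stays there; erratic large jumps keep it fluctuating, mirroring the stability problem that abrupt conductance changes cause for sustained learning.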
