
New Test for Backwards Time Travel Quantum Simulations with Dr. Kater Murch

Researchers at the University of Cambridge have developed simulations based on quantum entanglement that mimic the effects of hypothetical backward time travel, allowing experimentalists, in effect, to retroactively adjust past actions. By manipulating entangled particles, they aim to solve complex problems in quantum metrology, such as improving experiment outcomes even when the optimal conditions are only known after the fact. Although this approach doesn’t allow actual time travel, it uses the principles of quantum mechanics to refine scientific experiments and achieve better results in a controlled and probabilistic manner.
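
To make the mechanism concrete, the sketch below is a minimal numpy toy of post-selected teleportation, the textbook effect that such entanglement-based "effective retrocausality" protocols exploit: a probe qubit sent into the experiment is entangled beforehand with a partner kept in the lab, and once the ideal probe state becomes known it is teleported onto the probe by a joint measurement involving the partner, keeping only the runs that need no correction. The single-qubit setting, variable names, and one-in-four success rate are generic teleportation facts assumed for illustration, not details of the Cambridge work.

```python
# A minimal toy of post-selected teleportation in numpy. Qubit order:
#   0 = fresh qubit prepared in the (later-discovered) ideal state |psi>
#   1 = partner qubit kept in the lab
#   2 = probe qubit that was already "sent into the experiment"
import numpy as np

rng = np.random.default_rng(0)
theta, phi = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # |Phi+> shared by qubits 1 and 2
state = np.kron(psi, bell)                        # full three-qubit state

# Project qubits (0, 1) onto |Phi+>: the Bell-measurement outcome that needs
# no Pauli correction on the probe. Keeping only these runs is the post-selection.
projector = np.kron(np.outer(bell, bell.conj()), np.eye(2))
kept = projector @ state
p_success = np.vdot(kept, kept).real              # probability of that outcome, = 1/4

# State of the probe (qubit 2) in the kept runs: contract out qubits 0 and 1.
probe = kept.reshape(4, 2).T @ bell.conj()
probe /= np.linalg.norm(probe)

print(f"post-selection succeeds with probability {p_success:.3f}")    # ~0.25
print(f"overlap with ideal state: {abs(np.vdot(psi, probe))**2:.6f}")  # ~1.0
```

In the kept quarter of runs the probe behaves as if it had carried the ideal state all along, which is the "retroactive adjustment" referred to above; the remaining runs are discarded, which is why the improvement is probabilistic rather than guaranteed.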

A quantum neural network can see optical illusions like humans do. Could it be the future of AI?

Optical illusions, quantum mechanics and neural networks might seem to be quite unrelated topics at first glance. However, in new research published in APL Machine Learning, I have used a phenomenon called “quantum tunneling” to design a neural network that can “see” optical illusions in much the same way humans do.
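
As a rough illustration of the idea (not the architecture from the APL Machine Learning paper), one can replace a neuron's sigmoid with the textbook transmission probability of a particle tunneling through a rectangular barrier; the barrier height, width, and the mapping from pre-activation to "energy" below are assumptions made purely for this sketch.

```python
import numpy as np

def tunneling_activation(x, v0=1.0, width=2.0, eps=1e-9):
    """Transmission probability through a rectangular barrier of height v0 and
    width `width` (dimensionless units, 2m = hbar = 1), with the neuron's
    pre-activation x treated as the particle energy. The curve rises from ~0
    to ~1 around x = v0 like a sigmoid, with mild oscillations above the barrier.
    """
    e = np.clip(np.atleast_1d(np.asarray(x, dtype=float)), eps, None)
    t = np.empty_like(e)
    below = e < v0

    # E < V0: evanescent (tunneling) regime
    kappa = np.sqrt(v0 - e[below])
    t[below] = 1.0 / (1.0 + v0**2 * np.sinh(kappa * width)**2
                      / (4.0 * e[below] * (v0 - e[below])))

    # E >= V0: over-the-barrier regime
    k = np.sqrt(np.clip(e[~below] - v0, eps, None))
    t[~below] = 1.0 / (1.0 + v0**2 * np.sin(k * width)**2
                       / (4.0 * e[~below] * (e[~below] - v0 + eps)))
    return t

# An ambiguous stimulus maps to an energy near the top of the barrier, so the
# transmission probability is intermediate and repeated "percepts" sampled from
# it flip between the two interpretations, loosely like a bistable illusion.
p_flip = float(tunneling_activation(0.95)[0])
percepts = np.random.default_rng(1).random(12) < p_flip
print(f"P(alternative percept) = {p_flip:.2f}", percepts.astype(int))
```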

My neural network did well at simulating human perception of the famous Necker cube and Rubin’s vase illusions, and in fact performed better than some much larger conventional neural networks used in computer vision.

This work may also shed light on the question of whether artificial intelligence (AI) systems can ever truly achieve something like human cognition.

Deriving Fundamental Constants from Three-Beam Collisions

A long-standing prediction of quantum electrodynamics is that high-energy photons can scatter off each other. However, this process has yet to be observed because dedicated experiments have an extremely low signal-to-noise ratio. Now Alexander Macleod at the Extreme Light Infrastructure, Czech Republic, and Ben King at the University of Plymouth, UK, have designed an experiment that could achieve a high-enough signal-to-noise ratio to measure the phenomenon [1]. Researchers could use such measurements to derive the values of fundamental constants in quantum electrodynamics and then set constraints on various extensions to the standard model of particle physics.

Conventionally, scientists have looked for evidence of photon–photon scattering by colliding pairs of laser beams. Macleod and King instead propose colliding three laser beams: an x-ray beam and two high-power optical beams. The two optical beams provide the photons that scatter off each other, and the x-ray beam imparts a momentum kick to the scattered photons. This kick alters the trajectory of the photons and spatially separates them from much of the experimental background. As a result, in the detection region, the signal-to-noise ratio is higher than that of two-beam setups.
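
As a purely back-of-envelope illustration of why a momentum kick helps (this is not the authors' calculation, and the photon energies and geometry below are assumed for the example), photon momentum scales with photon energy, so a transverse kick of a few electron-volts on a photon travelling along a ~10 keV x-ray beam deflects it by roughly the ratio of the two energies:

```python
# Toy small-angle kinematics: a photon carrying ~10 keV of momentum that picks
# up a transverse kick equal to two optical-photon momenta (~1.55 eV each,
# i.e. an 800 nm laser) is deflected by roughly p_transverse / p_longitudinal.
# All numbers are assumptions for illustration, not parameters from the paper.
E_xray_eV = 10_000.0          # assumed x-ray photon energy
E_optical_eV = 1.55           # assumed optical photon energy
detector_distance_m = 1.0     # assumed drift distance to the detector

deflection_rad = 2 * E_optical_eV / E_xray_eV          # photon momentum scales with energy
offset_mm = deflection_rad * detector_distance_m * 1e3

print(f"deflection ~ {deflection_rad * 1e3:.2f} mrad")
print(f"offset at {detector_distance_m:.0f} m ~ {offset_mm:.2f} mm from the unscattered beam")
```

Even a sub-milliradian separation of this order can move the rare signal photons away from the intense unscattered beams, which is the sense in which the kick raises the signal-to-noise ratio in the detection region.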

Macleod and King consider how their setup could be realized at two existing research facilities: the European X-Ray Free-Electron Laser facility in Germany, as part of the planned BIREF@HIBEF experiment, and the SPring-8 Angstrom Compact Free Electron Laser in Japan. They then show that the technology used at these facilities should be sufficient to measure photon–photon scattering. Macleod says that such a demonstration would be important for researchers working on “high-power lasers, strong-field physics, and quantum electrodynamics.”

Different qubit architecture could enable easier manufacturing of quantum computer building blocks

Scientists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have shown that a type of qubit whose architecture is more amenable to mass production can perform comparably to qubits currently dominating the field. With a series of mathematical analyses, the scientists have provided a roadmap for simpler qubit fabrication that enables robust and reliable manufacturing of these quantum computer building blocks.

New device simplifies manipulation of 2D materials for twistronics

A discovery six years ago took the condensed-matter physics world by storm: Ultra-thin carbon stacked in two slightly askew layers became a superconductor, and changing the twist angle between layers could toggle their electrical properties. The landmark 2018 paper describing “magic-angle graphene superlattices” launched a new field called “twistronics,” and the first author was then-MIT graduate student and recent Harvard Junior Fellow Yuan Cao.

Together with Harvard physicists Amir Yacoby, Eric Mazur, and others, Cao and colleagues have built on that foundational work, smoothing a path for more twistronics science by inventing an easier way to twist and study many types of materials.

A new paper in Nature describes the team’s fingernail-sized machine that can twist thin materials at will, replacing the need to fabricate twisted devices one by one. Thin 2D materials whose properties can be studied and manipulated easily have immense implications for higher-performance transistors, solar cells, and quantum computers, among other things.

Osaka University and RIKEN’s Flagged Weight Optimization Illuminates Color Codes in Quantum Computing

In a recent paper published in PRX Quantum, a team of researchers from Osaka University and RIKEN presented an approach to improve the fault-tolerance of color codes, a type of quantum error correction (QEC) code. Their method, known as Flagged Weight Optimization (FWO), targets the underlying challenges of color-code architectures, which historically suffer from lower thresholds under circuit-level noise. By optimizing the decoder weights based on the outcomes of flag qubits, this method improves the threshold values of color codes.
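
In matching-style decoders, each possible fault is given a weight of the form log((1 − p)/p), so less likely faults are avoided; the sketch below illustrates the general idea of making that probability, and hence the weight, conditional on whether a flag qubit fired. The probabilities and the helper function are placeholders for illustration, not the actual FWO procedure or noise model from the PRX Quantum paper.

```python
import math

def edge_weight(p_error: float) -> float:
    """Standard log-likelihood weight used by matching-style QEC decoders:
    less probable faults get larger weights and are avoided by the decoder."""
    return math.log((1.0 - p_error) / p_error)

def flag_conditioned_weight(p_no_flag: float, p_flagged: float, flag_fired: bool) -> float:
    """Toy version of a flag-conditioned weight: the same fault location is
    assigned a different error probability depending on whether the flag
    qubit in its stabilizer-measurement circuit reported an outcome of 1.
    The probabilities here are placeholders, not values from the paper."""
    return edge_weight(p_flagged if flag_fired else p_no_flag)

# Example: a fault location prone to correlated (hook-type) errors becomes much
# more likely once its flag fires, so the decoder treats it as "cheaper" then.
print(flag_conditioned_weight(p_no_flag=1e-3, p_flagged=5e-2, flag_fired=False))  # ~6.9
print(flag_conditioned_weight(p_no_flag=1e-3, p_flagged=5e-2, flag_fired=True))   # ~2.9
```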

Color codes are an alternative to surface codes in quantum error correction that implement all Clifford gates transversally, making them a potential route to low-overhead quantum computing, the paper notes. However, their practical use has so far been limited by relatively low fault-tolerance thresholds under circuit-level noise. Traditional stabilizer-measurement schemes, which involve high-weight stabilizers acting on numerous qubits, introduce substantial circuit depth and additional errors, ultimately lowering overall performance.

The research team focused on two color-code lattices—the (4.8.8) and (6.6.6) color codes. The team noted that while these codes are considered topologically advantageous for QEC, their previous thresholds were relatively low, making them less effective for real-world applications. For example, the threshold for the (4.8.8) color code was previously around 0.14%, limiting its use in fault-tolerant computing.