Though quantum mechanics is an incredibly successful theory, nobody knows what it means. Scientists now must confront philosophy.

Quantum computers promise to be highly energy-efficient and extremely powerful machines. But for them to realize their full potential in new applications such as artificial intelligence and machine learning, researchers are hard at work perfecting the underlying electronics that process their calculations. A team at Fraunhofer IZM is working on superconducting connections that measure a mere ten micrometers in thickness, moving the industry a substantial step closer to a future of commercially viable quantum computers.
With the extreme computing power they promise, quantum computers have the potential to become the driving force for technological innovation across modern industry. Unlike the run-of-the-mill computers of today, they do not work with bits but with qubits: units of information that are no longer restricted to the binary states of 1 or 0.
Thanks to quantum superposition and entanglement, qubits offer a great leap forward in sheer speed and power and in the complexity of the calculations they can handle. One simple rule still holds, though: more qubits mean more speed and more computing power.
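The difference can be illustrated with a few lines of linear algebra. The NumPy sketch below is purely illustrative and not tied to Fraunhofer IZM's hardware: it represents a qubit as a two-component complex state vector, puts it into superposition with a Hadamard gate, and entangles two qubits into a Bell state.

```python
# A minimal NumPy sketch of how a qubit differs from a bit: its state is a
# complex vector that can sit in superposition and become entangled.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)           # basis state |0>
ket1 = np.array([0, 1], dtype=complex)           # basis state |1>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
plus = H @ ket0                                   # equal superposition of |0> and |1>
print("P(0), P(1) =", np.abs(plus) ** 2)          # [0.5, 0.5]

# Two qubits: a CNOT after the Hadamard produces a Bell state, so the joint
# state can no longer be described as two independent qubits.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
print("Bell state amplitudes:", bell.round(3))    # (|00> + |11>) / sqrt(2)
```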
In the physical world, time marches in one direction, but things aren't so straightforward in the quantum realm. Researchers have discovered that it's possible to speed up, slow down, or reverse the flow of time in a quantum system. This isn't exactly time travel, but rather a way of steering a quantum system into, or back to, states associated with different points in its evolution.
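There is a textbook analogue of what "reverting to an earlier state" means: the unitary evolution of a closed quantum system is invertible. The toy sketch below uses a made-up two-level Hamiltonian and is not the researchers' actual protocol; it simply evolves a state forward and then applies the inverse unitary to recover the earlier state.

```python
# Hedged illustration: evolve a closed two-level system forward under a unitary,
# then apply the inverse (conjugate transpose) to return to the earlier state.
# The Hamiltonian and evolution time here are arbitrary toy values.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)   # toy Hamiltonian (hbar = 1)
psi0 = np.array([1, 0], dtype=complex)                    # initial state |0>

U = expm(-1j * H * 2.0)          # forward evolution for "time" t = 2
psi_t = U @ psi0                 # state at the later time

psi_back = U.conj().T @ psi_t    # apply the inverse evolution
print(np.allclose(psi_back, psi0))   # True: the earlier state is recovered
```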
The CRYSTALS-Kyber public-key encryption and key-encapsulation mechanism recommended by NIST in July 2022 for post-quantum cryptography has been broken. Researchers from the KTH Royal Institute of Technology in Stockholm, Sweden, used recursive-training AI combined with side-channel attacks.
A side-channel attack exploits measurable information obtained from a device running the target implementation, via channels such as timing or power consumption. The revolutionary aspect of the research was to apply deep learning to side-channel differential analysis.
“Deep learning-based side-channel attacks,” say the researchers, “can overcome conventional countermeasures such as masking, shuffling, random delays insertion, constant-weight encoding, code polymorphism, and randomized clock.”
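The core idea can be sketched in a few lines: collect many traces, label each with a secret-dependent intermediate value, and let a neural network learn the leakage. The example below is only a schematic of that approach, using synthetic power traces and scikit-learn's MLPClassifier; it is not the KTH team's actual attack on Kyber, and the leakage model is invented for illustration.

```python
# Highly simplified sketch of deep-learning side-channel analysis: train a
# neural network to predict a secret-dependent bit from noisy power traces.
# The traces are synthetic toy data, not measurements of a real implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traces, trace_len = 4000, 100
secret_bit = rng.integers(0, 2, size=n_traces)     # the bit the attacker wants

# Toy leakage model: the secret bit slightly shifts power draw at a few samples.
traces = rng.normal(0.0, 1.0, size=(n_traces, trace_len))
traces[:, 40:45] += 0.3 * secret_bit[:, None]       # small, noise-buried leak

X_train, X_test, y_train, y_test = train_test_split(traces, secret_bit, test_size=0.25)
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
net.fit(X_train, y_train)
print("recovery accuracy on held-out traces:", net.score(X_test, y_test))
```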
Today’s news from the frontier of quantum computing includes Amazon Web Services’ release of cloud-based simulation software for modeling the electromagnetic properties of quantum hardware, Google’s latest technological advance aimed at lowering the error rate of quantum calculations, and new recommendations about the public sector’s role on the frontier.
Amazon opens a ‘Palace’ for designers
Amazon Web Services is introducing an open-source software platform called Palace (which stands for PArallel, LArge-Scale Computational Electromagnetics) that can perform 3D simulations of complex electromagnetic models and enable the design of quantum computing hardware. The code is available via GitHub and can be used in conjunction with AWS ParallelCluster.
Researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have created a new type of quantum material whose atomic scaffolding, or lattice, has been dramatically warped into a herringbone pattern.
The resulting distortions are “huge” compared to those achieved in other materials, said Woo Jin Kim, a postdoctoral researcher at the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC who led the study.
“This is a very fundamental result, so it’s hard to make predictions about what may or may not come out of it, but the possibilities are exciting,” said SLAC/Stanford Professor and SIMES Director Harold Hwang.
When an image of an object is formed, such as a photograph taken by a cell phone, light that has interacted with the object, either passing through it or bouncing off it, is captured by the detector in the phone.
Some 25 years ago, scientists devised another, less direct way to do this. In its conventional form, the technique combines information from two detectors: one capturing light that has interacted with the object, and one capturing light that has not interacted with the object at all. Because it is the light that has never interacted with the object that is used to obtain the image, the technique has taken on the name "ghost imaging."
When entangled light is used, its quantum properties can be exploited to do this at very low light levels. That is a large advantage for light-sensitive samples in biological imaging, where too much light can damage or change the sample and thereby destroy the very thing one wishes to observe, a long-standing conundrum in the field.
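The reconstruction behind ghost imaging is essentially a correlation between the two detectors' records. The sketch below is a classical, computational-ghost-imaging toy with made-up speckle patterns and a hypothetical square object; it illustrates only the correlation step, not the entangled-light experiments described here.

```python
# Minimal sketch of the correlation at the heart of ghost imaging: a "bucket"
# detector records only total light after the object, the reference patterns
# never touch the object, and correlating the two recovers the image. Toy data.
import numpy as np

rng = np.random.default_rng(1)
size, n_patterns = 32, 10000

# Hypothetical object: a bright square on a dark background (transmission mask).
obj = np.zeros((size, size))
obj[10:22, 10:22] = 1.0

patterns = rng.random((n_patterns, size, size))     # reference speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))          # single-pixel "bucket" signals

# Correlate the bucket signal with the patterns that never met the object:
# G(x, y) = <B * I(x, y)> - <B> * <I(x, y)>
ghost = (bucket[:, None, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
print("correlation with true object:", np.corrcoef(ghost.ravel(), obj.ravel())[0, 1])
```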