
IBM has just unveiled its boldest quantum computing roadmap yet: Starling, the first large-scale, fault-tolerant quantum computer—coming in 2029. Capable of running 20,000X more operations than today’s quantum machines, Starling could unlock breakthroughs in chemistry, materials science, and optimization.

According to IBM, this is not just a pie-in-the-sky roadmap: the company says it has a concrete engineering path to actually building Starling.

In this exclusive conversation, I speak with Jerry Chow, IBM Fellow and Director of Quantum Systems, about the engineering breakthroughs that are making this possible… especially a radically more efficient error correction code and new multi-layered qubit architectures.

We cover:
- The shift from millions of physical qubits to a manageable number of logical qubits.
- Why IBM is using quantum low-density parity check (qLDPC) codes (a toy illustration follows this list).
- How modular quantum systems (like Kookaburra and Cockatoo) will scale the technology.
- Real-world quantum-classical hybrid applications already happening today.
- Why now is the time for developers to start building quantum-native algorithms.
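
The qLDPC bullet above is easier to appreciate with a toy classical analogue. The sketch below is not IBM's quantum code, and it glosses over everything that makes the quantum case hard; it only illustrates the shared idea that a sparse parity-check matrix lets a few low-weight checks detect and correct errors, which quantum LDPC codes extend to qubits far more efficiently than earlier surface-code schemes.

```python
import numpy as np

# Toy *classical* parity-check illustration, not IBM's qLDPC code: a 3-bit
# repetition code protects one logical bit with two sparse (low-density) checks.
H = np.array([[1, 1, 0],   # check 1: bits 0 and 1 should agree
              [0, 1, 1]])  # check 2: bits 1 and 2 should agree

codeword = np.array([1, 1, 1])   # logical "1" encoded redundantly
received = codeword.copy()
received[2] ^= 1                 # a single bit-flip error on the last bit

syndrome = H @ received % 2      # a nonzero syndrome flags (and locates) the error
print("syndrome:", syndrome)     # -> [0 1]: only the second check fails

# Majority-vote decoding still recovers the logical bit despite the error.
logical_bit = int(received.sum() > len(received) / 2)
print("decoded logical bit:", logical_bit)
```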

00:00 Introduction to the Future of Computing.
01:04 IBM’s Jerry Chow.
01:49 Quantum Supremacy.
02:47 IBM’s Quantum Roadmap.
04:03 Technological Innovations in Quantum Computing.
05:59 Challenges and Solutions in Quantum Computing.
09:40 Quantum Processor Development.
14:04 Quantum Computing Applications and Future Prospects.
20:41 Personal Journey in Quantum Computing.
24:03 Conclusion and Final Thoughts.

A groundbreaking recent development by scientists from the U.S. National Science Foundation (NSF) National Solar Observatory (NSO) and the New Jersey Institute of Technology (NJIT) is changing that by using adaptive optics to remove the blur.

From smartphones and TVs to credit cards, technologies that manipulate light, many of them based on holography, are deeply embedded in our daily lives. However, conventional holographic technologies have faced limitations, particularly in displaying multiple images on a single screen and in maintaining high-resolution image quality.

Recently, a research team led by Professor Junsuk Rho at POSTECH (Pohang University of Science and Technology) has developed a groundbreaking metasurface technology that can display up to 36 high-resolution images on a surface thinner than a human hair. This research has been published in Advanced Science.

This achievement is driven by a special nanostructure known as a metasurface. Hundreds of times thinner than a human hair, the metasurface is capable of precisely manipulating light as it passes through. The team fabricated nanometer-scale pillars using silicon nitride, a material known for its robustness and excellent optical transparency. These pillars, referred to as meta-atoms, allow for fine control of light on the metasurface.
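
To make the idea of a meta-atom a bit more concrete, here is a deliberately simplified effective-index sketch of how a dielectric nanopillar delays light passing through it. The numbers (wavelength, pillar height, refractive index) and the crude area-weighted index are assumptions for illustration only; the actual POSTECH design relies on full-wave simulation and far richer multiplexing than this.

```python
import numpy as np

# Simplified effective-index picture of how a nanopillar "meta-atom" delays light.
# Illustrative only: the values below are assumptions, not the POSTECH design.
wavelength_nm = 532.0        # green light (assumed)
pillar_height_nm = 800.0     # sub-micron pillar, far thinner than a human hair
n_silicon_nitride = 2.0      # approximate refractive index of SiN in the visible
n_air = 1.0

def phase_delay(fill_fraction: float) -> float:
    """Extra phase (radians) picked up vs. free space for a pillar that fills
    the given fraction of its unit cell (crude area-weighted effective index)."""
    n_eff = fill_fraction * n_silicon_nitride + (1 - fill_fraction) * n_air
    return 2 * np.pi * (n_eff - n_air) * pillar_height_nm / wavelength_nm

# Sweeping pillar size spans roughly a full 0..2*pi of phase, which is what lets
# an array of such pillars encode a hologram point by point.
for f in (0.2, 0.5, 0.8):
    print(f"fill fraction {f:.1f} -> phase {phase_delay(f):.2f} rad")
```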

Mitigating climate change is prompting all manner of changes: from the rapid transition to EVs to an explosion in renewables capacity. But these changes must be underpinned by a transformation of electricity grids to accommodate an energy sector that looks very different from today's.

The current conventional wisdom on deep neural networks (DNNs) is that, in most cases, simply scaling up a model’s parameters and adopting computationally intensive architectures will result in large performance improvements. Although this scaling strategy has proven successful in research labs, real-world industrial deployments introduce a number of complications, as developers often need to repeatedly train a DNN, transmit it to different devices, and ensure it can perform under various hardware constraints with minimal accuracy loss.

The research community has thus become increasingly interested in reducing such models’ storage size on devices while also improving their run-time. Explorations in this area have tended to follow one of two avenues: reducing model size via compression techniques, or using model pruning to reduce computation burdens.
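
As a concrete, intentionally generic picture of the pruning avenue, the sketch below zeroes out the smallest-magnitude weights of a layer and stores only the survivors; none of the numbers or design choices here come from the paper.

```python
import numpy as np

# Minimal illustration of magnitude pruning: zero out the smallest weights of a
# layer, then store only the survivors (a crude form of compression).
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

sparsity = 0.9                                    # drop 90% of the weights
threshold = np.quantile(np.abs(weights), sparsity)
mask = np.abs(weights) >= threshold
pruned = weights * mask

kept = int(mask.sum())
print(f"kept {kept}/{weights.size} weights ({kept / weights.size:.1%})")

# Sparse storage: keep only the indices and values of the surviving weights.
indices = np.flatnonzero(mask).astype(np.int32)
values = pruned.ravel()[indices]
print(f"dense: {weights.nbytes} bytes, sparse: {indices.nbytes + values.nbytes} bytes")
```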

In the new paper LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification, a team from the University of Maryland and Google Research proposes a way to “bridge the gap” between the two approaches with LilNetX, an end-to-end trainable technique for neural networks that jointly optimizes model parameters for accuracy, model size on disk, and computation on any given task.
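
To give a rough sense of what “jointly optimizes” can mean in practice, here is a generic toy objective that adds sparsity and quantization-style penalties to a task loss. This is emphatically not the LilNetX formulation; it is only a minimal sketch of trading off accuracy, storage, and computation inside a single differentiable loss.

```python
import torch
import torch.nn.functional as F

# Generic sketch of a joint objective: task accuracy plus penalties that act as
# crude proxies for on-disk size and computation. Not the LilNetX method.
model = torch.nn.Linear(128, 10)
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))

lambda_sparse, lambda_size = 1e-4, 1e-4
task_loss = F.cross_entropy(model(x), y)

# Proxy for computation: an L1 penalty pushes weights toward zero (prunable).
sparsity_penalty = sum(p.abs().sum() for p in model.parameters())
# Proxy for storage: penalize deviation from a coarse quantization grid.
size_penalty = sum(((p - p.mul(8).round().div(8)) ** 2).sum()
                   for p in model.parameters())

loss = task_loss + lambda_sparse * sparsity_penalty + lambda_size * size_penalty
loss.backward()   # all three terms shape the gradients jointly
print(float(task_loss), float(sparsity_penalty), float(size_penalty))
```

In a real deployment the penalty weights would be tuned per target device, which is part of what makes the joint formulation attractive compared with compressing and pruning as separate post-hoc steps.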

A team of astronomers led by Michael Janssen (Radboud University, The Netherlands) has trained a neural network with millions of synthetic black hole data sets. Based on the network and data from the Event Horizon Telescope, they now predict, among other things, that the black hole at the center of our Milky Way is spinning at near top speed.

The astronomers have published their results and methodology in three papers in the journal Astronomy & Astrophysics.

In 2019, the Event Horizon Telescope Collaboration released the first image of a supermassive black hole at the center of the galaxy M87. In 2022, they presented an image of the black hole in our Milky Way, Sagittarius A*. However, the data behind the images still contained a wealth of hard-to-crack information. An international team of researchers trained a neural network to extract as much information as possible from the data.
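
The workflow described here (train on large numbers of simulated observations, then apply the network to real data) is a form of simulation-based inference. The toy sketch below uses an invented stand-in forward model and scikit-learn's MLPRegressor; nothing in it reflects the team's actual pipeline or the Event Horizon Telescope data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def simulate_observation(spin: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a black-hole image/visibility simulator."""
    features = np.stack([spin, spin**2, np.sin(3 * spin)], axis=1)
    return features + 0.05 * rng.normal(size=features.shape)

# Step 1: generate synthetic training data with known "ground truth" spins.
spins = rng.uniform(0.0, 1.0, size=10_000)
observations = simulate_observation(spins)

# Step 2: train a small neural network to map observations back to spin.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(observations, spins)

# Step 3: apply the trained network to a new (here: still simulated) observation.
test_obs = simulate_observation(np.array([0.9]))
print("estimated spin:", float(model.predict(test_obs)[0]))
```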

Open-source deep-learning framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods — NVIDIA/physicsnemo

The National Institute of Information and Communications Technology (NICT) of Japan, in collaboration with Sony Semiconductor Solutions Corporation (Sony), has developed the world’s first practical surface-emitting laser that employs quantum dots (QDs) as the optical gain medium for use in optical fiber communication systems.

This achievement was made possible by NICT’s high-precision technology and Sony’s advanced semiconductor processing technology. The surface-emitting laser developed in this study incorporates nanoscale semiconductor structures, called quantum dots, as the light-emitting materials. This innovation not only facilitates the miniaturization and reduced power consumption of light sources in optical fiber communication systems but also offers potential cost reductions and enhanced output through integration.

The results of this research are published in Optics Express.

A team of researchers at the Facility for Rare Isotope Beams (FRIB) at Michigan State University (MSU) has discovered that cobalt-70 isotopes form different nuclear shapes when their energy levels differ only slightly. The findings, published in Nature Communications Physics, shed light on the dynamic, complex nature of exotic nuclear particles.

The team included Artemis Spyrou, professor of physics at FRIB and in the MSU Department of Physics and Astronomy; Sean Liddick, associate professor of chemistry at FRIB and in the MSU Department of Chemistry, and head of the Experimental Nuclear Science Department at FRIB; Alex Brown, professor of physics at FRIB; and Cade Dembski, a former FRIB student research assistant. Dembski, now working on his Ph.D. at the University of Notre Dame, served as the paper’s lead author.

“When we first started this project, it was motivated by the astrophysical side of nuclear science research, instead of focusing on nuclear structure,” Dembski said. “As we continued with our analysis, though, we couldn’t quite understand all of the patterns we were seeing. It turned out the reason was due to some interesting nuclear structure effects that we were not expecting, and we ended up writing the paper about those effects.”