Expanding the IBM Quantum Roadmap to anticipate the future of quantum-centric supercomputing

We’re excited to present an update to the IBM Quantum roadmap, and our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources.


Two years ago, we issued the first draft of that map: our ambitious three-year plan to develop quantum computing technology, called our development roadmap. Since then, our exploration has yielded new discoveries and insights that have allowed us to refine that map and travel even further than we’d planned. Today, we’re excited to present an update to that map: our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources alone.

Our goal is to build quantum-centric supercomputers. The quantum-centric supercomputer will incorporate quantum processors, classical processors, quantum communication networks, and classical networks, all working together to completely transform how we compute. To get there, we need to solve the challenge of scaling quantum processors, develop a runtime environment that delivers quantum calculations with increased speed and quality, and introduce a serverless programming model that lets quantum and classical processors work together frictionlessly.

But first: where did this journey begin? We put the first quantum computer on the cloud in 2016, and in 2017, we introduced an open source software development kit for programming these quantum computers, called Qiskit. We debuted the first integrated quantum computer system, called the IBM Quantum System One, in 2019, then in 2020 we released our development roadmap showing how we planned to mature quantum computers into a commercial technology.

IBM’s massive ‘Kookaburra’ quantum processor might land in 2025

Today’s classical supercomputers can do a lot. But because their calculations are limited to binary states of 0 or 1, they can struggle with enormously complex problems such as natural science simulations. This is where quantum computers, whose qubits can hold a superposition of 0 and 1 at the same time, might have an advantage.
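The “0 and 1 at the same time” intuition can be made concrete with a tiny, plain-Python sketch of a single qubit (no quantum SDK required). The Hadamard gate turns the definite state |0⟩ into an equal superposition, and the Born rule then assigns each measurement outcome a 50% probability:

```python
import math

# A single-qubit state as a list of two amplitudes: [amp_for_0, amp_for_1].
zero = [1.0, 0.0]  # the definite state |0>

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to (|0> + |1>)/sqrt(2)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

superposition = hadamard(zero)
# Born rule: the probability of each outcome is the squared amplitude.
probabilities = [amp ** 2 for amp in superposition]
# Each outcome, 0 or 1, occurs with probability 0.5
```

This is only the mathematics of one idealized qubit; real devices manipulate many entangled qubits and contend with noise.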

Last year, IBM debuted a 127-qubit computing chip and a structure called IBM Quantum System Two, intended to house components like the chandelier cryostat, wiring, and electronics for bigger chips down the line. These developments edged IBM ahead of other big tech companies like Google and Microsoft in the race to build the most powerful quantum computer. Today, the company is laying out its three-year plan to reach beyond 4,000 qubits by 2025 with a processor it is calling “Kookaburra.” Here’s how it plans to get there.


To get to its 2025 goal of a 4,000-plus-qubit chip, IBM has micro-milestones it wants to hit on both the hardware and software sides.

Cutting the carbon footprint of supercomputing in scientific research

Simon Portegies Zwart, an astrophysicist at Leiden University in the Netherlands, says more efficient coding is vital for making computing greener. For mathematician and physicist Loïc Lannelongue, the first step is for computer modellers to become more aware of their environmental impacts, which vary significantly depending on the energy mix of the country hosting the supercomputer. Lannelongue, who is based at the University of Cambridge, UK, has developed Green Algorithms, an online tool that lets researchers estimate the carbon footprint of their computing projects.
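The kind of estimate a tool like Green Algorithms produces can be sketched in a few lines. The function name and the example figures below (node power draw, data-centre PUE, grid carbon intensity) are illustrative assumptions, not values taken from the tool itself:

```python
def carbon_footprint_g(runtime_h, power_draw_w, pue, carbon_intensity_g_per_kwh):
    """Rough CO2-equivalent estimate, in grams, for one computing job.

    energy (kWh) = runtime (h) * power draw (W) * PUE / 1000
    carbon (g)   = energy (kWh) * grid carbon intensity (gCO2e/kWh)

    PUE (power usage effectiveness) scales the node's own draw up to
    account for cooling and other data-centre overheads.
    """
    energy_kwh = runtime_h * power_draw_w * pue / 1000
    return energy_kwh * carbon_intensity_g_per_kwh

# Hypothetical job: 100 h on a 300 W node, PUE 1.5, 250 gCO2e/kWh grid.
# Energy: 100 * 300 * 1.5 / 1000 = 45 kWh; carbon: 45 * 250 = 11250 g
footprint = carbon_footprint_g(100, 300, 1.5, 250)  # about 11.25 kg CO2e
```

The same structure makes Lannelongue’s point visible: run the identical job on a low-carbon grid (say 50 gCO2e/kWh instead of 250) and the footprint drops fivefold without touching the code.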

Physicists Developed a Superconductor Circuit Long Thought to Be Impossible

By exchanging a classical material for one with unique quantum properties, scientists have made a superconducting circuit that’s capable of feats long thought to be impossible.

The discovery, made by researchers from Germany, the Netherlands, and the US, overturns a century of thought on the nature of superconducting circuits, and how their currents can be tamed and put to practical use.

Low-waste, high-speed circuits based on the physics of superconductivity present a golden opportunity to take supercomputing technology to a whole new level.

Atomic Layer Etching Could Lead to Ever-More Powerful Microchips and Supercomputers

Over the course of almost 60 years, the information age has given the world the internet, smartphones, and lightning-fast computers. This has been made possible by roughly doubling the number of transistors that can be packed onto a computer chip every two years, resulting in billions of atomic-scale transistors fitting on a fingernail-sized device. At such lengths, individual atoms can be observed and counted.
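The doubling arithmetic behind that growth is easy to check: one doubling every two years, compounded over roughly 60 years, is 30 doublings, or about a billionfold increase in transistor count:

```python
def transistor_growth(years, doubling_period_years=2):
    """Factor by which transistor counts grow if they double
    once every `doubling_period_years` years (Moore's-law-style scaling)."""
    return 2 ** (years / doubling_period_years)

factor = transistor_growth(60)  # 2**30 = 1,073,741,824: about a billionfold
```

So a chip that started with a few thousand transistors in the 1960s lands, on this trend alone, in the billions today, which matches the figure in the passage above.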

Physical limit

With this doubling reaching its physical limit, the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) has joined industry efforts to prolong the process and find new techniques to make ever more powerful, efficient, and cost-effective chips. In the first PPPL research conducted under a Cooperative Research and Development Agreement (CRADA) with Lam Research Corp., a global producer of chip-making equipment, laboratory scientists used modeling to accurately predict a fundamental step in atomic-scale chip fabrication.

How to build brain-inspired neural networks based on light

Supercomputers are extremely fast, but they also use a lot of power. Neuromorphic computing, which takes the brain as a model for building fast and energy-efficient computers, can offer a viable and much-needed alternative. The technology offers a wealth of opportunities, for example in autonomous driving, interpreting medical images, edge AI, and long-haul optical communications. Electrical engineer Patty Stabile is a pioneer in exploring new brain- and biology-inspired computing paradigms. “TU/e combines all it takes to demonstrate the possibilities of photon-based neuromorphic computing for AI applications.”

Patty Stabile, an associate professor in the department of Electrical Engineering, was among the first to enter the emerging field of photonic neuromorphic computing.

“I had been working on a proposal to build photonic digital artificial neurons when in 2017 researchers from MIT published an article describing how they developed a small chip for carrying out the same algebraic operations, but in an analog way. That is when I realized that synapses based on analog technology were the way to go for running artificial intelligence, and I have been hooked on the subject ever since.”
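Schematically, the “algebraic operation” such an analog photonic chip performs in a single pass of light is a matrix–vector multiply, with the weight matrix encoded in the settings of on-chip interferometers. The plain-Python stand-in below shows only the mathematics, not the optics:

```python
def matvec(W, x):
    """y = W @ x, the weighted-sum (synapse) operation of a neural layer.

    On a photonic chip this whole computation happens in one pass of
    light, with W fixed in interferometer phase settings, rather than as
    a sequence of digital multiply-accumulate steps.
    """
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

# A 2x2 "layer" acting on a 2-element input signal:
y = matvec([[0.5, 0.5], [1.0, -1.0]], [2.0, 4.0])  # -> [3.0, -2.0]
```

The appeal of the analog approach is exactly that this sum happens physically, in parallel, at the speed and energy cost of light propagating through the circuit.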

Scientists Just Broke The Record For Calculating Pi, And Infinity Never Felt So Close

Circa 2021


Swiss researchers said Monday they had calculated the mathematical constant pi to a new world-record level of exactitude, hitting 62.8 trillion figures using a supercomputer.

“The calculation took 108 days and nine hours” using a supercomputer, the Graubuenden University of Applied Sciences said in a statement.

Its efforts were “almost twice as fast as the record Google set using its cloud in 2019, and 3.5 times as fast as the previous world record in 2020”, according to the university’s Center for Data Analytics, Visualization and Simulation.
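Record attempts at this scale typically rely on fast-converging series such as Chudnovsky’s. Purely as an illustration of the idea of computing pi to a chosen precision, here is a sketch using Machin’s formula with Python’s standard decimal module; it is far too slow for trillions of digits, but the principle (sum a series with guard digits, then round) is the same:

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) via its Taylor series, summed until terms are negligible."""
    eps = Decimal(10) ** -(digits + 5)
    power = Decimal(1) / x          # 1 / x**(2k+1)
    total = Decimal(0)
    k = 0
    while power > eps:
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term
        power /= x * x
        k += 1
    return total

def machin_pi(digits):
    """pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10  # extra guard digits during the sums
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    getcontext().prec = digits
    return +pi                       # unary plus rounds to the final precision

print(machin_pi(30))  # 30 significant digits of pi
```

The record computation differs only in scale: a much faster series, 63 trillion digits instead of 30, and 108 days on a supercomputer instead of milliseconds on a laptop.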