
Circa 2020 o.o…


Google’s Sycamore was, for a time, the fastest quantum computer in the world, with 54 qubits of quantum computational power. Google declared quantum supremacy with Sycamore in October 2019 by running a calculation in 200 seconds that would have taken the world’s fastest supercomputer 10,000 years to execute. (In case you’re wondering, quantum supremacy is when a quantum computer completes a task that no classical supercomputer could finish in any feasible amount of time.)

The research team at the University of Science and Technology of China ran a similar comparison between its quantum calculation and a classical simulation. China’s top quantum computer, dubbed Jiuzhang, completed in three minutes a calculation that would have taken TaihuLight, the country’s fastest supercomputer and the third fastest in the world, 2 billion years to complete.

Google and China did not run the same calculations on their systems, so a direct comparison is impossible, but the research team estimates that its quantum computer is 100 trillion times faster than Google’s.
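Purely for context, here is a back-of-envelope sketch of the classical-to-quantum speedup factors implied by the runtimes quoted above. Only the figures already in the text are used; the year-to-second conversion is the sole assumption.

```python
# Back-of-envelope speedup factors implied by the runtimes quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Sycamore: 200 s on the quantum processor vs. an estimated 10,000 years classically.
sycamore_speedup = (10_000 * SECONDS_PER_YEAR) / 200

# Jiuzhang: ~3 minutes on the quantum device vs. an estimated 2 billion years on TaihuLight.
jiuzhang_speedup = (2e9 * SECONDS_PER_YEAR) / (3 * 60)

print(f"Sycamore speedup over classical simulation: ~{sycamore_speedup:.1e}x")
print(f"Jiuzhang speedup over classical simulation: ~{jiuzhang_speedup:.1e}x")
```

Note that the two factors describe different benchmarks (random circuit sampling versus Gaussian boson sampling), which is part of why a direct comparison between the machines is impossible.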

A team of physicists from the Harvard-MIT Center for Ultracold Atoms and other universities has developed a special type of quantum computer known as a programmable quantum simulator capable of operating with 256 quantum bits, or “qubits.”

The system marks a major step toward building large-scale quantum machines that could be used to shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in finance and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.
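To give a rough sense of where that processing power comes from: describing an n-qubit state exactly on a classical machine requires 2^n complex amplitudes. The sketch below only illustrates that exponential growth (the Harvard-MIT device is an analog simulator, not a general-purpose gate machine); the 16-bytes-per-amplitude figure assumes double-precision complex numbers.

```python
# Memory needed to store a full n-qubit state vector classically,
# assuming one complex128 amplitude (16 bytes) per basis state.
BYTES_PER_AMPLITUDE = 16

for n in (30, 54, 256):
    amplitudes = 2 ** n
    memory_gb = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n:3d} qubits -> 2^{n} = {amplitudes:.2e} amplitudes "
          f"(~{memory_gb:.2e} GB to store exactly)")
```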

“This moves the field into a new domain where no one has ever been to thus far,” said Mikhail Lukin, the George Vasmer Leverett Professor of Physics, co-director of the Harvard Quantum Initiative, and one of the senior authors of the study published today in the journal Nature. “We are entering a completely new part of the quantum world.”

I doubt they were the first to use artificial intelligence in war, but it does discuss the AI technologies used in the recent conflict.

They used AI technology to identify targets for air strikes, specifically to counter the extensive tunnel network of their opponents.



Research led by the University of Kent and the STFC Rutherford Appleton Laboratory has resulted in the discovery of a new rare topological superconductor, LaPt3P. This discovery may be of huge importance to the future operation of quantum computers.

Superconductors are vital materials able to conduct electricity without any resistance when cooled below a certain temperature, making them highly desirable in a society needing to reduce its energy consumption.

They manifest quantum properties on the scale of everyday objects, making them highly attractive candidates for building computers that use quantum physics to store data and perform computing operations, and can vastly outperform even the best supercomputers in certain tasks. As a result, there is an increasing demand from leading tech companies like Google, IBM and Microsoft to make quantum computers on an industrial scale using superconductors.

University of Innsbruck researchers have developed a method that makes previously hard-to-access properties of quantum systems measurable. The new method for determining the quantum state in quantum simulators reduces the number of necessary measurements and makes working with quantum simulators much more efficient.

In a few years, a new generation of quantum simulators could provide insights that would not be possible using simulations on conventional supercomputers. Quantum simulators are capable of processing a great amount of information, since they quantum-mechanically superimpose an enormously large number of bit states. For this reason, however, it also proves difficult to read this information out of the quantum simulator. In order to be able to reconstruct the quantum state, a very large number of individual measurements are necessary. The method used to read out the quantum state of a quantum simulator is called quantum state tomography.

“Each measurement provides a ‘cross-sectional image’ of the quantum state. You then put these cross-sectional images together to form the complete quantum state,” explains theoretical physicist Christian Kokail from Peter Zoller’s team at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences and the Department of Experimental Physics at the University of Innsbruck. The number of measurements needed in the lab increases very rapidly with the size of the system. “The number of measurements grows exponentially with the number of qubits,” the physicist says. The Innsbruck researchers have now succeeded in developing a much more efficient method for quantum simulators.
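As a rough illustration of that exponential scaling: fully reconstructing an n-qubit density matrix by brute-force tomography means estimating on the order of 4^n − 1 independent real parameters (for example, the expectation values of all non-identity Pauli strings). The sketch below only tabulates that brute-force count; it says nothing about the Innsbruck team's more efficient method.

```python
# Brute-force quantum state tomography: an n-qubit density matrix is fixed by
# 4**n - 1 independent real parameters, e.g. the expectation values of all
# non-identity n-fold Pauli strings.
for n in (2, 4, 8, 16, 32):
    parameters = 4 ** n - 1
    print(f"{n:2d} qubits -> {parameters:,} expectation values to estimate")
```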

Tesla has unveiled its new supercomputer, which is already the fifth most powerful in the world, and it is the predecessor of Tesla’s upcoming Dojo supercomputer.

It is being used to train the neural nets powering Tesla’s Autopilot and upcoming self-driving AI.

Over the last few years, Tesla has had a clear focus on computing power both inside and outside its vehicles.

Circa 2019


As quantum computing enters the industrial sphere, questions about how to manufacture qubits at scale are becoming more pressing. Here, Fernando Gonzalez-Zalba, Tsung-Yeh Yang and Alessandro Rossi explain why decades of engineering may give silicon the edge.

In the past two decades, quantum computing has evolved from a speculative playground into an experimental race. The drive to build real machines that exploit the laws of quantum mechanics, and to use such machines to solve certain problems much faster than is possible with traditional computers, will have a major impact in several fields. These include speeding up drug discovery by efficiently simulating chemical reactions; better uses of “big data” thanks to faster searches in unstructured databases; and improved weather and financial-market forecasts via smart optimization protocols.

We are still in the early stages of building these quantum information processors. Recently, a team at Google reportedly demonstrated a quantum machine that outperforms classical supercomputers, although this so-called “quantum supremacy” is expected to be too limited for useful applications. However, this is an important milestone in the field, a testament to the fact that progress has become substantial and fast paced. The prospect of significant commercial revenues has now attracted the attention of large computing corporations. By channelling their resources into collaborations with academic groups, these firms aim to push research forward at a faster pace than either sector could accomplish alone.

“These are novel living machines. They are not a traditional robot or a known species of animals. It is a new class of artifacts: a living and programmable organism,” says Joshua Bongard, an expert in computer science and robotics at the University of Vermont (UVM) and one of the leaders of the research.

As the scientist explains, these living bots do not look like traditional robots: they do not have shiny gears or robotic arms. Rather, they look more like a tiny blob of pink meat in motion, a biological machine that researchers say can accomplish things traditional robots cannot.

Xenobots are synthetic organisms designed automatically by a supercomputer to perform a specific task, using a process of trial and error (an evolutionary algorithm), and are built by a combination of different biological tissues.
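At its core, the evolutionary algorithm referred to above is a loop of random variation and selection. The toy sketch below is not the UVM design pipeline (which evolved simulated cell configurations on a supercomputer); it only illustrates the trial-and-error structure, using a stand-in fitness function.

```python
import random

# Toy evolutionary algorithm: evolve a vector of numbers toward a target design.
# Stand-in for the real pipeline, which scored simulated cell configurations.
TARGET = [0.0] * 10

def fitness(candidate):
    # Lower is better: squared distance from the target design.
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2, scale=0.5):
    # Randomly perturb some entries of the candidate.
    return [c + random.gauss(0, scale) if random.random() < rate else c
            for c in candidate]

# Random initial population; each generation, keep the best half and mutate it.
population = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness)
    survivors = population[:10]
    population = survivors + [mutate(p) for p in survivors]

best = min(population, key=fitness)
print(f"Best fitness after 200 generations: {fitness(best):.4f}")
```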

Since the DeepSpeed optimization library was introduced last year, it has rolled out numerous novel optimizations for training large AI models—improving scale, speed, cost, and usability. As large models have quickly evolved over the last year, so too has DeepSpeed. Whether enabling researchers to create the 17-billion-parameter Microsoft Turing Natural Language Generation (Turing-NLG) with state-of-the-art accuracy, achieving the fastest BERT training record, or supporting 10x larger model training using a single GPU, DeepSpeed continues to tackle challenges in AI at Scale with the latest advancements for large-scale model training. Now, the novel memory optimization technology ZeRO (Zero Redundancy Optimizer), included in DeepSpeed, is undergoing a further transformation of its own. The improved ZeRO-Infinity offers the system capability to go beyond the GPU memory wall and train models with tens of trillions of parameters, an order of magnitude bigger than state-of-the-art systems can support. It also offers a promising path toward training 100-trillion-parameter models.
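For a sense of why going beyond GPU memory matters at this scale, here is a back-of-envelope estimate of training-state memory, assuming the roughly 16 bytes per parameter (fp16 weights and gradients plus fp32 Adam states) used in the ZeRO papers' analysis and an assumed 80 GB accelerator; the exact accounting in DeepSpeed differs, so treat the numbers as orders of magnitude only.

```python
# Rough training-state memory for mixed-precision Adam training,
# assuming ~16 bytes per parameter (fp16 weights + fp16 gradients
# + fp32 weights, momentum and variance). Assumed GPU size: 80 GB.
BYTES_PER_PARAM = 16
GPU_MEMORY_GB = 80

for params in (17e9, 1e12, 10e12, 100e12):  # Turing-NLG scale up to 100 trillion
    total_gb = params * BYTES_PER_PARAM / 1e9
    gpus_needed = total_gb / GPU_MEMORY_GB
    print(f"{params / 1e9:>9,.0f}B parameters -> ~{total_gb:>13,.0f} GB of training state "
          f"(~{gpus_needed:>8,.0f} GPUs' worth of memory)")
```

Numbers like these are why ZeRO-Infinity pools CPU and NVMe memory alongside GPU memory rather than relying on GPU memory alone.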

ZeRO-Infinity at a glance: ZeRO-Infinity is a novel deep learning (DL) training technology for scaling model training, from a single GPU to massive supercomputers with thousands of GPUs. It powers unprecedented model sizes by leveraging the full memory capacity of a system, concurrently exploiting all heterogeneous memory (GPU, CPU, and Non-Volatile Memory express or NVMe for short). Learn more in our paper, “ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.” The highlights of ZeRO-Infinity include: