
Some of the problems that defeat even today's most powerful supercomputers are as simple to state as factoring a large number into primes. Others are among the most important facing Earth today, like quickly modeling complex molecules for drugs to treat emerging diseases, and developing more efficient materials for carbon capture or batteries.
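To see why factoring belongs on that list, consider a minimal sketch (our illustration, not from the article): the task is trivial to state and to code as brute-force trial division, but the running time grows roughly with the square root of the number, which becomes hopeless for the hundreds-of-digits integers used in cryptography.

```python
# Minimal sketch: trial-division factoring. Easy to state and to write,
# but the loop runs up to sqrt(n) times, so it is infeasible for the
# very large numbers used in modern cryptography.
def prime_factors(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(2024))   # [2, 2, 2, 11, 23]
```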

However, in the next decade, we expect a new form of supercomputing to emerge, unlike anything that has come before. Not only could it tackle these problems, but we hope it will do so at a fraction of the cost, footprint, time, and energy. This new supercomputing paradigm will incorporate an entirely new computing architecture, one that mirrors the strange behavior of matter at the atomic level—quantum computing.

For decades, quantum computers have struggled to reach commercial viability. The quantum behaviors that power these computers are extremely sensitive to environmental noise, and the machines are difficult to scale to sizes large enough for useful calculations. But the last decade has brought key advances in hardware, along with theoretical progress in handling noise. These advances have allowed quantum computers to finally reach a performance level where their classical counterparts struggle to keep up, at least for some specific calculations.

The operation of a quantum computer relies on encoding and processing information in the form of quantum bits, or qubits—defined by two states of a quantum system such as an electron or a photon. Unlike the binary bits used in classical computers, qubits can exist in a combination of zero and one simultaneously—in principle allowing them to perform certain calculations exponentially faster than today’s largest supercomputers.
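That description can be made concrete with a few lines of linear algebra. The sketch below (our illustration, not from the article) represents a single qubit as a two-component complex vector in NumPy and applies a Hadamard gate to put it into an equal superposition of zero and one; measuring it then yields each outcome about half the time.

```python
# Minimal sketch: a single qubit as a 2-component complex state vector,
# put into an equal superposition with a Hadamard gate.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

psi = H @ ket0                # superposition (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2      # Born rule: measurement probabilities

# Simulated measurements: each shot collapses to 0 or 1, ~50% of the time each.
samples = np.random.default_rng(0).choice([0, 1], size=1000, p=probs)
print(probs)            # approximately [0.5 0.5]
print(samples.mean())   # close to 0.5
```

Of course, a single qubit carries no speedup by itself; the promise comes from many qubits whose joint state space grows exponentially with their number.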

Even the best AI large language models (LLMs) fail dramatically when it comes to simple logical questions. This is the conclusion of researchers from the Jülich Supercomputing Center (JSC), the School of Electrical and Electronic Engineering at the University of Bristol and the LAION AI laboratory.

In their paper posted to the arXiv preprint server, titled “Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models,” the scientists attest to a “dramatic breakdown of function and reasoning capabilities” in the tested state-of-the-art LLMs and suggest that although language models have the latent ability to perform basic reasoning, they cannot access it robustly and consistently.
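For context, the "Alice in Wonderland" (AIW) problem at the heart of the paper is a one-sentence common-sense question with a deterministic answer. The sketch below is our paraphrase of that task as described in the paper, not the authors' code; it simply builds such a prompt together with its ground truth, which the tested models nonetheless frequently get wrong.

```python
# Sketch of an AIW-style task (paraphrased from the paper's description):
# a simple question whose correct answer can be computed exactly.
def aiw_prompt(n_brothers: int, m_sisters: int) -> tuple[str, int]:
    prompt = (
        f"Alice has {n_brothers} brothers and she also has {m_sisters} sisters. "
        "How many sisters does Alice's brother have?"
    )
    # Each brother has all of Alice's sisters, plus Alice herself.
    ground_truth = m_sisters + 1
    return prompt, ground_truth

prompt, answer = aiw_prompt(3, 2)
print(prompt)   # "Alice has 3 brothers and she also has 2 sisters. ..."
print(answer)   # 3
```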

The authors of the study—Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti and Jenia Jitsev—call on “the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of the current generation of LLMs.” They also call for the development of standardized benchmarks to uncover weaknesses in language models related to basic reasoning capabilities, as current tests have apparently failed to reveal this serious failure.

Researchers from the RIKEN Center for Computational Science (Japan) and the Max Planck Institute for Evolutionary Biology (Germany) have published new findings on how social norms evolve over time. They simulated how norms promote different social behavior, and how the norms themselves come and go. Because of the enormous number of possible norms, these simulations were run on RIKEN’s Fugaku, one of the fastest supercomputers worldwide.

As we have alluded to numerous times when talking about the next “AI” trade, data centers will be the “factories of the future” in the age of AI.

That’s the contention of Chris Miller, the author of Chip War, who penned a recent opinion column for the Financial Times noting that ‘chip wars’ could very soon become ‘cloud wars’.

He points out that the strategic use of high-powered computing dates back to the Cold War, when the US allowed the USSR limited access to supercomputers for weather forecasting, but not for nuclear simulations.