Physicists have transformed a decades-old technique for simplifying quantum equations into a reusable, user-friendly “conversion table” that works on a laptop and returns results within hours.
Quantum computing devices of increasing complexity are becoming more and more reliant on automated tools for design, optimization and operation. In this Review, the authors discuss recent developments in "AI for quantum", from hardware design and control to circuit compiling, quantum error correction and postprocessing, and consider the future potential of quantum-accelerated supercomputing, where AI, HPC and quantum technologies converge.
NVIDIA’s CEO made a surprise appearance on the Joe Rogan podcast, and one of the more interesting stories he told was how interest in NVIDIA’s first AI machine was almost nonexistent.
Jensen Huang appearing on the ‘Joe Rogan Experience’ was something I wasn’t expecting at all, but it seems that NVIDIA’s CEO has become a mainstream personality, not just on the AI front but for the entire tech world. Huang talked about various aspects of his life and NVIDIA’s journey over the years, but one of the more interesting statements was about how Team Green spent ‘billions’ creating the very first DGX-1 AI system, yet when Jensen took the machine to market, interest was ‘zero’, until Elon stepped up.
“And when I announced DGX-1, nobody in the world wanted it. I had no purchase orders, not one. Nobody wanted to buy it. Nobody wanted to be part of it. Except for Elon.”
Today, Mistral AI announced the Mistral 3 family of open-source multilingual, multimodal models, optimized across NVIDIA supercomputing and edge platforms.
Mistral Large 3 is a mixture-of-experts (MoE) model — instead of firing up every neuron for every token, it activates only the parts of the model with the most impact. The result is efficiency that delivers scale without waste and accuracy without compromise, making enterprise AI not just possible but practical.
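The sparse-activation idea behind MoE can be sketched with a toy top-k router. This is an illustrative NumPy sketch of the general technique, not Mistral's actual implementation; the function names and shapes here are assumptions for the example.

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route a token vector x through only the top_k highest-scoring
    experts (sparse activation), combining their outputs by softmax weight.
    The other experts are never evaluated, which is where the savings come from."""
    scores = x @ gate_w                        # one gating logit per expert
    top = np.argsort(scores)[-top_k:]          # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # softmax over the chosen experts only
    return sum(wi * experts[i](x) for i, wi in zip(top, w))
```

In a real MoE transformer the experts are feed-forward sublayers and the gate is learned jointly with them, but the routing step is the same: compute cheap per-expert scores, keep the top few, and skip the rest.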
Mistral AI’s new models deliver industry-leading accuracy and efficiency for enterprise AI. They will be available everywhere, from the cloud to the data center to the edge, starting Tuesday, Dec. 2.
Most of us first hear about the irrational number π (pi)—rounded off as 3.14, with an infinite number of decimal digits—in school, where we learn about its use in the context of a circle. More recently, scientists have developed supercomputers that can compute trillions of its digits.
Now, physicists at the Center for High Energy Physics (CHEP), Indian Institute of Science (IISc) have found that pure mathematical formulas used to calculate the value of pi 100 years ago have connections to the fundamental physics of today—showing up in theoretical models of percolation, turbulence, and certain aspects of black holes.
The research is published in the journal Physical Review Letters.
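One famous example of such century-old pi formulas is Srinivasa Ramanujan's 1914 series, shown below as an illustration of how fast these classical series converge; the paper's specific formulas may differ.

```python
import math
from decimal import Decimal, getcontext

def ramanujan_pi(terms=3, precision=50):
    """Approximate pi with Ramanujan's 1914 series:
    1/pi = (2*sqrt(2)/9801) * sum_k (4k)!(1103 + 26390k) / ((k!)^4 * 396^(4k)).
    Each additional term contributes roughly eight correct digits."""
    getcontext().prec = precision
    total = Decimal(0)
    for k in range(terms):
        num = Decimal(math.factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(math.factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        total += num / den
    # 2*sqrt(2) = sqrt(8); invert the series to recover pi itself
    return 1 / (Decimal(8).sqrt() / 9801 * total)
```

Even the single k = 0 term already matches pi to six decimal places, which is why series of this family underpin the record-setting digit computations mentioned above.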
More than 1 million Americans live with tremors, slowed movement and speech changes caused by Parkinson’s disease—a degenerative and currently incurable condition, according to the Parkinson’s Foundation and the Mayo Clinic. Beyond the emotional toll on patients and families, the disease also exerts a heavy financial burden. In California alone, researchers estimate that Parkinson’s costs the state more than $6 billion in health care expenses and lost productivity.
Scientists have long sought to understand the deeper brain mechanisms driving Parkinson’s symptoms. One long-standing puzzle involved an unusual surge of brain activity known as beta waves—electrical oscillations around 15 Hertz observed in patients’ motor control centers. Now, thanks to supercomputing resources provided by the U.S. National Science Foundation’s ACCESS program, researchers may have finally discovered what causes these waves to spike.
Using ACCESS allocations on the Expanse system at the San Diego Supercomputer Center—part of UC San Diego’s new School of Computing, Information, and Data Sciences—researchers with the Aligning Science Across Parkinson’s (ASAP) Collaborative Research Network modeled how specific brain cells malfunction in Parkinson’s disease. Their findings could pave the way for more targeted treatments.
As we expected, the “Vista” supercomputer that the Texas Advanced Computing Center installed last year as a bridge between the current “Stampede-3” and “Frontera” production systems and its future “Horizon” system coming next year was indeed a precursor of the architecture that TACC would choose for the Horizon machine.
What TACC does – and doesn’t do – matters because, as the flagship datacenter for academic supercomputing at the National Science Foundation, the center sets the pace for those HPC organizations that need to embrace AI and that have not only large jobs requiring an entire system to run (so-called capability-class machines) but also a wide diversity of smaller jobs that need to be stacked up and pushed through the system (making it also a capacity-class system). As the prior six major supercomputers installed at TACC aptly demonstrate, you can have the best of both worlds, although you do have to make different architectural choices (based on technology and economics) to accomplish what is arguably a tougher set of goals.
Some details of the Horizon machine were revealed at the SC25 supercomputing conference last week, which we have been mulling over, but there are still a lot of things that we don’t know. The Horizon that will be fired up in the spring of 2026 is a bit different than we expected, with the big change being a downshift from an expected 400 petaflops of peak FP64 performance to 300 petaflops. TACC has not explained the difference, but it might have something to do with the increasing costs of GPU-accelerated systems. As far as we know, the budget for the Horizon system, which was set in July 2024 and which includes facilities rental from Sabey Data Centers as well as other operational costs, is still $457 million. (We are attempting to confirm this as we write, but in the wake of SC25 and ahead of the Thanksgiving vacation, it is hard to reach people.)
The Mexican government will build a supercomputer with a processing capacity seven times greater than the current most powerful computer in Latin America, officials responsible for the project said Wednesday.
Named Coatlicue, after a goddess in Aztec mythology representing the source of power and life, the computer will have a processing capacity of 314 petaflops.
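Putting the two figures above together gives a quick sanity check: if Coatlicue's 314 petaflops is roughly seven times the region's current leader, the current top machine works out to about 45 petaflops. A back-of-the-envelope sketch:

```python
# Implied capacity of Latin America's current most powerful computer,
# taken from the article's "seven times greater" claim (an estimate,
# not an official figure for any specific machine).
coatlicue_pflops = 314
implied_current_leader = coatlicue_pflops / 7
print(round(implied_current_leader, 1))  # prints 44.9 (petaflops)
```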
“We want it to be a public supercomputer, a supercomputer for the people,” President Claudia Sheinbaum told reporters.
From the rise of numerical and symbolic computing to the future of AI, this talk traces five decades of breakthroughs and the challenges ahead.
Bill is the author of Berkeley UNIX, cofounder of Sun Microsystems, author of “Why the Future Doesn’t Need Us” (Wired 2000), ex-cleantech VC at Kleiner Perkins, and investor in and unpaid advisor to Nodra.AI.
Talk Details
50 Years of Advancements: Computing and Technology 1975–2025 (and beyond)
I came to UC Berkeley CS in 1975 as a graduate student expecting to do computer theory — Berkeley CS didn’t have a proper departmental computer, and I was tired of coding, having written a lot of numerical code for early supercomputers.
But it’s hard to make predictions, especially about the future. Berkeley soon had a VAX superminicomputer, I installed a port of UNIX and kept upgrading the operating system, and the internet and microprocessor booms beckoned.
Supercomputers are rewriting our understanding of Enceladus’ icy plumes and the mysterious ocean that may harbor life beneath them. Cutting-edge simulations show that Enceladus’ plumes are losing 20–40% less mass than earlier estimates suggested. The new models provide sharper insights into subsurface conditions that future landers may one day probe directly.
In the 17th century, astronomers Christiaan Huygens and Giovanni Cassini pointed some of the earliest telescopes at Saturn and made a surprising discovery. The bright structures around the planet were not solid extensions of the world itself, but separate rings formed from many thin, nested arcs.
Centuries later, NASA’s Cassini-Huygens (Cassini) mission carried that exploration into the space age. Starting in 2005, the spacecraft returned a flood of detailed images that reshaped scientists’ view of Saturn and its moons. One of the most dramatic findings came from Enceladus, a small icy moon where towering geysers shot material into space, creating a faint sub-ring around Saturn made of the ejected debris.