
Not many devices in the datacenter have been etched with the Intel 4 process, which is the chip maker’s spin on 7 nanometer extreme ultraviolet (EUV) lithography. But Intel’s Loihi 2 neuromorphic processor is one of them, and Sandia National Laboratories is firing up a supercomputer with 1,152 of them interlinked to create what Intel is calling the largest neuromorphic system ever assembled.

With Nvidia’s top-end “Blackwell” GPU accelerators now pushing up to 1,200 watts in their peak configurations and requiring liquid cooling, and with other accelerators no doubt following as their sockets inevitably grow while Moore’s Law scaling for chip making slows, this is a good time to take a step back and see what can be done with a reasonably scaled neuromorphic system. Such a system not only has circuits that act more like the neurons in real brains, but also burns orders of magnitude less power than the XPUs commonly used in the datacenter for all kinds of compute.
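To make “circuits that act more like real neurons” concrete, here is a minimal sketch of the leaky integrate-and-fire model that spiking chips such as Loihi 2 implement in silicon. All parameter values below are illustrative placeholders, not Intel’s actual hardware constants.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane voltage leaks
# toward rest, integrates input current, and emits a discrete spike when it
# crosses threshold. Parameters are illustrative, not Loihi 2's.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate a current trace; return membrane voltages and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += (dt / tau) * (v_rest - v) + dt * i_in
        if v >= v_threshold:          # fire and reset, like a real neuron
            spikes.append(t * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spikes

# A constant input drives periodic spiking. Communicating sparse events
# (spikes) instead of continuous values is one reason neuromorphic hardware
# can be so power-efficient.
voltages, spikes = simulate_lif(np.full(200, 0.08))
print(f"{len(spikes)} spikes at times: {spikes[:5]} ...")
```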

An intricate simulation performed by UT Southwestern Medical Center researchers using one of the world’s most powerful supercomputers sheds new light on how proteins called SNAREs cause biological membranes to fuse.

Their findings, reported in the Proceedings of the National Academy of Sciences, suggest a new mechanism for this ubiquitous process and could eventually lead to new treatments for conditions in which membrane fusion is thought to go awry.

“Biology textbooks say that SNAREs bring membranes together to cause fusion, and many people were happy with that explanation. But not me, because membranes brought into contact normally do not fuse. Our simulation goes deeper to show how this important process takes place,” said study leader Jose Rizo-Rey (“Josep Rizo”), Ph.D., Professor of Biophysics, Biochemistry, and Pharmacology at UT Southwestern.

NVIDIA is all set to aid Japan in building the nation’s hybrid quantum supercomputer, fueled by the immense power of its HPC & AI GPUs.

Japan To Rapidly Progress In Quantum and AI Computing Segments Through Large-Scale Developments With The Help of NVIDIA’s AI & HPC Infrastructure

Nikkei Asia reports that Japan’s National Institute of Advanced Industrial Science and Technology (AIST) is building a quantum supercomputer to excel in this particular segment in the years ahead. The new project is called ABCI-Q and will be entirely powered by NVIDIA’s accelerated and quantum computing platforms, which suggests the system will deliver both high performance and high efficiency. The Japanese supercomputer will be built in collaboration with Fujitsu as well.

Tesla’s Dojo supercomputer represents a significant investment and commitment to innovation in the field of AI computation, positioning Tesla as a key player in shaping the future of neural net hardware.

Questions to inspire discussion.

What is Tesla’s Dojo supercomputer?
—Tesla’s Dojo supercomputer embodies an innovative approach to training neural networks, one that could potentially surpass Nvidia’s hardware in AI computation.

Today is the ribbon-cutting ceremony for the “Venado” supercomputer, which was hinted at back in April 2021 when Nvidia announced its plans for its first datacenter-class Arm server CPU and which was talked about in some detail – but not really enough to suit our taste for speeds and feeds – back in May 2022 by the folks at Los Alamos National Laboratory where Venado is situated.

Now we can finally get more details on the Venado system and a little more insight into how Los Alamos will put it to work, and more specifically, why a better balance between memory bandwidth and the compute that depends upon it is perhaps more important to this lab than to other HPC centers of the world.

Los Alamos was founded back in 1943 as the home of the Manhattan Project that created the world’s first nuclear weapons. We did not have supercomputers back then, of course, but plenty of very complex calculations have always been done at Los Alamos; sometimes by hand, sometimes by tabulators from IBM that used punch cards to store and manipulate data – an early form of simulation. The first digital computer to do such calculations at Los Alamos was called MANIAC and was installed in 1952; it could perform 10,000 operations per second and ran Monte Carlo simulations, which use randomness to simulate what are actually deterministic processes.
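As a toy illustration of the Monte Carlo idea, the sketch below estimates π by sampling random points; it is a classroom example of the method pioneered at Los Alamos, not a reconstruction of the simulations MANIAC actually ran.

```python
import random

# Monte Carlo estimate of pi: sample random points in the unit square and
# count the fraction that land inside the quarter circle. The ratio of areas
# is pi/4, so randomness recovers a perfectly deterministic quantity.
def estimate_pi(n_samples: int, seed: int = 42) -> float:
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

for n in (1_000, 100_000, 10_000_000):
    print(f"n={n:>10,}: pi ≈ {estimate_pi(n):.5f}")
# The error shrinks like 1/sqrt(n), the hallmark of Monte Carlo convergence.
```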