
“The projects running on Aurora represent some of the most ambitious and innovative science happening today,” said Katherine Riley, ALCF director of science. “From modeling extremely complex physical systems to processing huge amounts of data, Aurora will accelerate discoveries that deepen our understanding of the world around us.”

On the hardware side, Aurora clearly impresses. The supercomputer comprises 166 racks, each holding 64 blades, for a total of 10,624 blades. Each blade contains two Xeon Max processors with 64 GB of HBM2E memory onboard and six Intel Data Center Max ‘Ponte Vecchio’ GPUs, all cooled by a specialized liquid-cooling system.

In total, Aurora has 21,248 CPUs with over 1.1 million high-performance x86 cores, 19.9 PB of DDR5 memory, and 1.36 PB of HBM2E memory attached to the CPUs. It also features 63,744 GPUs optimized for AI and HPC, equipped with 8.16 PB of HBM2E memory. Aurora uses 1,024 nodes with solid-state drives for storage, offering 220 PB of total capacity and 31 TB/s of bandwidth. The machine relies on HPE’s Shasta supercomputer architecture with Slingshot interconnects.
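Most of those headline totals follow directly from the per-blade figures, as the short Python sketch below illustrates. The 128 GB of HBM2E per Ponte Vecchio GPU and the 52 cores per Xeon Max CPU are assumptions not stated above, included only to show how the aggregate numbers line up.

```python
# Rough sanity check of Aurora's headline totals from the per-blade figures.
# The per-GPU HBM capacity and per-CPU core count are assumptions (not stated
# in the text above); everything else is taken directly from it.

RACKS = 166
BLADES_PER_RACK = 64
CPUS_PER_BLADE = 2
GPUS_PER_BLADE = 6
HBM_PER_CPU_GB = 64     # stated: 64 GB of HBM2E per Xeon Max processor
HBM_PER_GPU_GB = 128    # assumed capacity per Ponte Vecchio GPU
CORES_PER_CPU = 52      # assumed Xeon Max core count

blades = RACKS * BLADES_PER_RACK           # 10,624 blades
cpus = blades * CPUS_PER_BLADE             # 21,248 CPUs
gpus = blades * GPUS_PER_BLADE             # 63,744 GPUs
cpu_cores = cpus * CORES_PER_CPU           # ~1.1 million x86 cores
cpu_hbm_pb = cpus * HBM_PER_CPU_GB / 1e6   # ~1.36 PB of HBM2E on the CPUs
gpu_hbm_pb = gpus * HBM_PER_GPU_GB / 1e6   # ~8.16 PB of HBM2E on the GPUs

print(blades, cpus, gpus, cpu_cores)
print(f"CPU HBM ~ {cpu_hbm_pb:.2f} PB, GPU HBM ~ {gpu_hbm_pb:.2f} PB")
```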

Scientists have created over a million simulated cosmic images using the power of supercomputers to anticipate the capabilities of an upcoming NASA space telescope.


Spacecraft powered by electric propulsion could soon be better protected against their own exhaust, thanks to new supercomputer simulations.

Electric propulsion is a more efficient alternative to traditional chemical rockets, and it is seeing increasing use on space missions. It debuted on prototypes aboard NASA’s Deep Space 1 and the European Space Agency’s SMART-1, launched in 1998 and 2003 respectively, and has since flown on flagship science missions such as NASA’s Dawn and Psyche missions to the asteroid belt. There are even plans to use electric propulsion on NASA’s Lunar Gateway space station.

El Capitan can reach a peak performance of 2.746 exaFLOPS, making it the National Nuclear Security Administration’s first exascale supercomputer. It’s the world’s third exascale machine after the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee and the Aurora supercomputer at the Argonne Leadership Computing Facility in Illinois.

The world’s fastest supercomputer is powered by more than 11 million CPU and GPU cores integrated into 43,000+ AMD Instinct MI300A accelerators. Each MI300A APU comprises an EPYC Genoa 24-core CPU clocked at 1.8 GHz and a CDNA3 GPU integrated onto a single organic package, along with 128 GB of HBM3 memory.
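The core count can be reconciled with the accelerator count by a quick back-of-the-envelope calculation, sketched below. It assumes roughly 43,800 APUs and the convention of counting each of the MI300A’s 228 GPU compute units as a core alongside its 24 CPU cores; both are assumptions rather than figures stated above.

```python
# Back-of-the-envelope check of El Capitan's ~11 million "cores".
# APUS and GPU_CUS_PER_APU are assumptions; the text only says "43,000+"
# accelerators and gives 24 CPU cores per MI300A.

APUS = 43_808               # assumed total number of MI300A APUs
CPU_CORES_PER_APU = 24      # stated: EPYC Genoa 24-core CPU per APU
GPU_CUS_PER_APU = 228       # assumed CDNA3 compute units counted as "cores"

total = APUS * (CPU_CORES_PER_APU + GPU_CUS_PER_APU)
print(f"{total:,} CPU + GPU cores")   # roughly 11 million
```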

As the capabilities of generative AI models have grown, you’ve probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.

More recently, generative AI has shown potential in helping chemists and biologists explore static molecules, like proteins and DNA. Models like AlphaFold can predict molecular structures to accelerate drug discovery, and the MIT-assisted “RFdiffusion,” for example, can help design new proteins.

One challenge, though, is that molecules are constantly moving and jiggling, which is important to model when constructing new proteins and drugs. Simulating these motions on a computer using physics, a technique known as molecular dynamics, can be very expensive, requiring billions of time steps on supercomputers.
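To make that cost concrete, here is a toy molecular-dynamics loop: a velocity Verlet integrator for a handful of Lennard-Jones particles in reduced units. It is a minimal illustrative sketch, not any of the models discussed above; real protein simulations track far more atoms with femtosecond-scale steps, which is where the billions of time steps come from.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces between all particles (reduced units)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            inv6 = (sigma**2 / d2) ** 3
            fij = 24 * eps * (2 * inv6**2 - inv6) / d2 * r  # force on i from j
            forces[i] += fij
            forces[j] -= fij
    return forces

# Eight particles on a small cubic lattice, initially at rest.
grid = np.arange(2) * 1.5
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float)
vel = np.zeros_like(pos)
dt = 1e-3                       # time step in reduced units

forces = lj_forces(pos)
for step in range(10_000):      # production MD runs take billions of such steps
    pos += vel * dt + 0.5 * forces * dt**2       # update positions
    new_forces = lj_forces(pos)                  # recompute forces (the costly part)
    vel += 0.5 * (forces + new_forces) * dt      # update velocities
    forces = new_forces
```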

At Argonne National Laboratory, scientists have leveraged the Frontier supercomputer to create an unprecedented simulation of the universe, encompassing a span of 10 billion light years and incorporating complex physics models.

This monumental achievement allows for new insights into galaxy formation and cosmic evolution, showcasing the profound capabilities of exascale computing.
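At a vastly reduced scale, the kind of computation behind such a run can be illustrated with a direct-summation gravitational N-body integrator, sketched below. This is a generic toy example rather than the Argonne team’s code; production cosmological simulations evolve trillions of particles using tree and particle-mesh methods alongside additional physics.

```python
import numpy as np

G = 1.0           # gravitational constant in arbitrary units
SOFTENING = 0.05  # keeps forces finite when particles pass close together

def accelerations(pos, mass):
    """Pairwise gravitational accelerations by direct O(N^2) summation."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        r = pos - pos[i]                                   # vectors to all particles
        d3 = (np.sum(r**2, axis=1) + SOFTENING**2) ** 1.5  # softened |r|^3
        acc[i] = G * np.sum(mass[:, None] * r / d3[:, None], axis=0)
    return acc

rng = np.random.default_rng(1)
n = 256
pos = rng.normal(size=(n, 3))    # random initial positions
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
dt = 0.01

acc = accelerations(pos, mass)
for step in range(100):          # leapfrog (kick-drift-kick) time stepping
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos, mass)
    vel += 0.5 * dt * acc
```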


Researchers have pioneered the use of parallel computing on graphics cards to simulate acoustic turbulence. This type of simulation, which previously required a supercomputer, can now be performed on a standard personal computer. The discovery will make weather forecasting models more accurate while enabling the use of turbulence theory in various fields of physics, such as astrophysics, to calculate the trajectories and propagation speeds of acoustic waves in the universe. The research was published in Physical Review Letters.
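The move from supercomputer to desktop graphics card rests on the fact that each grid point in such a simulation is updated independently, which maps naturally onto data-parallel GPU hardware. The NumPy sketch below illustrates that idea with a vectorized finite-difference step for the linear 1D acoustic wave equation; it is not the authors’ method (which treats fully nonlinear wave turbulence), and swapping NumPy for CuPy would run the same array operations on a GPU.

```python
import numpy as np  # with CuPy installed, `import cupy as np` runs this on a GPU

# Vectorized leapfrog step for the 1D acoustic wave equation u_tt = c^2 u_xx.
# Illustrative only: the published work simulates fully nonlinear acoustic
# wave turbulence, not this linear toy problem.

nx = 4096
c = 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                 # satisfies the CFL stability condition
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, nx)
u_prev = np.exp(-((x - 0.5) ** 2) / 0.001)   # initial Gaussian pressure pulse
u = u_prev.copy()

for step in range(2000):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)   # periodic second derivative
    u_next = 2 * u - u_prev + r2 * lap             # leapfrog time update
    u_prev, u = u, u_next
```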

Turbulence is the complex chaotic behavior of fluids, gases or nonlinear waves in various physical systems. For example, turbulence at the ocean surface can be caused by wind or wind-drift currents, while turbulence of laser radiation in optics occurs as light is scattered by lenses. Turbulence can also occur in sound waves that propagate chaotically in certain media, such as superfluid helium.

In the 1970s, Soviet scientists proposed that turbulence occurs when sound waves deviate from equilibrium and reach large amplitudes. The theory of wave turbulence applies to many other wave systems, including magnetohydrodynamic waves in the ionospheres of stars and giant planets, and perhaps even in the early universe. Until recently, however, it was nearly impossible to predict the propagation patterns of nonlinear (i.e., chaotically moving) acoustic and other waves because of the high computational complexity involved.