
Biological materials are made of individual components, including tiny motors that convert fuel into motion. These motors create patterns of movement, and the material shapes itself through coherent flows driven by the constant consumption of energy. Such continuously driven materials are called active matter.

The mechanics of cells and tissues can be described by active matter theory, a scientific framework for understanding the shape, flow, and form of living materials. The theory consists of many challenging mathematical equations.
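The article does not spell those equations out, but as illustrative background, a minimal hydrodynamic sketch of an active fluid (in the spirit of standard active gel theory; the symbols below are generic, not taken from the paper) couples a force balance to an active stress generated by the molecular motors:

$$
\partial_j \sigma_{ij} = 0, \qquad
\sigma_{ij} = \underbrace{2\eta\, v_{ij}}_{\text{viscous}} + \underbrace{\zeta\,\Delta\mu\left(p_i p_j - \tfrac{1}{3}\delta_{ij}\right)}_{\text{active}},
$$

where $\eta$ is the viscosity, $v_{ij}$ the strain rate, $p_i$ the local orientation of the active components, and $\zeta\,\Delta\mu$ the strength of the chemical driving. Solving coupled nonlinear equations of this kind on curved, deforming surfaces is what makes realistic scenarios computationally hard.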

Scientists from the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, the Center for Systems Biology Dresden (CSBD), and the TU Dresden have now developed an algorithm, implemented in an open-source supercomputer code, that can for the first time solve the equations of active matter theory in realistic scenarios.

Undeterred after three decades of looking, and with some assistance from a supercomputer, mathematicians have finally discovered a new example of a special integer called a Dedekind number.

Only the ninth of its kind, or D(9), it is calculated to equal 286 386 577 668 298 411 128 469 151 667 598 498 812 366, if you’re updating your own records. This 42-digit monster follows the 23-digit D(8), discovered in 1991.

Grasping the concept of a Dedekind number is difficult for non-mathematicians, let alone working one out. In fact, the calculations involved are so complex and involve such huge numbers that it wasn’t certain D(9) would ever be discovered.
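For context, the n-th Dedekind number D(n) counts the monotone Boolean functions of n variables. A minimal brute-force sketch (feasible only for tiny n; D(9) required vastly more sophisticated mathematics plus supercomputer time) reproduces the first known values:

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by brute force.

    There are 2**(2**n) candidate truth tables, so this is only
    feasible for n <= 4; D(9) needed far cleverer techniques.
    """
    points = list(product([0, 1], repeat=n))          # all 2**n inputs
    # Pairs (i, j) with points[i] <= points[j] componentwise.
    comparable = [(i, j)
                  for i in range(len(points))
                  for j in range(len(points))
                  if all(a <= b for a, b in zip(points[i], points[j]))]
    count = 0
    for table in product([0, 1], repeat=len(points)):  # candidate truth table
        # Monotone means x <= y implies f(x) <= f(y).
        if all(table[i] <= table[j] for i, j in comparable):
            count += 1
    return count

for n in range(5):
    print(n, dedekind(n))   # prints 2, 3, 6, 20, 168
```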

A US national lab has started training a massive AI brain that could ultimately become the must-have computing resource for scientific researchers.

Argonne National Laboratory (ANL) is creating a generative AI model called AuroraGPT and is pouring a vast trove of scientific information into building it.

The model is being trained on its Aurora supercomputer, which delivers more than half an exaflop of performance at ANL. The system’s main computing power comes from Intel’s Ponte Vecchio GPUs.

The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.

H200 Tensor Core GPU. Credit: NVIDIA

In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.

In this article we look at several digital storage-related product introductions at the 2023 Supercomputing Conference (SC23).


WDC was also showing its hybrid storage JBOD Ultrastar Data102 and Data60 platforms to support disaggregated storage and software-defined storage (SDS). These come in dual-port SAS or single-port SATA configurations. The Data102 has storage capacities up to 2.65PB and the Data60 up to 1.56PB in a 4U enclosure that includes IsoVibe and ArcticFlow technologies for improved performance and reliability. The Data102 and Data60 capacity numbers assume 26TB SMR HDDs.
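A quick sanity check of those capacity figures (assuming 102 and 60 drive bays, the quoted 26TB SMR HDDs, and decimal units) confirms both numbers land in petabytes:

```python
# Raw capacity = bays x drive size; /1000 converts TB to PB (decimal units).
print(102 * 26 / 1000)  # Data102: 2.652 -> ~2.65PB
print(60 * 26 / 1000)   # Data60:  1.56  -> 1.56PB
```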

WDC was also showing a GPUDirect storage proof of concept combining the company’s RapidFlex technology with the Ingrasys ES2100 with integrated NVIDIA Spectrum Ethernet switches, as well as NVIDIA’s GPUs, Magnum IO GPUDirect storage, BlueField DPUs and ConnectX SmartNICs. The proof-of-concept demonstration can provide 25GB/s of bandwidth for a single NVIDIA A100 Tensor Core GPU and over 100GB/s for four NVIDIA A100 GPUs.

At SC23, Arcitecta and DDN introduced software-defined storage solutions for AI and cloud applications. WDC also showed SDS with its OpenFlex NVMe storage and GPUDirect storage.

The only AI hardware startup to realize revenue exceeding $100M has finished the first phase of the Condor Galaxy 1 AI supercomputer with partner G42 of the UAE. Other Cerebras customers are sharing their CS-2 results at Supercomputing ‘23, building momentum for the inventor of wafer-scale computing. This company is on a tear.

Four short months ago, Cerebras announced the most significant deal any AI startup has been able to assemble, with partner G42 (Group42), an artificial intelligence and cloud computing company. The eventual 256 CS-2 wafer-scale nodes, delivering 36 exaflops of AI performance, will form one of the world’s largest AI supercomputers, if not the largest.

Cerebras has now finished the first data center implementation and started on the second. These two companies are moving fast to capitalize on the gold rush, projected at $70B by 2028, to stand up large language model services for researchers and enterprises, especially while NVIDIA H100 GPUs remain difficult to obtain, creating an opportunity for Cerebras. In addition, Cerebras recently announced the release of the largest Arabic language model, Jais 30B, built with Core42 on the CS-2, a platform designed to make the development of massive AI models accessible by eliminating the need to decompose and distribute the problem.

Optical tweezers use lasers to manipulate tiny things like cells and nanoparticles. While they might sound like tractor beams from science fiction, their development earned scientists a Nobel Prize in 2018.
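The new paper’s specific modeling is not reproduced here, but as standard textbook background, in the Rayleigh regime (particle radius $a$ much smaller than the laser wavelength) the trapping force comes from the gradient of the beam intensity $I(\mathbf{r})$:

$$
\mathbf{F}_{\text{grad}} = \frac{2\pi n_m a^3}{c}\left(\frac{m^2 - 1}{m^2 + 2}\right)\nabla I(\mathbf{r}),
$$

where $n_m$ is the refractive index of the surrounding medium and $m$ is the ratio of particle to medium refractive index. The same intensity that traps a particle also heats it, which is why making tweezers gentle enough for living cells is nontrivial.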

Scientists have now used supercomputers to make optical tweezers safer to use on living cells with applications to cancer therapy, environmental monitoring, and more.

“We believe our research is one significant step closer towards the industrialization of optical tweezers in biological applications, specifically in both selective cellular surgery and targeted drug delivery,” said Pavana Kollipara, a recent graduate of The University of Texas at Austin.

The new chips were designed to be less powerful than the models sold in the US, according to sources.

NVIDIA really, really doesn’t want to lose access to China’s massive AI chip market. The company is developing three new AI chips especially for China that don’t run afoul of the latest export restrictions in the US, according to The Wall Street Journal and Reuters. Last year, the US government notified the chipmaker that it would restrict the export of computer chips meant for supercomputers and artificial intelligence applications to Russia and China due to concerns that the components could be used for military purposes. That rule prevented NVIDIA from selling certain A100 and H100 chips in China, so it designed the A800 and H800 chips specifically for the Chinese market.

However, the US government recently issued an updated set of restrictions that caps how much computing power a chip meant for export to those countries can have. The A800 and the H800 are no longer eligible for export under the new rules, nor are some of NVIDIA’s other products, including its top-of-the-line RTX 4090 consumer GPU. Some reports even suggest that the company could end up canceling over $5 billion worth of advanced chip orders in China.

The new chips meant for the Chinese market are called the HGX H20, the L20 and the L2, based on the specs sent to distributors. While the H20 is supposed to be the most powerful of the three, none of them exceeds the computing-power threshold set by the US government’s new export rules. That means customers using them for AI applications may need more chips than they would if they had access to higher-spec models.
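As a back-of-the-envelope illustration of that last point (the throughput numbers below are hypothetical placeholders, not published specs for these chips):

```python
import math

def chips_needed(target_tflops, per_chip_tflops):
    """Chips required to reach a target aggregate throughput, ignoring
    interconnect and scaling losses (which make the real gap larger)."""
    return math.ceil(target_tflops / per_chip_tflops)

TARGET = 16_000                     # hypothetical cluster target, in TFLOPS
print(chips_needed(TARGET, 2_000))  # higher-spec chip:  8 units
print(chips_needed(TARGET, 300))    # capped chip:      54 units
```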

“Othello is now solved.” With that summation, a researcher at a Japanese computer company confirmed yet another milestone in supercomputing achievement.

Othello, a 140-year-old game that takes its name from the Shakespearean drama depicting the conflict between the Moor of Venice and Desdemona, does not seem complex at first glance. It is played on a board of eight rows and eight columns, with black and white disks strategically positioned in the squares.

The challenge, according to bioinformatician Hiroki Takizawa, is to conceive a game plan “with no mistake made by either player.”
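“Solved” here means the game-theoretic value under perfect play has been proven. The published proof relied on massive search with heavy pruning, endgame solvers and supercomputer-scale parallelism; the sketch below only illustrates the underlying idea, using a hypothetical game-state interface rather than the actual Othello code:

```python
def negamax(state):
    """Value of `state` for the side to move: +1 win, 0 draw, -1 loss.

    `state` is a hypothetical immutable game-state object exposing
    is_terminal(), score_for_side_to_move(), legal_moves() and play(move).
    Exhaustive search like this is hopeless for full 8x8 Othello without
    the pruning and parallelism the actual proof used.
    """
    if state.is_terminal():
        return state.score_for_side_to_move()
    best = -1
    for move in state.legal_moves():
        # The opponent's best outcome is our worst, hence the negation.
        best = max(best, -negamax(state.play(move)))
        if best == 1:   # a proven win cannot be improved: prune
            break
    return best
```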

A method developed at the University of Duisburg-Essen makes it possible to read data from noisy signals. Theoretical physicists and their experimental colleagues have published their findings in the current issue of Physical Review Research. The method they describe could also be significant for quantum computers.

You know it from the car radio: the weaker the signal, the more disturbing the noise. This is even more true for laboratory measurements. Researchers from the Collaborative Research Center 1242 and the Center for Nanointegration (CENIDE) at the University of Duisburg-Essen (UDE) have now described a method for extracting data from noise.

What a bit is in a conventional computer, i.e., state 1 (current on) or state 0 (current off), is taken over in a quantum computer by quantum bits, or qubits for short. Qubits need defined and distinguishable states, but they can also exist in superpositions of those states, which is what enables many times the computing power of a current computer. This means they could also be used where today’s supercomputers are overtaxed, for example in searching extremely large databases.
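Formally, while a classical bit is either 0 or 1, a qubit can sit in a superposition of both basis states:

$$
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
$$

so $n$ qubits span a $2^n$-dimensional state space. This is the source of the potential speedups, such as Grover’s quadratic speedup for searching unstructured databases.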