
The Aurora supercomputer at Argonne National Laboratory in Lemont, IL, could soon be the world’s fastest. It could revolutionize climate forecasting.

LEMONT, Ill. (WLS) — This is what scientists at Argonne National Laboratory in Lemont call a node: six huge graphics processors and two large CPUs cooled with water to make major calculations a cinch.

Argonne’s new supercomputer doesn’t have just one node, or 10, or 100; it has 10,000 of them. Each rack of nodes weighs eight tons and is cooled by thousands of gallons of water.

This feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); and second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare their different architectures and approaches, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses to achieve these specifications. Even existing supercomputing platforms are unable to simulate the full cortex in real time with complex, detailed neuron models. For example, for mouse-scale (2.5 × 10⁶ neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10¹⁰ neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10¹⁸ flops) and as much power as a quarter-million households (0.5 GW).
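A quick back-of-the-envelope check, using only the figures quoted above (the neuron counts, 10¹⁸ flops, and 0.5 GW), shows how these numbers relate. This is illustrative arithmetic, not new data:

```python
# Back-of-the-envelope check of the brain-simulation scaling figures quoted above.
# All inputs are the figures cited in the text; nothing here is new data.

mouse_neurons = 2.5e6        # mouse-scale cortical model (Eliasmith et al., 2012)
human_neurons = 2e10         # human-scale cortical model (Human Brain Project target)
exascale_flops = 1e18        # 10^18 floating-point operations per second
projected_power_w = 0.5e9    # 0.5 GW, roughly a quarter-million households

# How much larger is the human-scale model than the mouse-scale one?
scale_factor = human_neurons / mouse_neurons
print(f"Human-scale model is ~{scale_factor:,.0f}x larger than the mouse-scale model")

# Energy efficiency implied for the projected exascale machine, in flops per watt.
flops_per_watt = exascale_flops / projected_power_w
print(f"Projected efficiency: {flops_per_watt:.1e} flops/W")
```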

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data processing requirements. Neuromorphic computing is an alternative solution that is inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles of the physics of neural computation that are fundamentally different from digital principles in traditional computing has initiated investigations in the field of neuromorphic engineering (NE) (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulations.
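To make “emulating the electrophysiological behavior of real neurons” concrete, here is a minimal software sketch of a leaky integrate-and-fire neuron, one of the simplest models used in this field. It is not how any particular silicon neuron chip is implemented, and the parameter values are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a software sketch of the kind of
# membrane dynamics that silicon neurons emulate in analog/digital VLSI.
# Parameter values are illustrative, not taken from any particular chip.

dt = 1e-4          # time step (s)
tau = 20e-3        # membrane time constant (s)
v_rest = -70e-3    # resting potential (V)
v_thresh = -50e-3  # spike threshold (V)
v_reset = -65e-3   # reset potential after a spike (V)
r_m = 1e7          # membrane resistance (ohm)
i_in = 2.5e-9      # constant input current (A)

v = v_rest
spike_times = []
for step in range(int(0.5 / dt)):              # simulate 0.5 s of activity
    dv = (-(v - v_rest) + r_m * i_in) / tau    # leaky integration of input current
    v += dv * dt
    if v >= v_thresh:                          # threshold crossing: emit a spike
        spike_times.append(step * dt)
        v = v_reset                            # reset the membrane potential

print(f"{len(spike_times)} spikes in 0.5 s (~{len(spike_times) / 0.5:.0f} Hz)")
```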

I’ve done some videos lately on the 8008 CPU, widely regarded as the world’s first 8-bit programmable microprocessor. Previously I built a nice little single-board computer. In this video I connect eight of these 8008 microprocessors together, designate one as a controller, design a shared memory abstraction between them, and use them to solve a simple parallel computing problem: Conway’s Game of Life. Using my simple, straightforward assembly implementation of Conway’s, I was able to show that the seven CPUs (one controller, six workers) worked together to solve the problem significantly faster than a single processor alone. The 8008 debuted commercially in the early 1970s. It’s a physically small chip, only 18 pins, and requires a triplexed address and data bus. The clock rate is 500 kHz and the instruction set is fairly limited. Nevertheless, you can do a lot with this little CPU. For more vintage computer projects, see https://www.smbaker.com/.
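For readers who want the gist of the controller/worker split without reading 8008 assembly, the following Python sketch mirrors the same idea: the controller partitions the Game of Life grid into bands of rows, each worker computes the next generation for its band, and the controller reassembles the result. The worker count and grid size here are arbitrary illustrative values, not the exact layout of the hardware build:

```python
# Sketch (in Python, not 8008 assembly) of the controller/worker decomposition
# described above. Each "worker" band would run on its own CPU in the real build.

def next_rows(grid, start, end):
    """Compute the next generation for rows start..end-1 of a toroidal grid."""
    h, w = len(grid), len(grid[0])
    out = []
    for y in range(start, end):
        row = []
        for x in range(w):
            live = sum(grid[(y + dy) % h][(x + dx) % w]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            row.append(1 if live == 3 or (grid[y][x] and live == 2) else 0)
        out.append(row)
    return out

def controller_step(grid, workers=6):
    """Controller role: hand each worker a band of rows, then stitch the results."""
    h = len(grid)
    bounds = [(i * h // workers, (i + 1) * h // workers) for i in range(workers)]
    new_grid = []
    for start, end in bounds:
        new_grid.extend(next_rows(grid, start, end))
    return new_grid

# Example: a glider on a 12x12 grid, advanced one generation.
grid = [[0] * 12 for _ in range(12)]
for x, y in [(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)]:
    grid[y][x] = 1
for row in controller_step(grid):
    print("".join("#" if cell else "." for cell in row))
```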

Researchers at the University of Pennsylvania have developed a new computer chip that uses light instead of electricity. This could improve the training of artificial intelligence (AI) models by increasing the speed of data transfer and reducing the amount of electricity consumed.

Humanity is now building exascale supercomputers that can carry out a quintillion computations per second. But while the scale of computation has increased, computing technology still works on principles first used in the 1960s.

Researchers have also been working on computing systems based on quantum mechanics, but those computers are at least a few years, if not more, from becoming widely available. Meanwhile, the recent explosion of AI models has created demand for computers that can process large sets of information, and today’s inefficient computing systems consume large amounts of energy to do so.

Basic biology textbooks will tell you that all life on Earth is built from four types of molecules: proteins, carbohydrates, lipids, and nucleic acids. And each group is vital for every living organism.

But what if humans could actually show that these “molecules of life,” such as amino acids and DNA bases, can be formed naturally in the right environment? Researchers at the University of Florida are using HiPerGator—the most powerful supercomputer in U.S. higher education—to test this idea.

HiPerGator—with its AI models and vast capacity for graphics processing units, or GPUs (specialized processors designed to accelerate graphics rendering)—is transforming the molecular research game.

Researchers believe that, using this technique, even a non-conducting material like glass could one day be turned into a conductor.


A collaboration between scientists at the University of California, Irvine (UCI) and Los Alamos National Laboratory (LANL) has developed a method that converts everyday materials into conductors that can be used to build quantum computers, a press release said.

Computing devices that are ubiquitous today are built of silicon, a semiconductor material. Under certain conditions, silicon behaves like a conductor, but it has limitations that constrain the scale of computation it can handle. The world’s fastest supercomputers are built by putting together silicon-based components, yet even they are expected to be slower than quantum computers.

Quantum computers do not have the same limitations as silicon-based computing, and prototypes being built today can compute in seconds what supercomputers would take years to complete. They could open up a whole new level of computing prowess if they could be built and operated with easier-to-work-with materials. Researchers at UCI have been working to determine how high-quality quantum materials can be obtained, and they have now found a simpler way to make them from everyday materials.

Simulating KH-, RT-, or RM-driven mixing using direct numerical simulations (DNS) can be prohibitively expensive because all the spatial and temporal scales have to be resolved, making approaches such as Reynolds-averaged Navier–Stokes (RANS) often the more favorable engineering option for applications like ICF. To this day, no DNS has been performed for ICF even on the largest supercomputers, as the resolution requirements are too stringent.8 However, RANS approaches also face their own challenges: RANS is based on the Reynolds decomposition of a flow, where mean quantities are intended to represent an average over an ensemble of realizations, which is often replaced by a spatial average due to the scarcity of ensemble datasets. Replacing ensemble averages with spatial averages may be appropriate for flows in homogeneous, isotropic, and fully developed turbulent states, in which spatial, temporal, and ensemble averaging are often equivalent. However, most HED hydrodynamic experiments involve transitional periods in which the flow is neither homogeneous nor isotropic nor fully developed but may contain large-scale unsteady dynamics; thus, the equivalence of the averages can no longer be assumed. Yet RANS models often still need to be initialized in such states of turbulence, and knowing how and when to initialize them in a transitional state is, therefore, challenging and still poorly understood.
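For concreteness, the Reynolds decomposition referred to above can be written as follows; this is generic textbook notation, not the specific formulation used by the BHR model discussed below:

```latex
% Reynolds decomposition of a flow quantity u into an ensemble mean and a fluctuation.
% The overbar denotes the ensemble average over N realizations; in practice it is
% often replaced by a spatial average over a volume V, which is equivalent only for
% homogeneous, isotropic, fully developed turbulence.
u(\mathbf{x},t) = \overline{u}(\mathbf{x},t) + u'(\mathbf{x},t),
\qquad
\overline{u}(\mathbf{x},t) \approx \frac{1}{N}\sum_{n=1}^{N} u^{(n)}(\mathbf{x},t)
\quad \text{vs.} \quad
\langle u \rangle_V(t) = \frac{1}{V}\int_V u(\mathbf{x},t)\,\mathrm{d}\mathbf{x}.
```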

The goal of this paper is to develop a strategy allowing the initialization of a RANS model to describe an unsteady transitional RM-induced flow. We seek to examine how ensemble-averaged quantities evolve during the transition to turbulence based on some of the first ensemble experiments repeated under HED conditions. Our strategy involves using 3D high-resolution implicit large eddy simulations (ILES) to supplement the experiments and both initialize and validate the RANS model. We use the Besnard–Harlow–Rauenzahn (BHR) model,9–12 specifically designed to predict variable-density turbulent physics involved in flows like RM. Previous studies have considered different ways of initializing the BHR model.

We are witnessing a professional revolution where the boundaries between man and machine slowly fade away, giving rise to innovative collaboration.


As Artificial Intelligence (AI) continues to advance by leaps and bounds, it’s impossible to overlook the profound transformations that this technological revolution is imprinting on the professions of the future. A paradigm shift is underway, redefining not only the nature of work but also how we conceptualize collaboration between humans and machines.