
Neuromorphic computers are based on intricate networks of simple, elementary processors (which act like the brain’s neurons and synapses). The main advantage of this is that these machines are inherently “parallel”.

This means that, as with neurons and synapses, virtually all of the machine's processors can potentially operate simultaneously, communicating with one another in tandem.

In addition, because the computations performed by individual neurons and synapses are very simple compared with those in a traditional computer, the energy consumption is orders of magnitude smaller. Although neurons are sometimes described as processing units and synapses as memory units, each contributes to both processing and storage. In other words, the data is already located where the computation needs it.
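A minimal sketch of one of these simple, elementary processors is the leaky integrate-and-fire neuron. The model below is purely illustrative (the leak and threshold values are assumptions, not figures from any of the systems discussed here); note how the neuron's state serves as both its memory and the substrate of its computation.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of the
# kind of simple, elementary processor neuromorphic hardware implements.
# The membrane potential is local state, so memory sits where the compute is.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return a list of 0/1 spikes for a stream of input currents."""
    potential = 0.0  # the neuron's local state doubles as its memory
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # integrate input, with leak
        if potential >= threshold:              # fire when threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.4, 0.4, 0.4, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

In a neuromorphic machine, millions of units like this run in parallel and communicate only via spikes, which is what makes the architecture inherently parallel and sparse in its energy use.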

At its Quantum Summit 2023, IBM took the stage with an interesting spirit: one of almost awe at having things go its way. But today's quantum computing – the kind that is reshaping IBM's roadmap on the back of breakthrough after breakthrough – was hard enough to consolidate. As IBM sees it, the future of quantum computing will hardly be more forgiving, and further improvements will ultimately be required to the cutting-edge devices it announced at the event: the 133-qubit Heron Quantum Processing Unit (QPU), the company's first utility-scale quantum processor, and the self-contained Quantum System Two, a quantum-specific supercomputing architecture.

But each breakthrough that later becomes obsolete is another push up against what we might call quantum's "plateau of understanding." We've already climbed this plateau with semiconductors – so much so that our latest CPUs and GPUs are reaching practical, fundamental design limits where quantum effects start ruining the math. Conquering the plateau means that utility and understanding become sufficient for research and development to be somewhat self-sustaining – at least for a Moore's-law-esque while.

Our brains are remarkably energy efficient.

Using just 20 watts of power, the human brain is capable of processing the equivalent of an exaflop — or a billion-billion mathematical operations per second.
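Taking the figures above at face value, a quick back-of-the-envelope calculation shows just how stark that efficiency is. The supercomputer power figure below is an assumed order of magnitude for contrast, not a number from the original text.

```python
# Back-of-the-envelope: operations per joule for the brain, using the
# figures quoted above (~1 exaflop equivalent on 20 watts).
brain_ops_per_second = 1e18   # ~1 exaflop, as estimated above
brain_power_watts = 20.0

ops_per_joule = brain_ops_per_second / brain_power_watts
print(f"{ops_per_joule:.0e} operations per joule")  # 5e+16

# For contrast, an exascale supercomputer draws on the order of 20 MW
# (assumed, rough order of magnitude) for a comparable raw operation rate.
supercomputer_power_watts = 20e6
print(f"power ratio: {supercomputer_power_watts / brain_power_watts:.0e}")  # 1e+06
```

On these assumptions, the brain is roughly a million times more power-efficient per operation than exascale silicon.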

Now, researchers in Australia are building what will be the world’s first supercomputer that can simulate networks at this scale.

ICYMI: DeepSouth uses a neuromorphic computing system which mimics biological processes, using hardware to efficiently emulate large networks of spiking neurons at 228 trillion synaptic operations per second — rivalling the estimated rate of operations in the human brain.


Australian researchers are putting together a supercomputer designed to emulate the world’s most efficient learning machine – a neuromorphic monster capable of the same estimated 228 trillion synaptic operations per second that human brains handle.

As the age of AI dawns upon us, it’s clear that this wild technological leap is one of the most significant in the planet’s history, and will very soon be deeply embedded in every part of our lives. But it all relies on absolutely gargantuan amounts of computing power. Indeed, on current trends, the AI servers NVIDIA sells alone will likely be consuming more energy annually than many small countries. In a world desperately trying to decarbonize, that kind of energy load is a massive drag.

But as often happens, nature has already solved this problem. Our own necktop computers are still the state of the art, capable of learning super quickly from small amounts of messy, noisy data, or processing the equivalent of a billion billion mathematical operations every second – while consuming a paltry 20 watts of energy.

Simulations of binary neutron star mergers suggest that future detectors will distinguish between different models of hot nuclear matter.

Researchers used supercomputer simulations to explore how neutron star mergers affect gravitational waves, finding a key relationship with the remnant’s temperature. This study aids future advancements in detecting and understanding hot nuclear matter.

Exploring neutron star mergers and gravitational waves.

The mini-brain functioned like both the central processing unit and memory storage of a supercomputer. It received input in the form of electrical zaps and outputted its calculations through neural activity, which was subsequently decoded by an AI tool.

When trained on soundbites from a pool of people—transformed into electrical zaps—Brainoware eventually learned to pick out the “sounds” of specific people. In another test, the system successfully tackled a complex math problem that’s challenging for AI.

The system’s ability to learn stemmed from changes to neural network connections in the mini-brain—which is similar to how our brains learn every day. Although just a first step, Brainoware paves the way for increasingly sophisticated hybrid biocomputers that could lower energy costs and speed up computation.

Researchers at Western Sydney University in Australia have teamed up with tech giants Intel and Dell to build a massive supercomputer intended to simulate neural networks at the scale of the human brain.

They say the computer, dubbed DeepSouth, is capable of emulating networks of spiking neurons at a mind-melting 228 trillion synaptic operations per second, putting it on par with the estimated rate at which the human brain completes operations.

The project was announced at this week’s NeuroEng Workshop hosted by Western Sydney’s International Centre for Neuromorphic Systems (ICNS), a forum for luminaries in the field of computational neuroscience.



In the realm of computing technology, there is nothing quite as powerful and complex as the human brain. With its 86 billion neurons and up to a quadrillion synapses, the brain has unparalleled capabilities for processing information. Unlike traditional computing devices with physically separated units, the brain’s efficiency lies in its ability to serve as both a processor and memory device. Recognizing the potential of harnessing the brain’s power, researchers have been striving to create more brain-like computing systems.

Efforts to mimic the brain’s activity in artificial systems have been ongoing, but progress has been limited. Even one of the most powerful supercomputers in the world, Riken’s K Computer, struggled to simulate just a fraction of the brain’s activity. With its 82,944 processors and a petabyte of main memory, it took 40 minutes to simulate just one second of the activity of 1.73 billion neurons connected by 10.4 trillion synapses. This represented only one to two percent of the brain’s capacity.
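The numbers quoted for that simulation imply a striking gap, which a quick calculation makes concrete (using the 86-billion-neuron estimate mentioned above):

```python
# Sanity-checking the K Computer figures quoted above.
simulated_biological_seconds = 1.0
wall_clock_seconds = 40 * 60          # 40 minutes of compute time

slowdown = wall_clock_seconds / simulated_biological_seconds
print(f"slowdown vs real time: {slowdown:.0f}x")   # 2400x

simulated_neurons = 1.73e9            # neurons in the simulation
brain_neurons = 86e9                  # ~86 billion, as noted above
fraction = simulated_neurons / brain_neurons
print(f"fraction of brain simulated: {fraction:.1%}")  # 2.0%
```

So even a top-ranked supercomputer ran about 2,400 times slower than biology while covering only around two percent of the brain's neurons, which is consistent with the one-to-two-percent figure quoted above.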

In recent years, scientists and engineers have delved into the realm of neuromorphic computing, which aims to replicate the brain’s structure and functionality. By designing hardware and algorithms that mimic the brain, researchers hope to overcome the limitations of traditional computing and improve energy efficiency. However, despite significant progress, neuromorphic computing still poses challenges, such as high energy consumption and time-consuming training of artificial neural networks.