
Neuromorphic computers perform computations by emulating the human brain [1]. Like the human brain, they are extremely energy efficient [2]. For instance, while CPUs and GPUs consume around 70–250 W of power, a neuromorphic computer such as IBM's TrueNorth consumes around 65 mW, i.e., 4–5 orders of magnitude less power than CPUs and GPUs [3]. The structural and functional units of neuromorphic computation are neurons and synapses, which can be implemented on digital or analog hardware using a range of architectures, devices, and materials [4]. Although there is a wide variety of neuromorphic computing systems, we focus our attention on spiking neuromorphic systems composed of these neurons and synapses. Spiking neuromorphic hardware implementations include Intel's Loihi [5], SpiNNaker 2 [6], BrainScaleS 2 [7], TrueNorth [3], and DYNAPs [8]. This spike-based operation of neurons and synapses is crucial to the energy efficiency of neuromorphic computers. For the purposes of this paper, we define neuromorphic computing as any computing paradigm (theoretical, simulated, or hardware) that performs computations by emulating the human brain, using neurons and synapses that communicate with binary-valued signals (also known as spikes).
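To make this spiking model concrete, here is a minimal Python sketch (not tied to any particular neuromorphic chip) of a single leaky integrate-and-fire neuron receiving binary spikes through one weighted synapse; the weight, leak, and threshold values are illustrative assumptions, not parameters of any of the systems above.

def simulate_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the binary output spike train of one leaky integrate-and-fire neuron."""
    potential = 0.0
    output_spikes = []
    for spike in input_spikes:
        potential = leak * potential + weight * spike  # integrate the weighted input and leak
        if potential >= threshold:                     # fire once the threshold is crossed
            output_spikes.append(1)
            potential = 0.0                            # reset the membrane potential after a spike
        else:
            output_spikes.append(0)
    return output_spikes

incoming = [1, 0, 1, 1, 0, 1, 1, 1]      # binary spike train from an upstream neuron
print(simulate_lif(incoming))            # [0, 0, 1, 0, 0, 1, 0, 1]

The neuron only communicates through these 0/1 events, which is the property the hardware listed above exploits for energy efficiency.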

Neuromorphic computing is primarily used in machine learning applications, almost exclusively by leveraging spiking neural networks (SNNs) [9]. In recent years, however, it has also been used in non-machine learning applications such as graph algorithms, Boolean linear algebra, and neuromorphic simulations [10,11,12]. Researchers have also shown that neuromorphic computing is Turing-complete (i.e., capable of general-purpose computation) [13]. This ability to perform general-purpose computations while potentially using orders of magnitude less energy is why neuromorphic computing is poised to be an indispensable part of the energy-efficient computing landscape in the future.

Neuromorphic computers are currently seen as accelerators for machine learning tasks using SNNs. To perform any other operation (e.g., arithmetic, logical, relational), we still resort to CPUs and GPUs because no good neuromorphic methods exist for these operations. Such general-purpose operations are important for preprocessing data before it is transferred to a neuromorphic processor. In the current neuromorphic workflow (preprocessing on the CPU/GPU, inference on the neuromorphic processor), more than 99% of the time is spent transferring data (see Table 7). This is highly inefficient and can be avoided if the preprocessing is also done on the neuromorphic processor. Devising neuromorphic approaches for these preprocessing operations would drastically reduce the cost of moving data between a neuromorphic computer and a CPU/GPU, and would enable all stages of computation (preprocessing as well as inference) to run efficiently on low-power neuromorphic computers deployed at the edge.
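As an illustration of the kind of preprocessing primitive that could live on the neuromorphic side, here is a sketch of a logical AND built from a single threshold neuron with two input synapses; this is a simplified example of the general idea, not the specific construction used in the cited work.

def spiking_and(spike_a, spike_b, threshold=2.0):
    """Fire (return 1) only when both input neurons spike in the same time step."""
    membrane = 1.0 * spike_a + 1.0 * spike_b   # both synapses carry weight 1
    return 1 if membrane >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", spiking_and(a, b))   # only (1, 1) produces an output spike

In the same spirit, other logical and relational operations can be composed from such threshold neurons, which is what makes on-chip preprocessing plausible.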

Superconducting quantum technology has long promised to bridge the divide between existing electronic devices and the delicate quantum landscape beyond. Unfortunately, progress in making critical processes stable has stagnated over the past decade.

Now a significant step forward has finally been realized, with researchers from the University of Maryland making superconducting qubits that last 10 times longer than before.

What makes qubits so useful in computing is that their quantum properties entangle in ways that are mathematically convenient for making short work of certain complex algorithms, solving in moments select problems that would take other technologies decades or more.

Restoring And Extending The Capabilities Of The Human Brain — Dr. Behnaam Aazhang, Ph.D. — Director, Rice Neuroengineering Initiative, Rice University


Dr. Behnaam Aazhang, Ph.D. (https://aaz.rice.edu/) is the J.S. Abercrombie Professor of Electrical and Computer Engineering and Director of the Rice Neuroengineering Initiative (NEI, https://neuroengineering.rice.edu/) at Rice University. His broad research interests include signal and data processing, information theory, dynamical systems, and their applications to neuroengineering, with focus areas in (i) understanding neuronal circuit connectivity and the impact of learning on connectivity; (ii) developing minimally invasive and non-invasive real-time closed-loop stimulation of neuronal systems to mitigate disorders such as epilepsy, Parkinson's disease, depression, obesity, and mild traumatic brain injury; (iii) developing a patient-specific, multisite wireless monitoring and pacing system with temporal and spatial precision to restore healthy function to a diseased heart; and (iv) developing algorithms to detect, predict, and prevent security breaches in cloud computing and storage systems.

Dr. Aazhang received his B.S. (with highest honors), M.S., and Ph.D. degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 1981, 1983, and 1986, respectively. From 1981 to 1985, he was a Research Assistant in the Coordinated Science Laboratory at the University of Illinois. In August 1985, he joined the faculty of Rice University. From 2006 to 2014, he held an Academy of Finland Distinguished Visiting Professorship (FiDiPro) at the University of Oulu in Oulu, Finland.

As Nvidia’s recent surge in market capitalization clearly demonstrates, the AI industry is in desperate need of new hardware to train large language models (LLMs) and other AI-based algorithms. While server and HPC GPUs may be worthless for gaming, they serve as the foundation for data centers and supercomputers that perform highly parallelized computations necessary for these systems.

When it comes to AI training, Nvidia’s GPUs have been the most desirable to date. In recent weeks, the company briefly achieved an unprecedented $1 trillion market capitalization due to this very reason. However, MosaicML now emphasizes that Nvidia is just one choice in a multifaceted hardware market, suggesting companies investing in AI should not blindly spend a fortune on Team Green’s highly sought-after chips.

The AI startup tested AMD's MI250 and Nvidia's A100 cards, both of which are one generation behind each company's current flagship HPC GPUs. For testing, it used its own software tools along with the Meta-backed open-source framework PyTorch and AMD's own software stack.

New research by the University of Liverpool could signal a step change in the quest to design the new materials that are needed to meet the challenge of net zero and a sustainable future.

In research published in the journal Nature, Liverpool researchers have shown that a mathematical algorithm can provide guaranteed predictions of the structure of any material based solely on knowledge of the atoms that make it up.

Developed by an interdisciplinary team of researchers from the University of Liverpool’s Departments of Chemistry and Computer Science, the algorithm systematically evaluates entire sets of possible structures at once, rather than considering them one at a time, to accelerate identification of the correct solution.

Recent progress in AI has been startling. Barely a week’s gone by without a new algorithm, application, or implication making headlines. But OpenAI, the source of much of the hype, only recently completed their flagship algorithm, GPT-4, and according to OpenAI CEO Sam Altman, its successor, GPT-5, hasn’t begun training yet.

It’s possible the tempo will slow in the coming months, but don’t bet on it. A new AI model as capable as GPT-4, or more so, may drop sooner rather than later.

This week, in an interview with Will Knight, Google DeepMind CEO Demis Hassabis said their next big model, Gemini, is currently in development, “a process that will take a number of months.” Hassabis said Gemini will be a mashup drawing on AI’s greatest hits, most notably DeepMind’s AlphaGo, which employed reinforcement learning to topple a champion at Go in 2016, years before experts expected the feat.

AI applications are summarizing articles, writing stories and engaging in long conversations — and large language models are doing the heavy lifting.

A large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other forms of content based on knowledge gained from massive datasets.

Large language models are among the most successful applications of transformer models. They aren’t used just to teach AI systems human language; they are also being applied to understanding proteins, writing software code, and much more.
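As a small illustration of what "predict and generate text" means in practice, the following sketch runs next-token generation with the small public gpt2 checkpoint; it assumes the Hugging Face transformers library and PyTorch are installed. Production LLMs are far larger, but the interface is the same: tokens in, predicted next tokens out.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small causal (next-token-prediction) language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")         # text -> token ids
outputs = model.generate(**inputs, max_new_tokens=20)   # autoregressive next-token prediction
print(tokenizer.decode(outputs[0], skip_special_tokens=True))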


New products like ChatGPT have captivated the public, but what will the actual money-making applications be? Will they offer sporadic business success stories lost in a sea of noise, or are we at the start of a true paradigm shift? What will it take to develop AI systems that are actually workable?

To chart AI’s future, we can draw valuable lessons from the preceding step-change advance in technology: the Big Data era.