
Computers have come so far in terms of their power and potential, rivaling and even eclipsing human brains in their ability to store and crunch data, make predictions and communicate. But there is one domain where human brains continue to dominate: energy efficiency.

“The most efficient computers are still approximately four orders of magnitude — that’s 10,000 times — higher in energy requirements compared to the human brain for specific tasks such as image processing and recognition, although they outperform the brain in tasks like mathematical calculations,” said UC Santa Barbara electrical and computer engineering Professor Kaustav Banerjee, a world expert in the realm of nanoelectronics. “Making computers more energy efficient is crucial because the worldwide energy consumption by on-chip electronics stands at #4 in the global rankings of nation-wise energy consumption, and it is increasing exponentially each year, fueled by applications such as artificial intelligence.” Additionally, he said, the problem of energy inefficient computing is particularly pressing in the context of global warming, “highlighting the urgent need to develop more energy-efficient computing technologies.”

Neuromorphic computing has emerged as a promising way to bridge the energy efficiency gap. By mimicking the structure and operations of the human brain, where processing occurs in parallel across an array of low-power neurons, it may be possible to approach brain-like energy efficiency.

As the most advanced silicon nodes reach the limits of planar integration, 2D materials could help advance the semiconductor industry: with their potential for multifunctional chips, they offer combined logic, memory and sensing in integrated 3D chips.

The standard theory of cosmology—which says 95% of the universe is made up of unknown stuff we can’t see—has passed its strictest test yet. The first results released from an instrument designed to study the cosmic effects of mysterious dark energy confirm that, to within 1%, the universe has evolved over the past 11 billion years just as theorists have predicted.

The findings, presented today in a series of talks at the American Physical Society meeting in Sacramento, California, and the Moriond meeting in Italy, as well as in a set of preprints posted to arXiv, come from the Dark Energy Spectroscopic Instrument (DESI), which has logged more than 6 million galaxies in deep space to construct the largest 3D map of the universe yet compiled.

“It’s a tremendous instrument and a major result,” says Eric Gawiser, a cosmologist at Rutgers University who was not involved with the work. “The universe DESI is finding is very sensible, with tantalizing hints of a more interesting one.”

Computer vision, one of the major areas of artificial intelligence, focuses on enabling machines to interpret and understand visual data. This field encompasses image recognition, object detection, and scene understanding. Researchers continuously strive to improve the accuracy and efficiency of neural networks to tackle these complex tasks effectively. Advanced architectures, particularly Convolutional Neural Networks (CNNs), play a crucial role in these advancements, enabling the processing of high-dimensional image data.

One major challenge in computer vision is the substantial computational resources required by traditional CNNs. These networks often rely on linear transformations and fixed activation functions to process visual data. While effective, this approach demands many parameters, leading to high computational costs and limiting scalability. Consequently, there’s a need for more efficient architectures that maintain high performance while reducing computational overhead.

Current methods in computer vision typically rely on CNNs, which have been successful because they capture spatial hierarchies in images: each layer applies a linear transformation followed by a non-linear activation function, allowing the network to learn complex patterns. However, the significant parameter count of CNNs poses challenges, especially in resource-constrained environments, so researchers seek innovative ways to optimize these networks, making them more efficient without compromising accuracy.
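To make the parameter argument above concrete, the sketch below (a minimal illustration, not any specific published network; the image size, filter count and kernel size are assumed for the example) counts the parameters of one small convolutional layer against a dense layer on the same image, and implements the "linear transformation followed by a non-linear activation" step as a naive convolution plus ReLU in NumPy:

```python
import numpy as np

# Assumed illustrative sizes: a 32x32 RGB image, 16 filters of 3x3.
H, W, C_in, C_out, K = 32, 32, 3, 16, 3

# Convolution shares weights across spatial positions, so its cost is tiny:
conv_params = C_out * (K * K * C_in + 1)             # weights + one bias per filter -> 448
# A dense layer mapping the flattened image to a same-resolution output does not:
dense_params = (H * W * C_in + 1) * (H * W * C_out)  # ~50 million

def conv2d_relu(x, w, b):
    """Naive valid-padding, stride-1 convolution followed by ReLU."""
    h, ww, _ = x.shape
    co, k, _, _ = w.shape                            # w: (C_out, K, K, C_in)
    out = np.zeros((h - k + 1, ww - k + 1, co))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]           # K x K x C_in window
            # Linear transformation: contract each filter with the patch.
            out[i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2])) + b
    return np.maximum(out, 0)                        # fixed non-linear activation (ReLU)

x = np.random.rand(H, W, C_in)
w = np.random.rand(C_out, K, K, C_in)
b = np.random.rand(C_out)
y = conv2d_relu(x, w, b)
print(conv_params, dense_params, y.shape)            # 448 vs ~50M; output (30, 30, 16)
```

The weight sharing shown here is exactly why convolutions scale to high-resolution images where dense layers cannot, yet stacking many such layers still accumulates the large parameter counts the text describes.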