
As the most advanced silicon nodes reach the limits of planar integration, 2D materials could help the semiconductor industry advance: with their potential for multifunctional chips, they offer combined logic, memory and sensing in integrated 3D chips.

The standard theory of cosmology, which says 95% of the universe is made up of unknown stuff we can't see, has passed its strictest test yet. The first results from an instrument designed to study the cosmic effects of mysterious dark energy confirm that, to within 1%, the universe has evolved over the past 11 billion years just as theorists have predicted.

The findings, presented today in a series of talks at the American Physical Society meeting in Sacramento, California, and the Moriond meeting in Italy, as well as in a set of preprints posted to arXiv, come from the Dark Energy Spectroscopic Instrument (DESI), which has logged more than 6 million galaxies in deep space to construct the largest 3D map of the universe yet compiled.

“It’s a tremendous instrument and a major result,” says Eric Gawiser, a cosmologist at Rutgers University who was not involved with the work. “The universe DESI is finding is very sensible, with tantalizing hints of a more interesting one.”

Computer vision, one of the major areas of artificial intelligence, focuses on enabling machines to interpret and understand visual data. The field encompasses image recognition, object detection, and scene understanding, and researchers continually work to improve the accuracy and efficiency of the neural networks that tackle these tasks. Advanced architectures, particularly Convolutional Neural Networks (CNNs), play a crucial role in these advances, enabling the processing of high-dimensional image data.

One major challenge in computer vision is the substantial computational cost of traditional CNNs. These networks rely on linear transformations and fixed activation functions to process visual data. While effective, this approach demands large numbers of parameters, leading to high computational costs and limiting scalability. Consequently, there is a need for more efficient architectures that maintain high performance while reducing computational overhead.
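To make the cost concern concrete, here is a minimal sketch in PyTorch that counts the trainable parameters of a small convolutional classifier. The layer widths (64 and 128 channels, a 1,000-way head) are illustrative assumptions, not taken from any particular model:

```python
import torch.nn as nn

# A small, illustrative CNN; layer sizes are hypothetical, chosen for the example.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),    # 64*3*3*3 + 64     =   1,792 params
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, padding=1),  # 128*64*3*3 + 128  =  73,856 params
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                       # global average pooling, no params
    nn.Flatten(),
    nn.Linear(128, 1000),                          # 128*1000 + 1000   = 129,000 params
)

# Count trainable parameters; the total grows rapidly with depth and width.
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {total:,}")
```

Even this toy network has roughly 200,000 trainable parameters; widely used CNNs with dozens of layers run into the tens of millions, which is where the scalability concerns above come from.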

Current methods in computer vision typically use CNNs, which have been successful because of their ability to capture spatial hierarchies in images. These networks apply linear transformations followed by non-linear activation functions, which lets them learn complex patterns. However, the large parameter counts of CNNs pose challenges, especially in resource-constrained environments, and researchers are looking for ways to make these networks more efficient without compromising accuracy.
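As a rough illustration of that transform-then-activate pattern, the sketch below (again PyTorch, with arbitrary random filters rather than trained weights) stacks two stages, each a convolution, which is a linear transformation, followed by a fixed ReLU non-linearity:

```python
import torch
import torch.nn.functional as F

# Illustrative forward pass: each stage is a linear transformation (convolution)
# followed by a fixed non-linear activation, the pattern described above.
x = torch.randn(1, 3, 32, 32)            # a batch of one 32x32 RGB image
w1 = torch.randn(16, 3, 3, 3)            # 16 filters of size 3x3 over 3 input channels
w2 = torch.randn(32, 16, 3, 3)           # 32 filters over the 16 channels from stage 1

h = F.relu(F.conv2d(x, w1, padding=1))   # linear (conv) -> non-linear (ReLU)
y = F.relu(F.conv2d(h, w2, padding=1))   # same pattern, repeated
print(y.shape)                           # torch.Size([1, 32, 32, 32])
```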