
As the most advanced silicon nodes reach the limits of planar integration, 2D materials could help advance the semiconductor industry. With their potential for multifunctional chips, they offer combined logic, memory and sensing in integrated 3D chips.

The standard theory of cosmology—which says 95% of the universe is made up of unknown stuff we can’t see—has passed its strictest test yet. The first results released from an instrument designed to study the cosmic effects of mysterious dark energy confirm that, to the nearest 1%, the universe has evolved over the past 11 billion years just as theorists have predicted.

The findings, presented today in a series of talks at the American Physical Society meeting in Sacramento, California, and the Moriond meeting in Italy, as well as in a set of preprints posted to arXiv, come from the Dark Energy Spectroscopic Instrument (DESI), which has logged more than 6 million galaxies in deep space to construct the largest 3D map of the universe yet compiled.

“It’s a tremendous instrument and a major result,” says Eric Gawiser, a cosmologist at Rutgers University who was not involved with the work. “The universe DESI is finding is very sensible, with tantalizing hints of a more interesting one.”

Computer vision, a major area of artificial intelligence, enables machines to interpret and understand visual data; the field encompasses image recognition, object detection, and scene understanding. Researchers work to improve the accuracy and efficiency of the neural networks behind these tasks. Advanced architectures, particularly convolutional neural networks (CNNs), are central to this progress because they can process high-dimensional image data.

One major challenge in computer vision is the substantial computational cost of traditional CNNs. These networks often rely on linear transformations and fixed activation functions to process visual data. While effective, this approach demands many parameters, leading to high computational costs and limited scalability. Consequently, there is a need for more efficient architectures that maintain high performance while reducing computational overhead.

Current methods in computer vision typically use CNNs, which succeed by capturing spatial hierarchies in images: stacked convolutions apply linear transformations followed by non-linear activation functions, allowing the network to learn complex patterns (see the sketch below). However, the large parameter count of CNNs poses challenges, especially in resource-constrained environments, and researchers are looking for ways to make these networks more efficient without compromising accuracy.
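To make the parameter-count concern concrete, here is a minimal sketch of the pattern these paragraphs describe: convolutions (linear transformations) followed by fixed non-linear activations, plus a count of the resulting trainable parameters. It assumes PyTorch, and the `SmallConvNet` name, layer sizes, and 32x32 input are illustrative choices, not taken from any of the work discussed above.

```python
# Minimal sketch of the CNN pattern described above: linear (convolution)
# transformations followed by fixed non-linear activations.
# Assumes PyTorch; all layer sizes and names are illustrative.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # linear transformation
            nn.ReLU(),                                    # fixed non-linear activation
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Dense head for 32x32 inputs (two 2x pools leave an 8x8 feature map);
        # layers like this one typically dominate the parameter count.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallConvNet()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params:,}")  # ~60k for this toy network
```

Even this toy network has roughly 60,000 trainable parameters, about two thirds of them in the single dense layer; production architectures scale this into the millions, which is the computational overhead the paragraphs above refer to.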

Researchers at the University of Michigan have developed a method to produce artificially grown miniature brains, known as human brain organoids, that are free of animal cells, an advance that could greatly improve the way neurodegenerative conditions are studied and, eventually, treated.

Over the last decade, scientists have explored the use of human brain organoids as an alternative to mouse models. These self-assembled, 3D tissues, derived from embryonic or induced pluripotent stem cells, more closely model the brain's complex structure than conventional two-dimensional cultures do.

Until now, the engineered network of proteins and molecules that gives structure to the cells in organoids, known as the extracellular matrix, often used a substance derived from mouse sarcomas called Matrigel. That approach suffers from significant disadvantages: Matrigel has a relatively undefined composition as well as batch-to-batch variability.

Researchers from Johns Hopkins Medicine and the National Institutes of Health's National Institute on Aging say their study of 40 older adults with obesity and insulin resistance offers important clues about the potential benefits of both eating plans for brain health. Participants were randomly assigned to either an intermittent fasting diet or a standard healthy diet approved by the U.S. Department of Agriculture (USDA).