
Rewiring the Brain: The Neural Code of Traumatic Memories

Summary: To unravel how traumatic memories form, researchers used innovative optical and machine-learning methods to decode the neuronal networks the brain engages while creating a trauma memory.

The team identified a neural population encoding fear memory, revealing the synchronous activation and crucial role of the dorsal part of the medial prefrontal cortex (dmPFC) in associative fear memory retrieval in mice.

Groundbreaking analytical approaches, including the ‘elastic net’ machine-learning algorithm, pinpointed specific neurons and their functional connectivity within the spatial and functional fear-memory neural network.
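For readers curious what that looks like in practice, here is a minimal sketch of the elastic-net idea: fit a sparse linear model that predicts trial type from per-neuron activity, then read off which neurons carry weight. The scikit-learn call is a generic stand-in for the paper's actual pipeline, and the activity data below are synthetic placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Synthetic activity matrix: rows = trials, columns = neurons.
X = rng.normal(size=(n_trials, n_neurons))
y = rng.integers(0, 2, size=n_trials)   # 1 = fear trial, 0 = neutral (invented labels)
# Pretend neurons 0-4 encode the fear memory: shift their activity on fear trials.
X[y == 1, :5] += 1.5

# Elastic net = mix of L1 (sparsity) and L2 (grouping) penalties.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000)
clf.fit(X, y)

# Neurons with non-zero coefficients form the putative memory-encoding population.
encoding_neurons = np.flatnonzero(clf.coef_[0])
print("candidate encoding neurons:", encoding_neurons)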

Meta shows how to reduce hallucinations in ChatGPT & Co with prompt engineering

When ChatGPT & Co. have to check their answers themselves, they make fewer mistakes, according to a new study by Meta.

ChatGPT and other language models repeatedly reproduce incorrect information — even when they have learned the correct information. There are several approaches to reducing hallucination. Researchers at Meta AI now present Chain-of-Verification (CoVe), a prompt-based method that significantly reduces this problem.

New method relies on self-verification of the language model.
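CoVe works in four prompting steps: draft an answer, plan verification questions, answer them independently so the model cannot simply repeat its own mistake, then revise the draft against the checks. A conceptual sketch of that loop, with llm() as a placeholder for any text-completion call (it is not a real library function):

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your language-model call here")

def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = llm(f"Answer the question:\n{question}")

    # 2. Plan verification questions probing the draft's factual claims.
    plan = llm(
        "List short fact-checking questions for each claim in this answer:\n"
        f"Question: {question}\nDraft answer: {baseline}"
    )

    # 3. Answer each verification question independently, without showing
    #    the draft, so errors in the draft are not blindly carried over.
    checks = [
        f"Q: {q}\nA: {llm(q)}"
        for q in plan.splitlines()
        if q.strip()
    ]

    # 4. Produce the final answer, revising the draft against the checks.
    return llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        "Verification results:\n" + "\n".join(checks) + "\n"
        "Rewrite the draft so it is consistent with the verification results."
    )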

Elevating neuromorphic computing using laser-controlled filaments in vanadium dioxide

In a new Science Advances study, scientists from the University of Science and Technology of China have developed a dynamic network structure using laser-controlled conducting filaments for neuromorphic computing.

Neuromorphic computing is an emerging field of research that draws inspiration from the human brain to create efficient and intelligent computer systems. At its core, it relies on artificial neural networks, computational models inspired by the neurons and synapses in the brain. But building hardware that implements them can be challenging.

Mott materials have emerged as suitable candidates for neuromorphic computing due to their unique transition properties. The Mott transition involves a rapid change in electrical conductivity, often accompanied by a switch between insulating and metallic states.
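As a rough intuition for why that switching behaviour maps onto a neuron, here is a toy model of a threshold switch that integrates its input and "fires" when an internal state crosses the transition point. The parameters are invented for illustration, not fitted to vanadium dioxide:

def simulate_mott_neuron(inputs, threshold=1.0, leak=0.9, reset=0.2):
    state, spikes = 0.0, []
    for drive in inputs:
        state = leak * state + drive   # integrate input with leakage
        if state >= threshold:         # insulator-to-metal transition
            spikes.append(1)           # conductivity jumps: emit a spike
            state = reset              # device relaxes back toward insulating
        else:
            spikes.append(0)
    return spikes

print(simulate_mott_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))  # -> [0, 0, 1, 0, 0, 1]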

Nanoelectronic device performs real-time AI classification without relying on the cloud

Forget the cloud. Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies, the device can crunch large amounts of data and perform artificial intelligence (AI) tasks in real time without beaming data to the cloud for analysis.

With its tiny footprint, ultra-low power consumption and lack of lag time to receive analyses, the device is ideal for direct incorporation into wearable electronics (like smart watches and fitness trackers) for real-time data processing and near-instant diagnostics.

To test the concept, the engineers used the device to classify large amounts of information from publicly available electrocardiogram (ECG) datasets. Not only could the device efficiently and correctly identify an irregular heartbeat, it was also able to determine the arrhythmia subtype from among six different categories with nearly 95% accuracy.
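For context, the classification task itself can be prototyped in a few lines of ordinary software. The sketch below is an illustrative six-way beat classifier, not the device's on-chip algorithm, and the beat waveforms and labels are random placeholders standing in for a public ECG dataset:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_beats, beat_len, n_classes = 3000, 180, 6   # fixed-length beat segments, 6 subtypes

X = rng.normal(size=(n_beats, beat_len))      # placeholder beat waveforms
y = rng.integers(0, n_classes, size=n_beats)  # placeholder arrhythmia labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))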

50 exaFLOPS supercomputer planned for 2025

Chip designer Tachyum has accepted a major purchase order from a U.S. company to build a new supercomputing system for AI. This will be based on its 5 nanometre (nm) “Prodigy” Universal Processor chip, delivering more than 50 exaFLOPS of performance.

Tachyum, founded in 2016 and headquartered in Santa Clara, California, claims to have developed a disruptive, ultra-low-power processor architecture that could revolutionise data centre, AI, and high-performance computing (HPC) markets.

New easy-to-use optical chip can self-configure to perform various functions

Researchers have developed an easy-to-use optical chip that can configure itself to achieve various functions. The positive real-valued matrix computation they have achieved gives the chip the potential to be used in applications requiring optical neural networks. Optical neural networks can be used for a variety of data-heavy tasks such as image classification, gesture interpretation and speech recognition.

Photonic integrated circuits that can be reconfigured after manufacturing to perform different functions have been developed previously. However, they tend to be difficult to configure because the user needs to understand the internal structure and principles of the chip and individually adjust its basic units.

“Our new chip can be treated as a black box, meaning users don’t need to understand its internal structure to change its function,” said research team leader Jianji Dong from Huazhong University of Science and Technology in China. “They only need to set a training objective, and, with computer control, the chip will self-configure to achieve the desired functionality based on the input and output.”
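The sketch below illustrates that black-box training loop numerically. A hypothetical sigmoid response stands in for the real photonic mesh physics, and gradient descent drives the control settings until the matrix the "chip" implements matches a user-chosen positive real-valued target:

import numpy as np

rng = np.random.default_rng(2)
n = 4
target = rng.uniform(0.0, 1.0, size=(n, n))   # desired positive real-valued matrix

def chip_response(settings):
    # Invented device model: each control setting tunes one matrix weight
    # through a smooth, monotonic (sigmoid) response.
    return 1.0 / (1.0 + np.exp(-settings))

settings = np.zeros((n, n))                   # initial control settings
lr = 2.0
for step in range(2000):
    out = chip_response(settings)
    err = out - target
    # Gradient of the mean-squared error through the sigmoid response.
    settings -= lr * err * out * (1.0 - out)

print("residual error:", np.abs(chip_response(settings) - target).max())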
