
All proteins are composed of chains of amino acids, which generally fold up into compact globules with specific shapes. The folding process is governed by interactions between the different amino acids—for example, some of them carry electrical charges—so the sequence determines the structure. Because the structure in turn defines a protein’s function, deducing a protein’s structure is vital for understanding many processes in molecular biology, as well as for identifying drug molecules that might bind to and alter a protein’s activity.

Protein structures have traditionally been determined by experimental methods such as x-ray crystallography and electron microscopy. But researchers have long wished to predict a protein's structure purely from its amino acid sequence—in other words, to understand and predict the process of protein folding.

For many years, computational methods such as molecular dynamics simulations struggled with the complexity of that problem. But AlphaFold bypassed the need to simulate the folding process. Instead, the algorithm could be trained to recognize correlations between sequence and structure in known protein structures and then to generalize those relationships to predict unknown structures.

Now in Quantum: by Antonio deMarti iOlius, Patricio Fuentes, Román Orús, Pedro M. Crespo, and Josu Etxezarreta Martinez https://doi.org/10.22331/q-2024-10-10-1498


Antonio deMarti iOlius1, Patricio Fuentes2, Román Orús3,4,5, Pedro M. Crespo1, and Josu Etxezarreta Martinez1

1 Department of Basic Sciences, Tecnun — University of Navarra, 20018 San Sebastián, Spain.
2 Photonic Inc., Vancouver, British Columbia, Canada.
3 Multiverse Computing, Pio Baroja 37, 20008 San Sebastián, Spain.
4 Donostia International Physics Center, Paseo Manuel de Lardizabal 4, 20018 San Sebastián, Spain.
5 IKERBASQUE, Basque Foundation for Science, Plaza Euskadi 5, 48009 Bilbao, Spain.


A team of engineers at AI inference technology company BitEnergy AI reports a method to reduce the energy needs of AI applications by 95%. The group has published a paper describing their new technique on the arXiv preprint server.

As AI applications have gone mainstream, their use has risen dramatically, leading to a notable rise in energy needs and costs. Large language models (LLMs) such as ChatGPT require a lot of computing power, which in turn means large amounts of electricity are needed to run them.

As just one example, ChatGPT now requires roughly 564 MWh daily, or enough to power 18,000 American homes. As the science continues to advance and such apps become more popular, critics have suggested that AI applications might be using around 100 TWh annually in just a few years, on par with Bitcoin mining operations.
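The arithmetic behind these figures can be checked directly. The sketch below uses only the numbers quoted above; the implied per-home figure comes out roughly consistent with typical US household usage of about 29 kWh per day.

```python
# Sanity-check the quoted energy figures (values from the article above).
DAILY_USE_MWH = 564       # reported daily consumption of ChatGPT
HOMES_POWERED = 18_000    # homes the article says this could power

# Implied consumption per home, in kWh per day
kwh_per_home_per_day = DAILY_USE_MWH * 1_000 / HOMES_POWERED
print(f"{kwh_per_home_per_day:.1f} kWh/home/day")  # ≈ 31.3

# Annualized consumption at the current rate, in TWh
annual_twh = DAILY_USE_MWH * 365 / 1_000_000
print(f"{annual_twh:.2f} TWh/year")  # ≈ 0.21, well below the projected 100 TWh
```

The gap between today's roughly 0.2 TWh per year and the projected 100 TWh illustrates how much of the concern rests on anticipated growth rather than current usage.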

Neural networks have a remarkable ability to learn specific tasks, such as identifying handwritten digits. However, these models often experience “catastrophic forgetting” when taught additional tasks: They can successfully learn the new assignments, but “forget” how to complete the original. For many artificial neural networks, like those that guide self-driving cars, learning additional tasks thus requires being fully reprogrammed.

Biological brains, on the other hand, are remarkably flexible. Humans and animals can easily learn how to play a new game, for instance, without having to re-learn how to walk and talk.

Inspired by the flexibility of human and animal brains, Caltech researchers have now developed a new type of algorithm that enables neural networks to be continuously updated with new data, learning from it without having to start from scratch. The algorithm, called the functionally invariant path (FIP) algorithm, has wide-ranging applications, from improving recommendations in online stores to fine-tuning self-driving cars.
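The forgetting effect described above is easy to reproduce. The sketch below is a minimal illustration with a linear model and plain gradient descent, not the FIP algorithm itself: after the model is trained sequentially on a second task, its error on the first task collapses.

```python
import numpy as np

# Minimal demonstration of catastrophic forgetting: a single linear model
# trained by gradient descent on task A, then (without revisiting A) on task B.
rng = np.random.default_rng(0)

def make_task(true_w):
    X = rng.normal(size=(200, 2))
    return X, X @ true_w

def train(w, X, y, steps=500, lr=0.1):
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w = w - lr * grad
    return w

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

task_a = make_task(np.array([1.0, -2.0]))
task_b = make_task(np.array([-3.0, 0.5]))

w = train(np.zeros(2), *task_a)
loss_a_before = loss(w, *task_a)   # near zero: task A is learned

w = train(w, *task_b)              # continue training on task B only
loss_a_after = loss(w, *task_a)    # task A performance collapses

print(loss_a_before, loss_a_after)
```

A continual-learning method like FIP aims to find updates that absorb task B while staying on a path in weight space where task A's outputs are preserved.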


Quantum state tomography plays a fundamental role in characterizing and evaluating the quality of quantum states produced by quantum devices. It serves as a crucial element in the advancement of quantum hardware and software, regardless of the underlying physical implementation and potential applications [1,2,3]. However, reconstructing the full quantum state becomes prohibitively expensive for large-scale quantum systems that exhibit potential quantum advantages [4,5], as the number of measurements required increases exponentially with system size.

Recent protocols try to solve this challenge through two main steps: efficient parameterization of quantum states and utilization of carefully designed measurement schemes and classical data postprocessing algorithms. For one-dimensional (1D) systems with area-law entanglement, the matrix product state (MPS) [6–12] provides a compressed representation. It requires only a polynomial number of parameters that can be determined from local or global measurement results. Two iterative algorithms using local measurements, singular value thresholding (SVT) [13] and maximum likelihood (ML) [14], have been demonstrated in trapped-ion quantum simulators with up to 14 qubits [15]. However, SVT is limited to pure states and thus impractical for noisy intermediate-scale quantum (NISQ) systems. Meanwhile, although ML can handle mixed states represented as matrix product operators (MPOs) [16,17], it suffers from inefficient classical data postprocessing.

Neuromorphic engineering is a cutting-edge field that focuses on developing computer hardware and software systems inspired by the structure, function, and behavior of the human brain. The ultimate goal is to create computing systems that are significantly more energy-efficient, scalable, and adaptive than conventional computer systems, capable of solving complex problems in a manner reminiscent of the brain’s approach.

This interdisciplinary field draws upon expertise from various domains, including neuroscience, computer science, electronics, nanotechnology, and materials science. Neuromorphic engineers strive to develop computer chips and systems incorporating artificial neurons and synapses, designed to process information in a parallel and distributed manner, akin to the brain’s functionality.
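As a concrete (if highly simplified) example of such an artificial neuron, here is a toy leaky integrate-and-fire model; real neuromorphic chips implement these dynamics in analog or digital hardware rather than software, and the parameter values below are arbitrary illustrations.

```python
# A toy leaky integrate-and-fire (LIF) neuron, one of the standard building
# blocks of neuromorphic systems. Membrane potential leaks toward rest,
# integrates input current, and emits a spike on crossing a threshold.
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Return the spike train produced by a stream of input currents."""
    v = v_rest
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak toward rest, integrate input
        if v >= v_thresh:         # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_rest            # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3] * 10))   # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because each neuron only communicates via sparse spike events, large populations of such units can operate in parallel with very low energy per event, which is the efficiency argument behind the field.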

Key challenges in neuromorphic engineering encompass developing algorithms and hardware capable of performing intricate computations with minimal energy consumption, creating systems that can learn and adapt over time, and devising methods to control the behavior of artificial neurons and synapses in real-time.