
Page 626

Mar 24, 2024

Bayesian neural networks using magnetic tunnel junction-based probabilistic in-memory computing

Posted in categories: information science, particle physics, robotics/AI

Bayesian neural networks (BNNs) combine the generalizability of deep neural networks (DNNs) with a rigorous quantification of predictive uncertainty, which mitigates overfitting and makes them valuable for high-reliability or safety-critical applications. However, the probabilistic nature of BNNs makes them more computationally intensive on digital hardware and so far, less directly amenable to acceleration by analog in-memory computing as compared to DNNs. This work exploits a novel spintronic bit cell that efficiently and compactly implements Gaussian-distributed BNN values. Specifically, the bit cell combines a tunable stochastic magnetic tunnel junction (MTJ) encoding the trained standard deviation and a multi-bit domain-wall MTJ device independently encoding the trained mean. The two devices can be integrated within the same array, enabling highly efficient, fully analog, probabilistic matrix-vector multiplications. We use micromagnetics simulations as the basis of a system-level model of the spintronic BNN accelerator, demonstrating that our design yields accurate, well-calibrated uncertainty estimates for both classification and regression problems and matches software BNN performance. This result paves the way to spintronic in-memory computing systems implementing trusted neural networks at a modest energy budget.

The powerful ability of deep neural networks (DNNs) to generalize has driven their wide proliferation in the last decade to many applications. However, particularly in applications where the cost of a wrong prediction is high, there is a strong desire for algorithms that can reliably quantify the confidence in their predictions (Jiang et al., 2018). Bayesian neural networks (BNNs) can provide the generalizability of DNNs, while also enabling rigorous uncertainty estimates by encoding their parameters as probability distributions learned through Bayes’ theorem such that predictions sample trained distributions (MacKay, 1992). Probabilistic weights can also be viewed as an efficient form of model ensembling, reducing overfitting (Jospin et al., 2022). In spite of this, the probabilistic nature of BNNs makes them slower and more power-intensive to deploy in conventional hardware, due to the large number of random number generation operations required (Cai et al., 2018a).
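The Gaussian-weight scheme described above, with a trained mean and standard deviation per weight, can be sketched in software. The device roles (mean and std encoded separately) follow the paper, but the layer sizes, parameter values, and sampling routine below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained parameters for one Bayesian layer: each weight is
# N(mu, sigma), mirroring the mean/std split that the domain-wall MTJ
# and stochastic MTJ devices encode in hardware.
mu = rng.normal(0.0, 0.5, size=(4, 3))   # trained means
sigma = np.full((4, 3), 0.1)             # trained standard deviations

def probabilistic_mvm(x, n_samples=1000):
    """Monte Carlo estimate of the layer output distribution:
    each forward pass draws fresh weights W ~ N(mu, sigma)."""
    outs = []
    for _ in range(n_samples):
        W = rng.normal(mu, sigma)        # one stochastic weight sample
        outs.append(W @ x)
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)

x = np.array([1.0, -0.5, 2.0])
mean, std = probabilistic_mvm(x)
# `std` is this layer's contribution to the predictive uncertainty
```

In the spintronic design, each of these stochastic matrix-vector products happens in a single analog read of the array rather than in a sampling loop, which is where the energy advantage comes from.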

Mar 24, 2024

Scalable Optimal Transport Methods in Machine Learning: A Contemporary Survey

Posted in categories: mathematics, robotics/AI

Nice figures in this newly published survey on Scalable Optimal Transport, with 200+ references.

Optimal Transport (OT) is a mathematical framework that first emerged in the eighteenth century and has led to a plethora of methods for answering many theoretical and applied questions. The last decade has been a witness to the remarkable contributions of this classical optimization problem to machine learning. This paper is about where and how optimal transport is used in machine learning with a focus on the question of scalable optimal transport. We provide a comprehensive survey of optimal transport while ensuring an accessible presentation as permitted by the nature of the topic and the context. First, we explain the optimal transport background and introduce different flavors (i.e. mathematical formulations), properties, and notable applications.

Mar 24, 2024

Beyond cloning: Harnessing the power of virtual quantum broadcasting

Posted in category: quantum physics

In a new study, scientists propose the concept of “virtual quantum broadcasting,” which provides a workaround to the longstanding no-cloning theorem, thereby offering new possibilities for the transmission of quantum information.

Mar 24, 2024

Nvidia Is Simulating a Copy of the Earth

Posted in categories: climatology, economics, sustainability

Chipmaker Nvidia has shown off a clone of our entire planet that could help meteorologists simulate and visualize global weather patterns at an “unprecedented scale,” according to a press release.

The “Earth climate digital twin,” dubbed Earth-2, was designed to help recoup some of the economic losses caused by climate change-driven extreme weather.

Customers can access the digital twin through an API, allowing “virtually any user to create AI-powered emulations to speed delivery of interactive, high-resolution simulations ranging from the global atmosphere and local cloud cover to typhoons and turbulence.”

Mar 24, 2024

New technique converts excess renewable energy to natural gas

Posted in categories: biological, chemistry, sustainability

Four Lawrence Livermore National Laboratory (LLNL) researchers have partnered with Los Angeles-based SoCalGas and Munich, Germany-based Electrochaea to develop an electrobioreactor to allow excess renewable electricity from wind and solar sources to be stored in chemical bonds as renewable natural gas.

When renewable electricity supply exceeds demand, electric-utility operators intentionally curtail production of renewable electricity to avoid overloading the grid. In 2020, in California, more than 1.5 million megawatt hours of renewable electricity were curtailed, enough to power more than 100,000 households for a full year.

This practice also occurs in other countries. The team’s electrobioreactor uses the curtailed renewable electricity to split water into hydrogen and oxygen. Microbes in the reactor then use the hydrogen to convert carbon dioxide into methane, the major component of natural gas. The methane can then be moved through natural gas pipelines and stored indefinitely, allowing the renewable energy to be recovered when it is most needed.
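The two-step chemistry (electrolysis, then microbial methanation via the Sabatier reaction) can be sanity-checked with back-of-envelope numbers. The electrolyzer energy cost below is an assumed round figure for illustration, not a value reported by LLNL:

```python
# Step 1: electrolysis   2 H2O -> 2 H2 + O2
# Step 2: methanation    CO2 + 4 H2 -> CH4 + 2 H2O   (the microbial step)

CURTAILED_MWH = 1.5e6            # California, 2020 (figure from the article)
KWH_PER_KG_H2 = 50.0             # assumed electrolyzer energy cost, kWh per kg H2
H2_PER_CH4 = 4 * 2.016 / 16.04   # kg of H2 consumed per kg of CH4 (stoichiometry)

h2_kg = CURTAILED_MWH * 1000 / KWH_PER_KG_H2   # MWh -> kWh -> kg H2
ch4_kg = h2_kg / H2_PER_CH4                    # kg CH4 from that hydrogen
print(f"H2 produced:  {h2_kg:.3g} kg")
print(f"CH4 produced: {ch4_kg:.3g} kg")
```

Under these assumptions the curtailed 2020 electricity would yield on the order of tens of kilotonnes of methane; real yields depend on electrolyzer and reactor efficiencies.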

Mar 24, 2024

Silicon Nanowires and Their Applications

Posted in category: futurism


Mar 24, 2024

Probabilistic Neural Computing with Stochastic Devices

Posted in categories: information science, robotics/AI

The brain has effectively proven a powerful inspiration for the development of computing architectures in which processing is tightly integrated with memory, communication is event-driven, and analog computation can be performed at scale. These neuromorphic systems increasingly show an ability to improve the efficiency and speed of scientific computing and artificial intelligence applications. Herein, it is proposed that the brain’s ubiquitous stochasticity represents an additional source of inspiration for expanding the reach of neuromorphic computing to probabilistic applications. To date, many efforts exploring probabilistic computing have focused primarily on one scale of the microelectronics stack, such as implementing probabilistic algorithms on deterministic hardware or developing probabilistic devices and circuits with the expectation that they will be leveraged by eventual probabilistic architectures. A co-design vision is described by which large numbers of devices, such as magnetic tunnel junctions and tunnel diodes, can be operated in a stochastic regime and incorporated into a scalable neuromorphic architecture that can impact a number of probabilistic computing applications, such as Monte Carlo simulations and Bayesian neural networks. Finally, a framework is presented to categorize increasingly advanced hardware-based probabilistic computing technologies.
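A common software stand-in for a stochastic device such as a low-barrier magnetic tunnel junction is the "p-bit": a bit that fluctuates between ±1 with a bias-controlled probability. The sketch below assumes the standard sigmoidal p-bit response; the bias value and sample count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_bit(bias, n=1):
    """Emit +/-1 samples with p(+1) = sigmoid(2 * bias), the standard
    p-bit model for a stochastically fluctuating device."""
    p_up = 1.0 / (1.0 + np.exp(-2.0 * bias))
    return np.where(rng.random(n) < p_up, 1, -1)

# A tunable Bernoulli source is the primitive behind Monte Carlo sampling
# on such hardware: check the empirical firing probability against theory.
samples = p_bit(bias=0.5, n=100_000)
empirical_p = np.mean(samples == 1)   # should approach sigmoid(1) ~ 0.731
```

In the co-design vision described above, the random draw comes from device physics rather than from `rng`, removing the pseudo-random-number-generation overhead entirely.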

Keywords: magnetic tunnel junctions; neuromorphic computing; probabilistic computing; stochastic computing; tunnel diodes.

© 2022 The Authors. Advanced Materials published by Wiley-VCH GmbH.

Mar 24, 2024

Emerging Artificial Neuron Devices for Probabilistic Computing

Posted in categories: biological, finance, information science, robotics/AI

Probabilistic computing with stochastic devices.


In recent decades, artificial intelligence has been successively employed in the fields of finance, commerce, and other industries. However, imitating high-level brain functions such as imagination and inference poses several challenges, as these functions involve a particular type of noise found in biological neural networks. Probabilistic computing algorithms based on restricted Boltzmann machines and Bayesian inference, implemented in silicon electronics, have progressed significantly in mimicking probabilistic inference. However, the quasi-random noise generated by additional circuits or algorithms remains a major obstacle for silicon electronics in realizing the true stochasticity of biological neuron systems. Artificial neurons based on emerging devices, such as memristors and ferroelectric field-effect transistors with inherent stochasticity, can produce uncertain non-linear output spikes, which may be the key to bringing machine learning closer to the human brain. In this article, we present a comprehensive review of recent advances in emerging stochastic artificial neurons (SANs) for probabilistic computing. We briefly introduce biological neurons, neuron models, and silicon neurons before presenting the detailed working mechanisms of various SANs. Finally, the merits and demerits of silicon-based and emerging neurons are discussed, and the outlook for SANs is presented.
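The restricted-Boltzmann-machine connection mentioned above can be illustrated with idealized stochastic neurons in software. The sigmoidal firing rule is the standard abstraction for such noisy devices; the weights here are random placeholders, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_neurons(inputs):
    """Idealized stochastic artificial neurons: fire (1) with a sigmoidal
    probability of the net input, the behavior that memristive/FeFET
    devices are proposed to realize via intrinsic noise."""
    return (rng.random(inputs.shape) < 1.0 / (1.0 + np.exp(-inputs))).astype(float)

# One Gibbs step of a tiny restricted Boltzmann machine built from them.
W = rng.normal(0, 0.5, size=(4, 3))        # visible-to-hidden couplings
b_v = np.zeros(4)                          # visible biases
b_h = np.zeros(3)                          # hidden biases

v = np.array([1.0, 0.0, 1.0, 0.0])         # current visible state
h = stochastic_neurons(v @ W + b_h)        # sample hidden units
v_new = stochastic_neurons(h @ W.T + b_v)  # reconstruct visible units
```

Here the randomness that silicon must synthesize with extra circuitry would, with SANs, come for free from the device itself.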

Keywords: brain-inspired computing, artificial neurons, stochastic neurons, memristive devices, stochastic electronics.

Continue reading “Emerging Artificial Neuron Devices for Probabilistic Computing” »

Mar 24, 2024

Cerebras Systems Unveils World’s Fastest AI Chip with Whopping 4 Trillion Transistors

Posted in categories: robotics/AI, supercomputing

Third Generation 5 nm Wafer Scale Engine (WSE-3) Powers Industry’s Most Scalable AI Supercomputers, Up To 256 exaFLOPs via 2048 Nodes.

SUNNYVALE, CALIFORNIA – March 13, 2024 – Cerebras Systems, the pioneer in accelerating generative AI, has doubled down on its existing world record for the fastest AI chip with the introduction of the Wafer Scale Engine 3. The WSE-3 delivers twice the performance of the previous record-holder, the Cerebras WSE-2, at the same power draw and for the same price. Purpose-built for training the industry’s largest AI models, the 5nm-based, 4-trillion-transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimized compute cores.
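The cluster-scale figure in the headline follows directly from the quoted per-system performance:

```python
# Consistency check of the quoted figures: 125 petaFLOPs per CS-3 system,
# scaled out to the maximum 2048-node cluster.
PFLOPS_PER_CS3 = 125
MAX_NODES = 2048
cluster_exaflops = PFLOPS_PER_CS3 * MAX_NODES / 1000  # 1 exaFLOP = 1000 petaFLOPs
print(cluster_exaflops)  # 256.0, matching the "256 exaFLOPs" claim
```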

Key Specs:

Mar 24, 2024

Sequences in the ‘Dark Genome’ Could be Used to Diagnose Cancer Earlier

Posted in category: biotech/medical

The human genome is primarily composed of long stretches of repeated nucleotides that do not code for protein (only about two percent of the human genome codes for protein). This mysterious, non-protein-coding DNA was once disregarded as junk DNA, but scientists have begun to find sequences of importance within this ‘junk,’ which is now sometimes called genomic ‘dark matter.’ Some of these sequences appear to have important regulatory functions and can control the expression of certain protein-coding genes. But studying these sequences can be extremely challenging, particularly because, unlike protein-coding genes, they cannot be examined with standard techniques.

But scientists have now found a valuable use for the dark genome. Reporting in Science Translational Medicine, researchers created a method to detect elements of the dark genome in cancerous tissue and in the bloodstream, as fragments called cell-free DNA (cfDNA). These fragments are shed from tumors and circulate in the bloodstream. The technique may eventually help scientists or clinicians identify cancer or monitor the progress of treatment.
