Archive for the ‘robotics/AI’ category: Page 6

Mar 24, 2024

A collective AI via lifelong learning and sharing at the edge

Posted in category: robotics/AI

An emerging research area in AI is developing multi-agent capabilities with collections of interacting AI systems. Andrea Soltoggio and colleagues develop a vision for combining such approaches with current edge computing technology and lifelong learning advances. The envisioned network of AI agents could quickly learn new tasks in open-ended applications, with individual AI agents independently learning and contributing to and benefiting from collective knowledge.
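
To make the sharing mechanism concrete, here is a minimal sketch of the collective-knowledge idea, assuming agents exchange learned task modules through a shared repository. The class names, the SkillLibrary store, and the random "parameters" are illustrative stand-ins, not the authors' actual system.

```python
import random

class SkillLibrary:
    """Shared repository: agents publish learned task modules and reuse others'."""
    def __init__(self):
        self.skills = {}  # task name -> learned parameters (illustrative)

    def publish(self, task, params):
        self.skills[task] = params

    def fetch(self, task):
        return self.skills.get(task)

class EdgeAgent:
    """A lifelong learner that consults the collective before training locally."""
    def __init__(self, name, library):
        self.name = name
        self.library = library

    def learn(self, task):
        params = self.library.fetch(task)
        if params is not None:
            return f"{self.name} reused collective knowledge for {task!r}"
        params = random.random()  # stand-in for costly local training
        self.library.publish(task, params)
        return f"{self.name} learned {task!r} locally and shared it"

library = SkillLibrary()
agents = [EdgeAgent(f"agent-{i}", library) for i in range(3)]
print(agents[0].learn("grasp-object"))  # learned locally and shared
print(agents[1].learn("grasp-object"))  # reused from the collective
```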

Mar 24, 2024

OpenAI Reportedly Demos GPT-5 to Prospective Customers

Posted in category: robotics/AI

OpenAI has apparently been demonstrating GPT-5, the next generation of its notorious large language model (LLM), to prospective buyers — and they’re very impressed with the merchandise.

“It’s really good, like materially better,” one CEO told Business Insider of the LLM. That same CEO added that in the demo he previewed, OpenAI tailored use cases and data modeling unique to his firm — and teased previously unseen capabilities as well.

According to BI, OpenAI is looking at a summer launch — though its sources say it’s still being trained and in need of “red-teaming,” the tech industry term for hiring hackers to try to exploit one’s wares.

Mar 24, 2024

Bayesian neural networks using magnetic tunnel junction-based probabilistic in-memory computing

Posted in categories: information science, particle physics, robotics/AI

Bayesian neural networks (BNNs) combine the generalizability of deep neural networks (DNNs) with a rigorous quantification of predictive uncertainty, which mitigates overfitting and makes them valuable for high-reliability or safety-critical applications. However, the probabilistic nature of BNNs makes them more computationally intensive on digital hardware and so far, less directly amenable to acceleration by analog in-memory computing as compared to DNNs. This work exploits a novel spintronic bit cell that efficiently and compactly implements Gaussian-distributed BNN values. Specifically, the bit cell combines a tunable stochastic magnetic tunnel junction (MTJ) encoding the trained standard deviation and a multi-bit domain-wall MTJ device independently encoding the trained mean. The two devices can be integrated within the same array, enabling highly efficient, fully analog, probabilistic matrix-vector multiplications. We use micromagnetics simulations as the basis of a system-level model of the spintronic BNN accelerator, demonstrating that our design yields accurate, well-calibrated uncertainty estimates for both classification and regression problems and matches software BNN performance. This result paves the way to spintronic in-memory computing systems implementing trusted neural networks at a modest energy budget.

The powerful ability of deep neural networks (DNNs) to generalize has driven their wide proliferation in the last decade to many applications. However, particularly in applications where the cost of a wrong prediction is high, there is a strong desire for algorithms that can reliably quantify the confidence in their predictions (Jiang et al., 2018). Bayesian neural networks (BNNs) can provide the generalizability of DNNs, while also enabling rigorous uncertainty estimates by encoding their parameters as probability distributions learned through Bayes’ theorem such that predictions sample trained distributions (MacKay, 1992). Probabilistic weights can also be viewed as an efficient form of model ensembling, reducing overfitting (Jospin et al., 2022). In spite of this, the probabilistic nature of BNNs makes them slower and more power-intensive to deploy in conventional hardware, due to the large number of random number generation operations required (Cai et al., 2018a).
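
As a software illustration of the Gaussian-weight scheme the hardware implements, here is a minimal NumPy sketch in which every forward pass draws fresh weights from N(mu, sigma), the draw that the proposed bit cell performs physically (domain-wall MTJ for the mean, stochastic MTJ for the standard deviation). Layer sizes, initial values, and the sample count are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Linear layer whose weights are Gaussian: w ~ N(mu, sigma).
    In the spintronic bit cell this draw happens in the device itself:
    a domain-wall MTJ stores mu, a stochastic MTJ adds sigma-scaled noise."""
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, size=(n_in, n_out))  # trained means
        self.sigma = np.full((n_in, n_out), 0.05)            # trained std devs

    def __call__(self, x):
        w = rng.normal(self.mu, self.sigma)  # fresh weight sample per call
        return x @ w

layer = BayesianLinear(4, 2)
x = rng.normal(size=(1, 4))

# Monte Carlo over weight samples yields a predictive mean and uncertainty.
samples = np.stack([layer(x) for _ in range(200)])
print("predictive mean:", samples.mean(axis=0))
print("predictive std :", samples.std(axis=0))
```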

Mar 24, 2024

Scalable Optimal Transport Methods in Machine Learning: A Contemporary Survey

Posted in categories: mathematics, robotics/AI

Nice figures in this newly published survey on scalable optimal transport, with 200+ references.


Optimal Transport (OT) is a mathematical framework that first emerged in the eighteenth century and has led to a plethora of methods for answering many theoretical and applied questions. The last decade has witnessed remarkable contributions of this classical optimization problem to machine learning. This paper is about where and how optimal transport is used in machine learning, with a focus on the question of scalable optimal transport. We provide a comprehensive survey of optimal transport while ensuring an accessible presentation, as permitted by the nature of the topic and the context. First, we explain the optimal transport background and introduce different flavors (i.e., mathematical formulations), properties, and notable applications.
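
The workhorse of scalable OT is the entropically regularized problem solved by Sinkhorn iterations, which surveys like this one cover in depth. Below is a minimal NumPy sketch for discrete measures with a squared-Euclidean cost; the point clouds, regularization strength, and iteration count are arbitrary choices, and production code would use a stabilized log-domain variant.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=200):
    """Entropic-regularized OT between histograms a, b with cost matrix C.
    Returns the plan P minimizing <P, C> - eps * H(P)."""
    K = np.exp(-C / eps)       # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):    # alternating projections onto the marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: two small point clouds with uniform weights.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)

P = sinkhorn(a, b, C)
print("transport cost:", (P * C).sum())
print("row marginals :", P.sum(axis=1))  # ~ a, confirming a valid plan
```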

Mar 24, 2024

Probabilistic Neural Computing with Stochastic Devices

Posted in categories: information science, robotics/AI

The brain has effectively proven a powerful inspiration for the development of computing architectures in which processing is tightly integrated with memory, communication is event-driven, and analog computation can be performed at scale. These neuromorphic systems increasingly show an ability to improve the efficiency and speed of scientific computing and artificial intelligence applications. Herein, it is proposed that the brain’s ubiquitous stochasticity represents an additional source of inspiration for expanding the reach of neuromorphic computing to probabilistic applications. To date, many efforts exploring probabilistic computing have focused primarily on one scale of the microelectronics stack, such as implementing probabilistic algorithms on deterministic hardware or developing probabilistic devices and circuits with the expectation that they will be leveraged by eventual probabilistic architectures. A co-design vision is described by which large numbers of devices, such as magnetic tunnel junctions and tunnel diodes, can be operated in a stochastic regime and incorporated into a scalable neuromorphic architecture that can impact a number of probabilistic computing applications, such as Monte Carlo simulations and Bayesian neural networks. Finally, a framework is presented to categorize increasingly advanced hardware-based probabilistic computing technologies.
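
As a concrete illustration of the co-design idea, one can model such a stochastic device as a tunable Bernoulli source (a "p-bit") and build a Monte Carlo estimator from it. The sketch below is an assumption-laden toy: the sigmoid bias response and the fixed-point sampling scheme are illustrative, not a device model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_bit(bias, n_samples):
    """Model a stochastic MTJ as a tunable random bit: the device fluctuates
    between its two resistance states with a sigmoid probability set by bias."""
    p = 1.0 / (1.0 + np.exp(-bias))
    return rng.random(n_samples) < p

def uniform_from_pbits(n, bits=16):
    """Unbiased p-bits (bias = 0, p = 0.5) stacked as a fixed-point fraction
    give uniform samples in [0, 1)."""
    draws = np.stack([p_bit(0.0, n) for _ in range(bits)])
    weights = 0.5 ** np.arange(1, bits + 1)
    return weights @ draws

# Monte Carlo with the device model: estimate pi from points in the unit square.
x, y = uniform_from_pbits(100_000), uniform_from_pbits(100_000)
print("pi estimate:", 4 * np.mean(x**2 + y**2 < 1))
```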

Keywords: magnetic tunnel junctions; neuromorphic computing; probabilistic computing; stochastic computing; tunnel diodes.

© 2022 The Authors. Advanced Materials published by Wiley-VCH GmbH.

Mar 24, 2024

Emerging Artificial Neuron Devices for Probabilistic Computing

Posted in categories: biological, finance, information science, robotics/AI

Probabilistic computing with stochastic devices.


In recent decades, artificial intelligence has been successfully employed in the fields of finance, commerce, and other industries. However, imitating high-level brain functions, such as imagination and inference, poses several challenges because these functions rely on a particular type of noise in biological neuron networks. Probabilistic computing algorithms based on restricted Boltzmann machines and Bayesian inference that use silicon electronics have progressed significantly in terms of mimicking probabilistic inference. However, the quasi-random noise generated by additional circuits or algorithms presents a major challenge for silicon electronics in realizing the true stochasticity of biological neuron systems. Artificial neurons based on emerging devices with inherent stochasticity, such as memristors and ferroelectric field-effect transistors, can produce uncertain non-linear output spikes, which may be the key to bringing machine learning closer to the human brain. In this article, we present a comprehensive review of recent advances in emerging stochastic artificial neurons (SANs) in terms of probabilistic computing. We briefly introduce biological neurons, neuron models, and silicon neurons before presenting the detailed working mechanisms of various SANs. Finally, the merits and demerits of silicon-based and emerging neurons are discussed, and the outlook for SANs is presented.

Keywords: brain-inspired computing, artificial neurons, stochastic neurons, memristive devices, stochastic electronics.
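
Here is a minimal sketch of a stochastic artificial neuron, assuming the device contributes intrinsic threshold noise: a leaky integrate-and-fire unit whose firing decision is probabilistic rather than a hard threshold. The constants and the sigmoid noise model are illustrative, not taken from any specific SAN device in the review.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticNeuron:
    """Leaky integrate-and-fire neuron with a noisy threshold, mimicking the
    intrinsic stochasticity of memristive or ferroelectric devices: the closer
    the membrane potential gets to threshold, the more likely a spike."""
    def __init__(self, leak=0.9, threshold=1.0, noise=0.2):
        self.v = 0.0
        self.leak, self.threshold, self.noise = leak, threshold, noise

    def step(self, current):
        self.v = self.leak * self.v + current  # leaky integration of input
        p_spike = 1.0 / (1.0 + np.exp(-(self.v - self.threshold) / self.noise))
        if rng.random() < p_spike:             # probabilistic firing decision
            self.v = 0.0                       # reset after a spike
            return 1
        return 0

neuron = StochasticNeuron()
spikes = [neuron.step(0.3) for _ in range(100)]
print("spike count over 100 steps:", sum(spikes))
```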

Mar 24, 2024

Cerebras Systems Unveils World’s Fastest AI Chip with Whopping 4 Trillion Transistors

Posted in categories: robotics/AI, supercomputing

Third Generation 5 nm Wafer Scale Engine (WSE-3) Powers Industry’s Most Scalable AI Supercomputers, Up To 256 exaFLOPs via 2048 Nodes.

SUNNYVALE, CALIFORNIA – March 13, 2024 – Cerebras Systems, the pioneer in accelerating generative AI, has doubled down on its existing world record for the fastest AI chip with the introduction of the Wafer Scale Engine 3. The WSE-3 delivers twice the performance of the previous record-holder, the Cerebras WSE-2, at the same power draw and for the same price. Purpose-built for training the industry's largest AI models, the 5nm-based, 4-trillion-transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimized compute cores.
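
The headline cluster figure follows directly from the per-system number, as a quick check shows:

```python
# Sanity check: 125 petaflops per CS-3 system, scaled to a 2048-node cluster.
per_system_flops = 125e15                      # 125 petaflops
cluster_flops = per_system_flops * 2048
print(f"{cluster_flops / 1e18:.0f} exaFLOPs")  # -> 256 exaFLOPs
```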

Mar 24, 2024

Apptronik to integrate Apollo humanoid with NVIDIA general-purpose foundation model

Posted in category: robotics/AI

Apptronik is working with NVIDIA’s Project GR00T to enable general-purpose humanoid robots to learn complex tasks.

Mar 24, 2024

What Is Holding Back Neuromorphic Computing?

Posted in category: robotics/AI

We discuss the commercial readiness of spiking neural networks and the potential for spiking LLMs with Intel’s Mike Davies.

Mar 24, 2024

What is neuromorphic computing and how will it impact generative AI?

Posted in category: robotics/AI

A GlobalData analyst suggests there are “limitless possibilities” when AI is modelled after the human brain via neuromorphic computing.
