
Neuromorphic nanoelectronic materials

Memristive and nanoionic devices have recently emerged as leading candidates for neuromorphic computing architectures. While top-down fabrication based on conventional bulk materials has enabled many early neuromorphic devices and circuits, bottom-up approaches based on low-dimensional nanomaterials have shown novel device functionality that often better mimics a biological neuron. In addition, the chemical, structural and compositional tunability of low-dimensional nanomaterials coupled with the permutational flexibility enabled by van der Waals heterostructures offers significant opportunities for artificial neural networks. In this Review, we present a critical survey of emerging neuromorphic devices and architectures enabled by quantum dots, metal nanoparticles, polymers, nanotubes, nanowires, two-dimensional layered materials and van der Waals heterojunctions with a particular emphasis on bio-inspired device responses that are uniquely enabled by low-dimensional topology, quantum confinement and interfaces. We also provide a forward-looking perspective on the opportunities and challenges of neuromorphic nanoelectronic materials in comparison with more mature technologies based on traditional bulk electronic materials.

Navigating the labyrinth: How AI tackles complex data sampling

Generative models have had remarkable success in applications ranging from image and video generation to music composition and language modeling. The problem is that theory lags behind practice: we know relatively little about the capabilities and limitations of generative models, and understandably, this gap can seriously affect how we develop and use them down the line.

One of the main challenges has been sampling effectively from complicated data distributions, especially given the limitations of traditional methods when dealing with the high-dimensional data commonly encountered in modern AI applications.

Now, a team of scientists led by Florent Krzakala and Lenka Zdeborová at EPFL has investigated the efficiency of modern neural network-based generative models. The study, published in PNAS, compares these contemporary methods against traditional sampling techniques, focusing on a specific class of probability distributions related to spin glasses and statistical inference problems.
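The study's exact benchmarks are not given here, but the "traditional sampling techniques" it compares against are typically Markov chain Monte Carlo methods. As a minimal, illustrative sketch (not the paper's actual setup), here is Metropolis sampling of an Ising spin glass with random Gaussian couplings, the kind of distribution the blurb mentions:

```python
import numpy as np

def metropolis_spin_glass(J, beta=1.0, steps=10_000, seed=0):
    """Metropolis sampling of an Ising spin glass with couplings J.

    Energy: E(s) = -0.5 * s @ J @ s, with J symmetric and zero diagonal.
    Flipping spin i changes the energy by dE = 2 * s_i * (J @ s)_i.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        dE = 2.0 * s[i] * (J[i] @ s)
        # Accept the flip with the Metropolis probability min(1, exp(-beta*dE))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
    return s

# Sherrington-Kirkpatrick-style random couplings (illustrative parameters)
rng = np.random.default_rng(42)
n = 50
J = rng.normal(scale=1 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
sample = metropolis_spin_glass(J, beta=0.5)
```

For rugged spin-glass energy landscapes, chains like this can take exponentially long to mix, which is precisely why the efficiency of neural sampling alternatives is an interesting question.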

New computational model of real neurons could lead to better AI

Nearly all the neural networks that power modern artificial intelligence tools such as ChatGPT are based on a 1960s-era computational model of a living neuron. A new model developed at the Flatiron Institute’s Center for Computational Neuroscience (CCN) suggests that this decades-old approximation doesn’t capture all the computational abilities that real neurons possess and that this older model is potentially holding back AI development.
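The "1960s-era computational model" referred to is the point neuron: a weighted sum of inputs passed through a single nonlinearity. A minimal sketch (the choice of ReLU here is illustrative, not from the article):

```python
import numpy as np

def point_neuron(x, w, b):
    """Classic point-neuron model: output = nonlinearity(w . x + b).
    All of a real neuron's dendritic computation is collapsed into
    this one weighted sum followed by a single nonlinearity."""
    return max(0.0, float(np.dot(w, x) + b))  # ReLU nonlinearity

x = np.array([0.5, -1.0, 2.0])   # input "firing rates" (illustrative)
w = np.array([0.8, 0.3, -0.4])   # synaptic weights
print(point_neuron(x, w, b=0.1))  # → 0.0 (summed input is negative)
```

The Flatiron work argues that real neurons perform computations richer than this single sum-and-threshold, which a model of this form cannot express.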

Atomicarchitects/Symmetry-Breaking-Discovery

Building symmetry breaking into neural networks.

Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution.

Rui Wang, Elyssa Hofgard, Han Gao, Robin Walters, Tess Smidt (MIT, June 2024). https://openreview.net/forum?id=59oXyDTLJv

Modeling symmetry breaking is key…
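The repository's actual implementation is not reproduced here, but the core idea of a relaxed group convolution can be sketched as follows: a strictly equivariant layer shares one filter across all group elements, while the relaxed layer learns a per-element deviation from that shared filter, and the size of the deviations reveals symmetry breaking. A toy version for the group of cyclic shifts acting on a 1-D signal (names and setup are my own, not the repo's API):

```python
import numpy as np

def relaxed_group_conv(x, w_shared, deltas):
    """Relaxed 'group convolution' over cyclic shifts of a 1-D signal.

    A strictly equivariant layer uses the same filter w_shared at every
    shift g; the relaxed layer uses w_g = w_shared + deltas[g]. With
    deltas == 0 this reduces to ordinary circular cross-correlation, and
    the learned magnitude of deltas measures how much symmetry is broken.
    """
    n, k = len(x), len(w_shared)
    out = np.empty(n)
    for g in range(n):  # one output per group element (shift)
        w_g = w_shared + deltas[g]
        window = np.take(x, np.arange(g, g + k), mode="wrap")
        out[g] = window @ w_g
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, -1.0])
deltas = np.zeros((4, 2))  # zero relaxation: fully equivariant case
out = relaxed_group_conv(x, w, deltas)
```

With `deltas` at zero, shifting the input cyclically shifts the output the same way (equivariance); nonzero `deltas` let the network break that constraint where the data demands it.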



Conscious AI

Turing also showed that achieving universality doesn’t require anything fancy. The basic equipment of a universal machine is no more advanced than a kid’s abacus — operations like incrementing, decrementing, and conditional jumping are all it takes to create software of any complexity: be it a calculator, Minecraft, or an AI chatbot.
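To make this concrete, here is a tiny interpreter for a Minsky-style counter machine — whose only operations are exactly the three above — adding two numbers (the instruction names and program are my own illustration):

```python
def run(program, registers):
    """A counter machine: increment, decrement, and conditional jump.
    Machines built from only these operations are Turing-complete
    (Minsky, 1961).

    Instructions:
      ("inc", r)       increment register r, continue
      ("decjz", r, t)  if register r is 0 jump to t, else decrement r
      ("halt",)
    """
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc += 1
        elif registers[op[1]] == 0:  # decjz: register is zero, jump
            pc = op[2]
        else:                        # decjz: decrement and continue
            registers[op[1]] -= 1
            pc += 1
    return registers

# Addition: drain register 1 into register 0
add = [
    ("decjz", 1, 3),  # if r1 == 0, jump to halt
    ("inc", 0),
    ("decjz", 2, 0),  # r2 stays 0, so this is an unconditional jump back
    ("halt",),
]
print(run(add, {0: 3, 1: 4, 2: 0}))  # → {0: 7, 1: 0, 2: 0}
```

Nothing here is smarter than an abacus, yet stacking enough of these steps can, in principle, express any computation.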

Likewise, consciousness might just be an emergent property of the software running AGI, much like how the hardware of a universal machine gives rise to its capabilities. Personally, I don’t buy into the idea of something sitting on top of the physical human brain — no immortal soul or astral “I” floating around in higher dimensions. It’s all just flesh and bone. Think of it like an anthill: this incredibly complex system doesn’t need some divine spirit to explain its organized society, impressive architecture, or mushroom farms. The anthill’s intricate behaviour, often referred to as a superorganism, emerges from the interactions of its individual ants without needing to be reduced to them. Similarly, a single ant wandering around in a terrarium won’t tell you much about the anthill as a whole. Brain neurons are like those ants — pretty dumb on their own, but get around 86 billion of them together, and suddenly you’ve got “I” with all its experiences, dreams, and… consciousness.

So basically, if something can think, it can also think about itself. That means consciousness is a natural part of thinking — it just comes with the territory. And if you think about it, this also means you can’t really have thinking without consciousness, which brings us back to the whole Skynet thing.
