
The incredible explosion in the power of artificial intelligence is evident in daily headlines proclaiming big breakthroughs. What are the remaining differences between machine and human intelligence? Could we simulate a brain on current computer hardware if we could write the software? What are the latest advancements in the world’s largest brain model? Participate in the discussion about what AI has done and how far it has yet to go, while discovering new technologies that might allow it to get there.

ABOUT THE SPEAKERS

CHRIS ELIASMITH is the Director of the Centre for Theoretical Neuroscience (CTN) at the University of Waterloo. The CTN brings together researchers across many faculties who are interested in computational and theoretical models of neural systems. Dr Eliasmith was recently elected to the new Royal Society of Canada College of New Scholars, Artists and Scientists, one of only 90 Canadian academics to receive this honour. He is also a Canada Research Chair in Theoretical Neuroscience. His book, ‘How to Build a Brain’ (Oxford, 2013), describes the Semantic Pointer Architecture for constructing large-scale brain models. His team built what is currently the world’s largest functional brain model, ‘Spaun’, the first to demonstrate realistic behaviour under biological constraints. This ground-breaking work was published in Science (November 2012), has been featured by CNN, BBC, Der Spiegel, Popular Science, National Geographic and CBC among many other media outlets, and was awarded the NSERC Polanyi Prize for 2015.

PAUL THAGARD is a philosopher, cognitive scientist, and author of many interdisciplinary books. He is Distinguished Professor Emeritus of Philosophy at the University of Waterloo, where he founded and directed the Cognitive Science Program. He is a graduate of the Universities of Saskatchewan, Cambridge, Toronto (PhD in philosophy) and Michigan (MS in computer science). He is a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. The Canada Council has awarded him a Molson Prize (2007) and a Killam Prize (2013). His books include: The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change (MIT Press, 2012); The Brain and the Meaning of Life (Princeton University Press, 2010); Hot Thought: Mechanisms and Applications of Emotional Cognition (MIT Press, 2006); and Mind: Introduction to Cognitive Science (MIT Press, 1996; second edition, 2005). Oxford University Press will publish his 3-book Treatise on Mind and Society in early 2019.

Some technologies are so cool they make you do a double take. Case in point: robots being controlled by rat brains. Kevin Warwick, once a cyborg and still a researcher in cybernetics at the University of Reading, has been working on creating neural networks that can control machines. He and his team have taken the brain cells from rats, cultured them, and used them as the guidance control circuit for simple wheeled robots. Electrical impulses from the bot enter the batch of neurons, and responses from the cells are turned into commands for the device. The cells can form new connections, making the system a true learning machine. Warwick hasn’t released any new videos of the rat brain robot for the past few years, but the three older clips we have for you below are still awesome. He and his competitors continue to move this technology forward – animal cyborgs are real.

The skills of these rat-robot hybrids are very basic at this point. Mainly the neuron control helps the robot to avoid walls. Yet that obstacle avoidance often shows clear improvement over time, demonstrating how networks of neurons can grant simple learning to the machines. Whenever I watch the robots in the videos below I have to do a quick reality check – these machines are being controlled by biological cells! It’s simply amazing.
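The closed loop described above, in which sensor signals stimulate the cultured neurons and the culture's firing drives the motors, can be sketched in code. This is a toy illustration only: the sensor, stimulation, and response functions are all hypothetical stand-ins, not a real multi-electrode-array API, and the culture model is just noise around the stimulus rather than an adapting network.

```python
import random

def read_sonar():
    """Hypothetical stand-in for the robot's distance sensor (metres)."""
    return random.uniform(0.0, 2.0)

def stimulate_culture(distance):
    """Map sensor distance to a stimulation rate sent to the neuron culture.
    Closer obstacles produce stronger stimulation (illustrative mapping)."""
    return max(0.0, 1.0 - distance / 2.0)

def culture_response(stim_rate):
    """Toy model of the culture's firing on two output channels.
    A real culture adapts as new synaptic connections form; here we
    simply add noise around the stimulus."""
    left = stim_rate + random.gauss(0, 0.05)
    right = stim_rate * 0.5 + random.gauss(0, 0.05)
    return left, right

def motor_command(left_rate, right_rate):
    """Turn away from the side with higher firing, as in obstacle avoidance."""
    if left_rate > right_rate:
        return "turn_right"
    elif right_rate > left_rate:
        return "turn_left"
    return "forward"

# One pass around the loop: sense -> stimulate -> record -> act
distance = read_sonar()
stim = stimulate_culture(distance)
left, right = culture_response(stim)
command = motor_command(left, right)
print(command)
```

In the real system this loop runs continuously, and the interesting behaviour (the improvement in wall avoidance) comes from the biological culture rewiring itself, which no fixed function like `culture_response` can capture.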

In Neuromorphic Computing Part 2, we dive deeper into mapping neuromorphic concepts onto chips built from silicon. Even with the state of modern neuroscience and chip design, the tools the industry is working with are simply too different from biology. Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains the process and challenges of creating a chip that can replicate some of the form and function of biological neural networks.

Mike’s leadership in this specialized field allows him to share the latest insights into the promising future of neuromorphic computing here at Intel. Let’s explore the circuit design that nature has refined over a billion years of evolution, and how today’s CMOS semiconductor manufacturing technology supports incredible computing efficiency, speed and intelligence.

Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.


Computer design has always been inspired by biology, especially the brain. In this episode of Architecture All Access, Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains how neuromorphic computing and an understanding of the principles of brain computation at the circuit level are enabling next-generation intelligent devices and autonomous systems.

Mike’s leadership in this specialized field allows him to share the latest insights into the promising future of neuromorphic computing here at Intel. Discover the history and influence of the secrets nature has evolved over a billion years, which support incredible computing efficiency, speed and intelligence.


Computer simulations of complex systems provide an opportunity to study their time evolution under user control. Simulations of neural circuits are an established tool in computational neuroscience. Through systematic simplification on spatial and temporal scales they provide important insights into the time evolution of networks, which in turn leads to an improved understanding of brain functions such as learning, memory and behavior. Simulations of large networks exploit the concept of weak scaling, in which the massively parallel biological network structure is naturally mapped onto computers with very large numbers of compute nodes. However, this approach suffers from fundamental limitations. Power consumption is approaching prohibitive levels and, more seriously, the bridging of time scales from milliseconds to years, present in the neurobiology of plasticity, learning and development, is inaccessible to classical computers. In this keynote I will argue that these limitations can be overcome by extreme approaches to weak and strong scaling based on brain-inspired computing architectures.
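To make the notion of a simplified neural-circuit simulation concrete, here is a minimal sketch of the kind of point-neuron model (leaky integrate-and-fire) that large-scale simulations distribute across many compute nodes under weak scaling. All parameters are illustrative, and a real simulation would partition the `weights` matrix and spike exchange across nodes rather than run on one machine.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100
dt = 0.1            # time step in ms
tau = 10.0          # membrane time constant in ms
v_thresh = 1.0      # spike threshold
v_reset = 0.0
steps = 1000        # 100 ms of simulated time

# Random recurrent connectivity (illustrative values)
weights = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))
v = np.zeros(n_neurons)       # membrane potentials
spike_count = 0

for _ in range(steps):
    external = rng.poisson(0.02, n_neurons) * 0.5   # background drive
    spikes = v >= v_thresh                          # detect threshold crossings
    spike_count += int(spikes.sum())
    v[spikes] = v_reset                             # reset spiking neurons
    recurrent = weights @ spikes.astype(float)      # propagate spikes
    v += dt / tau * (-v) + external + recurrent     # leaky integration

print(f"{spike_count} spikes across {n_neurons} neurons")
```

The time-scale problem mentioned above is visible even in this toy: with a 0.1 ms step, simulating the years over which developmental plasticity unfolds would require on the order of 10^11 steps, which is why the keynote argues for alternative, brain-inspired hardware.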

Bio: Karlheinz Meier received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics, with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 15-year effort to design, build and operate an electronic data-processing system providing on-the-fly data reduction by 3 orders of magnitude, enabling among other achievements the discovery of the Higgs boson. Following scientific staff positions at DESY and CERN, he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA). Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field, such as the conception of a description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems, and wafer-scale integration for their implementation. He led 2 major European initiatives, FACETS and BrainScaleS, which both demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP), which was approved in 2013. In the HBP he leads the subproject on neuromorphic computing, with the goal of establishing brain-inspired computing paradigms as tools for neuroscience and generic methods for inference from large data volumes.

A newly invented fuel cell taps into naturally present and ubiquitous microbes in the soil to generate power.

This soil-powered device, about the size of a regular paperback book, offers a viable alternative to batteries in underground sensors used for precision agriculture.

Northwestern University highlighted the durability of its robust fuel cell, showcasing its ability to withstand various environmental conditions, including both arid soil and flood-prone areas.

Interactions between RNA and proteins are the cornerstone of many important biological processes from transcription and translation to gene regulation, yet little is known about the ancient origin of said interactions. We hypothesized that peptide amyloids played a role in the origin of life and that their repetitive structure lends itself to building interfaces with other polymers through avidity. Here, we report that short RNA with a minimum length of three nucleotides binds in a sequence-dependent manner to peptide amyloids. The 3′–5′ linked RNA backbone appears to be well-suited to support these interactions, with the phosphodiester backbone and nucleobases both contributing to the affinity. Sequence-specific RNA–peptide interactions of the kind identified here may provide a path to understanding one of the great mysteries rooted in the origin of life: the origin of the genetic code.

The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.
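The machine described above, one that holds an internal model of consciousness, attributes it to itself and to others, and uses the attribution for prediction, can be caricatured in a few lines of code. This is a hedged illustration of the idea only: every class, attribute, and string below is invented for the sketch and none comes from the AST literature.

```python
class AttentionSchema:
    """Simplified, descriptive internal model of an attention state."""
    def __init__(self, target):
        self.target = target                        # what attention is directed at
        self.reported_as = "subjective awareness"   # the schema's simplified caricature

class Agent:
    def __init__(self, name):
        self.name = name
        self.self_model = None     # schema the agent attributes to itself
        self.other_models = {}     # schemas attributed to other agents

    def attend(self, target):
        # The agent models its own attention with a schema; reporting the
        # schema's content is what produces the claim of awareness.
        self.self_model = AttentionSchema(target)

    def report(self):
        if self.self_model is None:
            return f"{self.name} reports nothing"
        return (f"{self.name} claims to have "
                f"{self.self_model.reported_as} of {self.self_model.target}")

    def model_other(self, other, target):
        # Attributing the same property to another agent supports
        # prediction of that agent's behavior.
        self.other_models[other] = AttentionSchema(target)

    def predict(self, other):
        schema = self.other_models.get(other)
        if schema is None:
            return f"{self.name} cannot predict {other}"
        return f"{self.name} predicts {other} will act on {schema.target}"

robot = Agent("robot")
robot.attend("the red ball")
robot.model_other("human", "the door")
print(robot.report())
print(robot.predict("human"))
```

The point of the sketch is structural: the agent's "claim" of awareness is nothing more than a readout of its internal model, which is exactly the kind of computation the theory says a brain, or a machine, performs.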

This article is part of a special issue on consciousness in humanoid robots. The purpose of this article is to summarize the attention schema theory (AST) of consciousness for those in the engineering or artificial intelligence community who may not have encountered previous papers on the topic, which tended to be in psychology and neuroscience journals. The central claim of this article is that AST is mechanistic, demystifies consciousness and can potentially provide a foundation on which artificial consciousness could be engineered. The theory has been summarized in detail in other articles (e.g., Graziano and Kastner, 2011; Webb and Graziano, 2015) and has been described in depth in a book (Graziano, 2013). The goal here is to briefly introduce the theory to a potentially new audience and to emphasize its possible use for engineering artificial consciousness.

The AST was developed beginning in 2010, drawing on basic research in neuroscience, psychology, and especially on how the brain constructs models of the self (Graziano, 2010, 2013; Graziano and Kastner, 2011; Webb and Graziano, 2015). The main goal of this theory is to explain how the brain, a biological information processor, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is in the realm of science and engineering.