The system combines trajectory optimization and enhanced reinforcement learning to improve how ANYmal chooses its leg positions and footholds.
In the field of robotics, the development of legged robots stands as a formidable challenge: the dynamic, agile movements observed in animals are difficult to replicate with conventional engineering methods.
Researchers at ETH Zurich have now developed an innovative control framework that helps the autonomous robot ANYmal traverse challenging terrain seamlessly.
Negotiating surfaces ranging from staircases to foam blocks and rugged ground, the robotic quadruped demonstrates newfound agility and adaptability, showcasing the effectiveness of its upgraded control system.
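The article doesn't spell out the pipeline in detail, but the general idea of pairing a learned foothold selector with a model-based trajectory optimizer can be sketched in a few lines. The Python sketch below is purely illustrative: every class, function, and parameter name (FootholdPolicy, TrajectoryOptimizer, control_step, and so on) is a hypothetical stand-in, not ANYmal's actual software.

```python
# Hypothetical sketch of a hybrid control loop: a learned (RL) policy
# proposes footholds, and a model-based trajectory optimizer turns them
# into a motion plan. All names are illustrative placeholders.
import numpy as np

class FootholdPolicy:
    """Stand-in for a learned policy that scores candidate footholds."""
    def propose(self, terrain_heightmap, base_state, n_candidates=16):
        # A trained network would rank footholds by traversability;
        # this placeholder samples candidates near the nominal stance
        # and picks the one closest to it.
        nominal = base_state[:2]
        candidates = nominal + 0.1 * np.random.randn(n_candidates, 2)
        scores = -np.linalg.norm(candidates - nominal, axis=1)
        return candidates[np.argmax(scores)]

class TrajectoryOptimizer:
    """Stand-in for a model-based optimizer that produces a feasible plan."""
    def solve(self, current_state, target_foothold, horizon=20):
        # Real systems solve a constrained optimal-control problem;
        # this placeholder just interpolates toward the target.
        return np.linspace(current_state[:2], target_foothold, horizon)

def control_step(policy, optimizer, terrain, state):
    foothold = policy.propose(terrain, state)      # learned component
    trajectory = optimizer.solve(state, foothold)  # model-based component
    return trajectory[1]  # execute only the first step, then replan

# Illustrative usage with dummy state and terrain data.
state = np.zeros(6)           # e.g. base position and velocity
terrain = np.zeros((64, 64))  # e.g. a local heightmap
next_target = control_step(FootholdPolicy(), TrajectoryOptimizer(), terrain, state)
```

Replanning from only the first step of each solved trajectory, as control_step does, mirrors the receding-horizon structure that hybrid learning/optimization controllers commonly use.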
The Galaxy S24 will also have a new feature called Circle to Search, which will let users search anything on their screen using Google. Users can press the bottom edge of the screen, where the Google logo and a search bar will pop up, and draw a circle around anything they want to search. The feature will work on most content, except content protected by DRM or apps that block screenshots, such as banking apps. Once a selection is made, a panel will slide up showing the selection alongside results from Google’s Search Generative Experience (SGE), similar to an image search via Google or Lens but without needing to open another app or take a screenshot. Users will be able to circle items in YouTube videos, Instagram Stories, and more.
The Galaxy S24 will also benefit from Google’s Imagen 2, a text-to-image model that can generate realistic images from text descriptions. Imagen 2 will power the photo editing features in the Galaxy S24 Gallery app, such as the Generative Edit feature, which also debuted on the Pixel 8 series. It can automatically fill in missing parts of an image based on the surrounding context. Imagen 2 was unveiled at Google I/O last year and recently launched in preview on the web.
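Imagen 2 itself is only exposed through Google’s own products and a limited preview, so as a hedged illustration of the underlying technique, generative fill (inpainting), here is a short sketch using the open-source diffusers library instead. The checkpoint, file names, and prompt are arbitrary placeholders; this shows the general idea, not Samsung’s or Google’s implementation.

```python
# Generative fill (inpainting) sketch using the open-source diffusers
# library. Illustrates the technique only; Imagen 2's interface differs.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB")  # source photo (placeholder)
mask = Image.open("mask.png").convert("RGB")    # white pixels = region to fill

# The model synthesizes the masked region from the prompt plus the
# surrounding pixels -- the same idea as filling in missing parts of a
# photo after moving or removing a subject.
result = pipe(
    prompt="empty sandy beach, natural lighting",
    image=image,
    mask_image=mask,
).images[0]
result.save("filled.png")
```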
Nice to see successes in deep learning moving beyond transformers. This one is more accurate and scales much better without ridiculous memory requirements.
The incredible explosion in the power of artificial intelligence is evident in daily headlines proclaiming big breakthroughs. What are the remaining differences between machine and human intelligence? Could we simulate a brain on current computer hardware if we could write the software? What are the latest advancements in the world’s largest brain model? Participate in the discussion about what AI has done and how far it has yet to go, while discovering new technologies that might allow it to get there.
ABOUT THE SPEAKERS
CHRIS ELIASMITH is the Director of the Centre for Theoretical Neuroscience (CTN) at the University of Waterloo. The CTN brings together researchers across many faculties who are interested in computational and theoretical models of neural systems. Dr Eliasmith was recently elected to the new Royal Society of Canada College of New Scholars, Artists and Scientists, one of only 90 Canadian academics to receive this honour. He is also a Canada Research Chair in Theoretical Neuroscience. His book, ‘How to Build a Brain’ (Oxford, 2013), describes the Semantic Pointer Architecture for constructing large-scale brain models. His team built ‘Spaun’, currently the world’s largest functional brain model and the first to demonstrate realistic behaviour under biological constraints. This ground-breaking work was published in Science (November 2012), has been featured by CNN, BBC, Der Spiegel, Popular Science, National Geographic and CBC among many other media outlets, and was awarded the 2015 NSERC John C. Polanyi Award.
PAUL THAGARD is a philosopher, cognitive scientist, and author of many interdisciplinary books. He is Distinguished Professor Emeritus of Philosophy at the University of Waterloo, where he founded and directed the Cognitive Science Program. He is a graduate of the Universities of Saskatchewan, Cambridge, Toronto (PhD in philosophy) and Michigan (MS in computer science). He is a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. The Canada Council has awarded him a Molson Prize (2007) and a Killam Prize (2013). His books include: The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change (MIT Press, 2012); The Brain and the Meaning of Life (Princeton University Press, 2010); Hot Thought: Mechanisms and Applications of Emotional Cognition (MIT Press, 2006); and Mind: Introduction to Cognitive Science (MIT Press, 1996; second edition, 2005). Oxford University Press will publish his 3-book Treatise on Mind and Society in early 2019.
Just after filming this video, Sam Altman, CEO of OpenAI, published a blog post about the governance of superintelligence in which he, along with Greg Brockman and Ilya Sutskever, outlines their thinking about how the world should prepare for superintelligent AI. And just before filming, Geoffrey Hinton quit his job at Google so that he could speak more openly about his concerns over the imminent arrival of an artificial general intelligence, an AGI that could soon get beyond our control if it became superintelligent. So the basic idea is moving from sci-fi speculation to a plausible scenario, but how powerful will such systems be, and which of the concerns about super-AI are reasonably founded? In this video I explore the ideas around superintelligence, with Nick Bostrom’s 2014 book Superintelligence as one guide and Geoffrey Hinton’s interviews as another, to try to unpick which aspects are plausible and which are more like speculative sci-fi. I explore the dangers, such as Eliezer Yudkowsky’s notion of a rapid ‘foom’ takeover of humanity, and also look briefly at the control problem and the alignment problem. At the end of the video I make a suggestion for how we might delay the arrival of superintelligence: withholding the algorithms’ ability to improve themselves, withholding what you could call meta-level agency.
▬▬ Chapters ▬▬
00:00 — Questing for an Infinity Gauntlet
01:38 — Just human level AGI
02:27 — Intelligence explosion
04:10 — Sparks of AGI
04:55 — Geoffrey Hinton is concerned
06:14 — What are the dangers?
10:07 — Is ‘foom’ just sci-fi?
13:07 — Implausible capabilities
14:35 — Plausible reasons for concern
15:31 — What can we do?
16:44 — Control and alignment problems
18:32 — Currently no convincing solutions
19:16 — Delay intelligence explosion
19:56 — Regulating meta level agency