AI becomes the decoder to predict treatment response.
Some types of cancer can grow resistant to chemotherapy.
Working out when a cancer will stop responding to chemotherapy is tricky. Researchers and doctors can spot hints and clues that resistance is developing, but predicting the exact moment is a bit like trying to hit a bullseye blindfolded.
Now, in what could be a game-changer, scientists at the University of California San Diego School of Medicine reported today in a study that a machine learning tool may be able to predict when a cancer will become resistant to chemotherapy.
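The article doesn't describe the model the UC San Diego team used, but the general shape of such a predictor is easy to sketch. The example below is purely illustrative: it trains a generic scikit-learn classifier on made-up molecular features to label tumors as chemotherapy-resistant or responsive. None of the data, features, or numbers come from the study.

```python
# Illustrative sketch only -- not the UCSD study's actual model, features, or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical inputs: one row per tumor, columns are molecular features
# (e.g., mutation or expression measurements); label 1 = resistant to chemo.
X = rng.normal(size=(500, 40))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Evaluate how well the model separates resistant from responsive tumors.
probs = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probs):.2f}")
```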
The system combines trajectory optimization and enhanced reinforcement learning to improve how ANYmal chooses its leg positions and footholds.
In the field of robotics, legged robots remain a formidable challenge: the dynamic, agile movements seen in animals are difficult to replicate with conventional control methods.
Researchers at ETH Zurich have now developed a control framework that lets the autonomous quadruped ANYmal traverse challenging terrain smoothly.
Negotiating surfaces ranging from staircases to foam blocks and rugged terrain, this robotic quadruped demonstrates a newfound agility and adaptability, showcasing the effectiveness of its upgraded control system.
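The paper's controller isn't reproduced here, but the core idea named above, blending a model-based trajectory-optimization target with a learned reinforcement-learning score when choosing footholds, can be sketched in a few lines. Everything below (the cost terms, weights, and toy terrain) is a hypothetical illustration, not ETH Zurich's implementation.

```python
# Toy sketch: combine a trajectory-optimization cost with a learned (RL-style)
# score to pick the next foothold for one leg. Not the actual ANYmal controller.
import numpy as np

def trajectory_cost(foothold, nominal):
    """Model-based cost: penalize deviation from the optimized nominal foothold."""
    return np.linalg.norm(foothold - nominal)

def learned_score(foothold, terrain_height):
    """Stand-in for a learned value function scoring terrain quality (hypothetical)."""
    # Prefer flatter, lower-risk ground; a real policy would be a trained network.
    return -abs(terrain_height(foothold))

def choose_foothold(candidates, nominal, terrain_height, w_traj=1.0, w_rl=2.0):
    """Pick the candidate foothold with the best combined score."""
    scores = [w_rl * learned_score(c, terrain_height) - w_traj * trajectory_cost(c, nominal)
              for c in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    nominal = np.array([0.30, 0.10])                       # from trajectory optimization
    candidates = [nominal + np.array([dx, dy])             # nearby reachable footholds
                  for dx in (-0.05, 0.0, 0.05) for dy in (-0.05, 0.0, 0.05)]
    bumpy = lambda p: 0.02 * np.sin(40 * p[0]) * np.cos(40 * p[1])  # toy terrain
    print("chosen foothold:", choose_foothold(candidates, nominal, bumpy))
```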
The Galaxy S24 will also have a new feature called Circle to Search, which lets users search anything on their screen with Google. Users can press and hold the bottom edge of the screen to bring up the Google logo and a search bar, then draw a circle around anything they want to search. The feature will work on most content, except for DRM-protected content and apps that block screenshots, such as banking apps. Once the selection is made, a panel slides up showing the selection and results from Google’s Search Generative Experience (SGE), similar to an image search via Google or Lens, but without needing to open another app or take a screenshot. Users will be able to circle items in YouTube videos, Instagram Stories, and more.
The Galaxy S24 will also benefit from Google’s Imagen 2, a text-to-image model that can generate realistic images from text descriptions. Imagen 2 will power photo editing features in the Galaxy S24 Gallery app, such as Generative Edit, which also debuted on the Pixel 8 series and can automatically fill in missing parts of an image based on the surrounding context. Imagen 2 was unveiled at Google I/O last year and recently launched in preview on the web.
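Imagen 2 itself can't be called locally, but text-guided generative fill (inpainting) can be illustrated with an open-source stand-in. The sketch below uses the Hugging Face diffusers inpainting pipeline; the model name, file paths, and prompt are assumptions for illustration, not part of Samsung's or Google's feature.

```python
# Rough illustration of text-guided inpainting with an open-source model,
# analogous in spirit to Generative Edit: fill a masked region of a photo
# from a text prompt and the surrounding context.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB")   # original photo (assumed path)
mask = Image.open("mask.png").convert("RGB")     # white = region to regenerate

# The model fills the masked area so it blends with the surrounding pixels.
result = pipe(prompt="empty sandy beach, natural lighting",
              image=image, mask_image=mask).images[0]
result.save("edited.png")
```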
Nice to see successes in deep learning moving beyond transformers. This one is more accurate and scales much better without ridiculous memory requirements.
The incredible explosion in the power of artificial intelligence is evident in daily headlines proclaiming big breakthroughs. What are the remaining differences between machine and human intelligence? Could we simulate a brain on current computer hardware if we could write the software? What are the latest advancements in the world’s largest brain model? Participate in the discussion about what AI has done and how far it has yet to go, while discovering new technologies that might allow it to get there.
ABOUT THE SPEAKERS
CHRIS ELIASMITH is the Director of the Centre for Theoretical Neuroscience (CTN) at the University of Waterloo. The CTN brings together researchers across many faculties who are interested in computational and theoretical models of neural systems. Dr Eliasmith was recently elected to the new Royal Society of Canada College of New Scholars, Artists and Scientists, one of only 90 Canadian academics to receive this honour. He is also a Canada Research Chair in Theoretical Neuroscience. His book, ‘How to build a brain’ (Oxford, 2013), describes the Semantic Pointer Architecture for constructing large-scale brain models. His team built ‘Spaun’, currently the world’s largest functional brain model and the first to demonstrate realistic behaviour under biological constraints. This ground-breaking work was published in Science (November, 2012), has been featured by CNN, BBC, Der Spiegel, Popular Science, National Geographic and CBC among many other media outlets, and was awarded the NSERC Polanyi Prize for 2015.
PAUL THAGARD is a philosopher, cognitive scientist, and author of many interdisciplinary books. He is Distinguished Professor Emeritus of Philosophy at the University of Waterloo, where he founded and directed the Cognitive Science Program. He is a graduate of the Universities of Saskatchewan, Cambridge, Toronto (PhD in philosophy) and Michigan (MS in computer science). He is a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. The Canada Council has awarded him a Molson Prize (2007) and a Killam Prize (2013). His books include: The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change (MIT Press, 2012); The Brain and the Meaning of Life (Princeton University Press, 2010); Hot Thought: Mechanisms and Applications of Emotional Cognition (MIT Press, 2006); and Mind: Introduction to Cognitive Science (MIT Press, 1996; second edition, 2005). Oxford University Press will publish his 3-book Treatise on Mind and Society in early 2019.
Date/Time: Wednesday, October 17, 2018 — 7:30pm. Location: Vanstone Lecture Hall, St. Jerome’s University Academic Centre.