Summary: Scientists unraveled how animals differentiate distinct scents, even those that seem remarkably similar.
While some neurons identify different smells consistently, others respond unpredictably from trial to trial, and that variability appears to help animals tell apart very similar scents over time. The discovery, which builds on previous research in fruit flies, could improve machine-learning models: introducing similar variability might let AI mirror the fine discrimination found in nature.
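The coverage does not describe a concrete model, but the general idea, injecting variability into only part of a network, can be sketched in a few lines. The PyTorch toy below is a minimal sketch; the class name, layer sizes, noisy_fraction, and noise_std are illustrative assumptions rather than anything from the study.

```python
import torch
import torch.nn as nn

class NoisyEncoder(nn.Module):
    """Toy classifier in which a fixed subset of hidden units receives
    trial-to-trial Gaussian noise, loosely echoing the mix of reliable
    and unpredictable neurons described above. Illustrative only."""

    def __init__(self, in_dim=128, hidden_dim=64, n_classes=10,
                 noisy_fraction=0.5, noise_std=0.3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, n_classes)
        # Fixed mask selecting which hidden units are "unreliable"
        self.register_buffer(
            "noisy_mask", (torch.rand(hidden_dim) < noisy_fraction).float())
        self.noise_std = noise_std

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Inject fresh noise into the designated subset on every call
        h = h + self.noisy_mask * self.noise_std * torch.randn_like(h)
        return self.fc2(h)

# Two nearly identical "odors" as input vectors: repeated presentations
# produce slightly different responses, which training can exploit.
model = NoisyEncoder()
odor_a = torch.randn(1, 128)
odor_b = odor_a + 0.01 * torch.randn(1, 128)
print(model(odor_a), model(odor_b))
```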
The convergence of Biotechnology, Neurotechnology, and Artificial Intelligence has major implications for the future of humanity. This talk explores the long-term opportunities inherent in these fields by surveying emerging breakthroughs and their potential applications. Whether we can enjoy the benefits of these technologies depends on us: Can we overcome the institutional challenges that are slowing progress without exacerbating the civilizational risks that accompany such powerful technologies?
About the speaker: Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, as well as the Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. She advises companies and projects such as Cosmica and The Roots of Progress Fellowship, and serves on the Executive Committee of the Biomarker Consortium. She holds an MS in Philosophy & Public Policy from the London School of Economics, where she focused on AI Safety.
The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it’s working for us, not the other way around.
To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.
How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.
A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.
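The article does not specify which self-supervised objective the models used, but a common contrastive formulation captures the core idea of learning only from similarities and differences between scenes. The sketch below is a minimal, SimCLR-style illustration; the batch size, embedding dimension, and temperature are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss: embeddings of two views of the same scene
    (z1[i], z2[i]) are pulled together, while embeddings of different
    scenes in the batch are pushed apart. No labels are involved."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d)
    sim = z @ z.T / temperature                          # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive partner of row i is row i + n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Random stand-ins for encoder outputs on two augmented views of 32 scenes
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(contrastive_loss(z1, z2))
```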
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, largely neglected in deep learning and predictive coding architectures is the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today’s dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical…
Architectures in neural networks commonly assume that inference is hierarchical. In this Perspective, Suzuki et al. present the shallow brain hypothesis, a neural processing mechanism based on neuroanatomical and electrophysiological evidence that intertwines hierarchical cortical processing with a massively parallel process to which subcortical areas substantially contribute.
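To make the architectural contrast concrete, here is a toy network, not the authors' model, in which every "cortical" layer also projects directly to a shared readout, so a shallow, parallel path contributes alongside the deep hierarchy. The depth, widths, and additive combination are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShallowParallelNet(nn.Module):
    """Toy contrast to a purely hierarchical stack: each "cortical" layer
    also sends its activity straight to a shared "subcortical" readout,
    so shallow, parallel signals are combined with the deep pathway."""

    def __init__(self, in_dim=64, hidden=64, out_dim=10, depth=4):
        super().__init__()
        dims = [in_dim] + [hidden] * depth
        self.cortical = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(depth))
        self.subcortical = nn.ModuleList(
            nn.Linear(hidden, out_dim) for _ in range(depth))
        self.deep_readout = nn.Linear(hidden, out_dim)

    def forward(self, x):
        shallow = 0.0
        h = x
        for layer, shortcut in zip(self.cortical, self.subcortical):
            h = torch.relu(layer(h))
            shallow = shallow + shortcut(h)     # parallel, shallow contribution
        return self.deep_readout(h) + shallow   # deep and shallow paths combined

net = ShallowParallelNet()
print(net(torch.randn(8, 64)).shape)            # torch.Size([8, 10])
```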
The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.
The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.
“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.
IBM Research recently disclosed details about its NorthPole neural accelerator. This isn’t the first time IBM has discussed the part; IBM researcher Dr. Dharmendra Modha gave a presentation last month at Hot Chips that delved into some of its technical underpinnings.
Let’s take a high-level look at what IBM announced.
IBM NorthPole is an advanced AI chip from IBM Research that integrates processing units and memory on a single chip, significantly improving energy efficiency and processing speed for artificial intelligence tasks. It is designed for low-precision operations, making it suitable for a wide range of AI applications while eliminating the need for bulky cooling systems.
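The announcement does not detail NorthPole's datapath, but "low-precision operations" generally means computing with small integer types instead of 32-bit floats. The NumPy sketch below illustrates 8-bit inference for a single layer under a generic symmetric-quantization scheme; it is not NorthPole's actual arithmetic.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

# Toy layer y = x @ W, computed in int8 with a float rescale at the end
x = np.random.randn(1, 256).astype(np.float32)
W = np.random.randn(256, 128).astype(np.float32)

xq, sx = quantize(x)
Wq, sw = quantize(W)
y_int = xq.astype(np.int32) @ Wq.astype(np.int32)   # accumulate in int32
y = y_int * (sx * sw)                               # dequantize to float

print(np.max(np.abs(y - x @ W)))                    # small quantization error
```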
The researchers built a dynamic data acquisition platform to capture human arm motion during assembly tasks.
A team of researchers from the Beijing Institute of Technology has developed a new method to control robots that can assemble satellites in space. The technique is inspired by the human arm, which can adjust its damping to perform different tasks with precision and stability. The researchers published their findings in Cyborg and Bionic Systems.
Robotic space operations and their challenges
Space operations require robots to interact with objects in complex and dynamic environments. However, traditional robot control methods have limitations in adapting to diverse and uncertain situations and are prone to vibration, which can cause assembly failure. To overcome these challenges, the researchers proposed a human-like variable admittance control method based on the variable damping characteristics of the human arm.
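The summary does not reproduce the paper's adaptation law, but the single-axis admittance loop below sketches the general idea: the controller's virtual damping rises as motion slows, roughly the way a human arm steadies itself near contact, which suppresses vibration during fine positioning. The mass, stiffness, damping bounds, and damping rule are all illustrative assumptions.

```python
import numpy as np

def variable_admittance_step(x, v, f_ext, dt, m=2.0, k=100.0,
                             d_min=5.0, d_max=60.0, v_ref=0.2):
    """One Euler step of a single-axis admittance model
        m * a + d(v) * v + k * x = f_ext,
    where the virtual damping d grows as speed drops, so the arm is
    compliant in free motion but heavily damped during fine positioning.
    The damping rule here is illustrative, not the paper's law."""
    d = d_min + (d_max - d_min) * np.exp(-abs(v) / v_ref)
    a = (f_ext - d * v - k * x) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Compliant response to a constant 5 N contact force; the position should
# settle near f_ext / k = 0.05 m without oscillation.
x, v, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    x, v = variable_admittance_step(x, v, f_ext=5.0, dt=dt)
print(f"settled position: {x:.4f} m")
```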
ChatGPT, the AI-powered chatbot from OpenAI, far outpaces all other AI chatbot apps on mobile devices in terms of downloads and is a market leader by revenue, as well. However, it’s surprisingly not the top AI app by revenue — several photo AI apps and even other AI chatbots are actually making more money than ChatGPT, despite the latter having become a household name for an AI chat experience.
Since its launch on mobile devices in May of this year, ChatGPT’s downloads and revenue have continued to grow. In its first month, when the app was available on iOS only, it topped 3.9 million downloads, which grew to 15.1 million by June, according to an analysis of the AI app market by Apptopia. Then, following a slight dip in July, ChatGPT grew again to top 23 million downloads as of September 2023.
In addition, ChatGPT’s usage on mobile devices has similarly grown, from just over 1.34 million monthly active users in May to 38.88 million as of September.
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
Joy Buolamwini, the renowned AI researcher and activist, appears on the Zoom screen from home in Boston, wearing her signature thick-rimmed glasses.
As an MIT grad, she seems genuinely interested in seeing old covers of MIT Technology Review that hang in our London office. An edition of the magazine from 1961 asks: “Will your son get into college?”