While a federal judge advanced an infringement claim against Stability AI, he dismissed the rest of the lawsuit.
In a new study, DeepMind and colleagues at Isomorphic Labs show early results from a new version of AlphaFold that brings fully automated structure prediction of biological molecules closer to reality.
The Google DeepMind AlphaFold team and Isomorphic Labs today unveiled the latest AlphaFold model. According to the companies, the updated model can now predict the structure of almost any molecule in the Protein Data Bank (PDB), often with atomic accuracy. This development, they say, is an important step towards a better understanding of the complex biological mechanisms within cells.
Since its launch in 2020, AlphaFold has influenced protein structure prediction worldwide. The latest version of the model goes beyond proteins to cover a wide range of biologically relevant molecules, including ligands, nucleic acids, and post-translational modifications. These structures are critical to understanding biological mechanisms in cells and have been difficult to predict with high accuracy, according to DeepMind.
Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a “dog” or what it means to “jump” or “skip.” These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, “But that’s not a dog!”
Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.
When pitted against adult humans in a language generalization task, the model matched their performance. It also beat GPT-4, the large language model behind ChatGPT.
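To make the task format concrete, here is a toy sketch of the kind of few-shot compositional test used to probe this sort of generalization: a learner sees a handful of made-up words and must apply them in combinations it has never seen. The pseudowords and mini-grammar below are illustrative only and are far simpler than the benchmark actually used in the study.

```python
# Toy compositional-generalization task: new "words" map to symbols, and
# function-like words combine them. Invented for illustration; not the
# study's actual grammar, model, or evaluation.

PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}   # newly learned words

def interpret(utterance):
    """Ground-truth interpreter: 'fep' repeats the preceding word's symbol
    three times; 'blicket' swaps the symbols of the two words around it."""
    tokens = utterance.split()
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i + 1] == "fep":
            out.extend([PRIMITIVES[tokens[i]]] * 3)
            i += 2
        elif i + 1 < len(tokens) and tokens[i + 1] == "blicket":
            out.extend([PRIMITIVES[tokens[i + 2]], PRIMITIVES[tokens[i]]])
            i += 3
        else:
            out.append(PRIMITIVES[tokens[i]])
            i += 1
    return out

# A model that generalizes should get held-out combinations like these right.
assert interpret("dax fep") == ["RED", "RED", "RED"]
assert interpret("wif blicket lug") == ["BLUE", "GREEN"]
print(interpret("lug fep"))   # ['BLUE', 'BLUE', 'BLUE']
```

The point of such a test is that the correct answers for novel combinations follow from the rules, so a learner can only succeed by composing what it has learned rather than memorizing examples.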
Summary: Scientists unraveled how animals differentiate distinct scents, even those that seem remarkably similar.
While some neurons respond consistently to different smells, others respond unpredictably, and that variability helps animals tell similar scents apart over time. This discovery, inspired by previous research on fruit flies, could enhance machine-learning models.
By introducing variability, AI might mirror the discernment found in nature.
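As a loose illustration only, and not the model from the study, the sketch below injects noise into a toy classifier's hidden layer and averages decisions over repeated stochastic passes, one simple way variability can be introduced into a machine-learning system.

```python
# Speculative sketch: noisy hidden responses averaged over repeated passes,
# loosely analogous to unreliable neurons sampled over time. Illustrative
# toy code, not the circuit or model described in the research.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))      # input -> hidden weights (toy values)
W2 = rng.standard_normal((8, 2))       # hidden -> two "odor" classes

def classify(x, noise_std=0.0, passes=1):
    votes = np.zeros(2)
    for _ in range(passes):
        h = np.tanh(x @ W1 + noise_std * rng.standard_normal(8))  # noisy hidden layer
        votes[np.argmax(h @ W2)] += 1                             # one stochastic vote
    return int(np.argmax(votes))

x = rng.standard_normal(16)            # one input "odor"
print("deterministic decision:", classify(x))
print("noisy, averaged over 50 passes:", classify(x, noise_std=0.5, passes=50))
```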
The convergence of Biotechnology, Neurotechnology, and Artificial Intelligence has major implications for the future of humanity. This talk explores the long-term opportunities inherent to these fields by surveying emerging breakthroughs and their potential applications. Whether we can enjoy the benefits of these technologies depends on us: Can we overcome the institutional challenges that are slowing down progress without exacerbating civilizational risks that come along with powerful technological progress?
About the speaker: Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. She advises companies and projects, such as Cosmica, and The Roots of Progress Fellowship, and is on the Executive Committee of the Biomarker Consortium. She holds an MS in Philosophy & Public Policy from the London School of Economics, focusing on AI Safety.
The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it’s working for us, not the other way around.
To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.
How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.
A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.
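As a rough illustration of the similarity-and-difference objective described above, here is a minimal contrastive-learning sketch in the SimCLR style: two views of the same image are pulled together in embedding space while views of different images are pushed apart, with no labels involved. This is a generic example, not the specific training setup used in the MIT studies.

```python
# Minimal contrastive (NT-Xent) loss: learn from similarities and differences
# between two augmented views of each image, without labels. Generic sketch,
# not the architecture or objective from the ICoN Center papers.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """z1, z2: embeddings of two augmented views of the same batch of images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.T / temperature                    # pairwise similarities
    sim.fill_diagonal_(float("-inf"))              # ignore self-similarity
    n = z1.shape[0]
    # the positive for sample i is its other view, at index (i + n) mod 2N
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random "embeddings" standing in for an encoder's output.
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(nt_xent_loss(z1, z2).item())
```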
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, these architectures largely neglect the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today’s dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical processing integrated with a massively parallel process to which subcortical areas substantially contribute.
Architectures in neural networks commonly assume that inference is hierarchical. In this Perspective, Suzuki et al. present the shallow brain hypothesis, a neural processing mechanism based on neuroanatomical and electrophysiological evidence that intertwines hierarchical cortical processing with a massively parallel process to which subcortical areas substantially contribute.
The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.
The executive order advances the voluntary commitments for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.
“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.
IBM Research recently disclosed details about its NorthPole neural accelerator. This isn’t the first time IBM has discussed the part; IBM researcher Dr. Dharmendra Modha gave a presentation last month at Hot Chips that delved into some of its technical underpinnings.
Let’s take a high-level look at what IBM announced.
IBM NorthPole is an advanced AI accelerator from IBM Research that integrates processing units and memory on a single chip, significantly improving energy efficiency and processing speed for artificial intelligence tasks. It is designed for low-precision operations, making it suitable for a wide range of AI applications while eliminating the need for bulky cooling systems.
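To make "low-precision operations" concrete, the NumPy sketch below shows generic 8-bit weight quantization, the kind of reduced-precision arithmetic such inference accelerators exploit to cut memory traffic and energy per operation. It is an assumption-free-of-specifics illustration, not IBM's toolchain or NorthPole's actual data path.

```python
# Generic symmetric int8 weight quantization: 4x smaller weights, with a
# small accuracy cost. Illustrative NumPy only; not NorthPole's hardware flow.
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)   # float32 weights
x = rng.standard_normal((1, 256)).astype(np.float32)     # one activation row

q, scale = quantize_int8(w)
y_fp32 = x @ w.T                                          # full-precision result
y_int8 = (x @ q.T.astype(np.float32)) * scale             # low-precision result

print("int8 storage:", q.nbytes, "bytes vs", w.nbytes, "bytes for float32")
print("max absolute error:", np.abs(y_fp32 - y_int8).max())
```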