In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor, Udio, arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.

If you use the web for more than just browsing (that’s pretty much everyone), chances are you’ve had your fair share of “CAPTCHA rage”: the frustration that comes from trying to decipher a marginally legible string of letters intended to verify that you are a human. CAPTCHA, which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart,” was introduced to the Internet a decade ago and has seen widespread adoption in various forms — whether using letters, sounds, math equations, or images — even as complaints about its use continue.

A large-scale Stanford study a few years ago concluded that “CAPTCHAs are often difficult for humans.” It has also been reported that around 1 in 5 visitors will leave a website rather than complete a CAPTCHA.

A longstanding belief is that the inconvenience of CAPTCHAs is the price we all pay for keeping websites secure. But there’s no escaping the fact that CAPTCHAs are becoming harder for humans to solve and easier for artificial intelligence programs.

CRISPR was first discovered in bacteria, where it serves as a defense mechanism against invading viruses, suggesting that nature harbors a bounty of CRISPR components. For the past decade, scientists have screened different natural environments—for example, pond scum—to find other versions of the tool that could potentially increase its efficacy and precision. While successful, this strategy depends on what nature has to offer. Some benefits, such as a smaller size or greater longevity in the body, often come with trade-offs like lower activity or precision.

Rather than relying on evolution, can we fast-track better CRISPR tools with AI?

This week, Profluent, a startup based in California, outlined a strategy that uses AI to dream up a new universe of CRISPR gene editors. Based on large language models—the technology behind the popular ChatGPT—the AI designed several new gene-editing components.
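Profluent’s actual models and training code aren’t reproduced here, but the general recipe is the same one behind text LLMs: treat amino acids as tokens and sample continuations from an autoregressive model. The sketch below illustrates that idea using the Hugging Face `transformers` API; the checkpoint name is a placeholder (not Profluent’s model), and the prompt is simply the first residues of SpCas9, chosen for illustration.

```python
# Illustrative sketch: sampling candidate protein sequences from an
# autoregressive protein language model. The checkpoint name is a
# placeholder -- this is NOT Profluent's released model or API.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "hf-user/protein-lm"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Prompt with the first residues of SpCas9 and let the model continue
# the sequence, sampling with temperature to get diverse candidates.
prompt = "MDKKYSIGLDIGTNSVGWAVITDEYKV"  # start of SpCas9, for illustration
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,      # extend by up to 200 residues
    do_sample=True,          # stochastic sampling, not greedy decoding
    temperature=0.8,         # mild temperature for sequence diversity
    num_return_sequences=5,  # several candidates to screen downstream
)

for i, seq in enumerate(outputs):
    print(f"Candidate {i}: {tokenizer.decode(seq, skip_special_tokens=True)}")
```

In practice, generated candidates would still need to be filtered computationally and validated in the lab before any claim about editing activity could be made.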

Data shuttling can increase energy consumption anywhere from 3 to 10,000 times above what’s required for the actual computation, said Wang.
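Wang’s range sounds dramatic, but a back-of-envelope check with rough, frequently cited per-operation energy estimates (the 45 nm figures from Horowitz’s ISSCC 2014 talk) lands comfortably inside it. The numbers below are illustrative assumptions, not measurements from this chip.

```python
# Back-of-envelope check on the cost of data shuttling, using rough
# per-operation energy estimates (order-of-magnitude 45 nm figures
# commonly cited from Horowitz, ISSCC 2014; illustrative only).
ENERGY_PJ = {
    "fp32_multiply": 3.7,    # one 32-bit floating-point multiply
    "sram_read_32b": 5.0,    # 32-bit read from a small on-chip SRAM
    "dram_read_32b": 640.0,  # 32-bit read from off-chip DRAM
}

compute = ENERGY_PJ["fp32_multiply"]
for source in ("sram_read_32b", "dram_read_32b"):
    ratio = ENERGY_PJ[source] / compute
    print(f"Fetching a 32-bit operand via {source} costs {ratio:.0f}x the multiply itself")
```

Even this crude estimate puts an off-chip DRAM fetch at over a hundred times the cost of the arithmetic it feeds, which is exactly the overhead that computing in memory aims to eliminate.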

The chip was highly efficient when challenged with two speech recognition tasks. One, Google Speech Commands, is small but practical; here, speed is key. The other, LibriSpeech, is a mammoth corpus used to build systems that transcribe speech to text, taxing the chip’s ability to process massive amounts of data.

When pitted against conventional computers, the chip was just as accurate but finished the job faster and with far less energy, using less than a tenth of what’s normally required for some tasks.

Researchers at the University of Basel have developed a new method for calculating the phase diagrams of physical systems that works similarly to ChatGPT. In the future, this kind of artificial intelligence could even automate scientific experiments.

Since ChatGPT’s release a year and a half ago, there has been hardly anything, it seems, that cannot be created with this new form of artificial intelligence: texts, images, videos, and even music. ChatGPT is based on so-called generative models, which use complex algorithms to create something entirely new from known information.

A research team led by Professor Christoph Bruder at the University of Basel, together with colleagues at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, has now used a similar method to calculate the phase diagrams of physical systems.
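The team’s own code isn’t reproduced here, but the core idea of a generative classifier can be sketched simply: fit a generative model to measurements taken deep inside each known phase, then assign new parameter points to whichever model explains the data better. In the minimal sketch below, a plain Gaussian stands in for the deep generative models used in the actual work, and the “measurements” are synthetic.

```python
# Minimal sketch of a generative classifier for phase mapping: fit one
# generative model per known phase, then label sweep points by which
# model assigns the observed data higher likelihood. A Gaussian stands
# in for a deep generative model; the "measurements" are synthetic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def measure(g, n=200):
    """Synthetic observable whose mean shifts across a fictitious transition at g = 1."""
    return rng.normal(loc=np.tanh(5 * (g - 1.0)), scale=0.3, size=n)

# Train one generative model per phase on points deep inside each phase.
phase_a = measure(0.2)  # samples from phase A
phase_b = measure(1.8)  # samples from phase B
model_a = norm(phase_a.mean(), phase_a.std())
model_b = norm(phase_b.mean(), phase_b.std())

# Sweep the control parameter and compare log-likelihoods under each model;
# the phase boundary is where the preferred model flips.
for g in np.linspace(0.0, 2.0, 11):
    x = measure(g)
    log_a = model_a.logpdf(x).sum()
    log_b = model_b.logpdf(x).sum()
    print(f"g = {g:.1f}: predicted phase {'A' if log_a > log_b else 'B'}")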

Pliable soft materials that let robots collaborate with humans and work in disaster areas have drawn much recent attention. Creating robots that can safely aid disaster victims is one challenge; executing flexible robot control that takes advantage of the material’s softness is another. Controlling these soft dynamics well enough for practical applications, however, has remained difficult.

In collaboration with the University of Tokyo and Bridgestone Corporation, Kyoto University has now developed a method to control pneumatic artificial muscles, soft robotic actuators whose rich dynamics can themselves be exploited as a computational resource.

Artificial muscles are controlled by exploiting their rich soft-component dynamics as a computational resource. (Image: MEDICAL FIG.)
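The underlying idea, known as physical reservoir computing, is easy to illustrate in simulation: drive a fixed nonlinear dynamical system with the input and train only a cheap linear readout on its states. In the sketch below, a small random recurrent network stands in for the muscle’s pressure dynamics, and the task and all parameters are illustrative, not taken from the Kyoto study.

```python
# Sketch of "soft dynamics as a computational resource" (physical
# reservoir computing). A simulated nonlinear dynamical system stands in
# for the pneumatic artificial muscle: the input drives the reservoir,
# and only a linear readout is trained, via ridge regression.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 2000  # reservoir size, number of time steps

# Random recurrent weights, rescaled so dynamics neither die out nor blow up.
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

u = rng.uniform(0, 0.5, size=T)  # input signal ("control pressure")
target = np.roll(u, 5)           # task: recall the input delayed by 5 steps

# Drive the reservoir and record its state trajectory.
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout (ridge regression on reservoir states).
washout = 100  # discard the initial transient
X, y = states[washout:], target[washout:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

mse = np.mean((X @ W_out - y) ** 2)
print(f"delayed-recall MSE: {mse:.4f}")
```

The appeal for soft robotics is that the expensive nonlinear computation happens for free in the physical body; only the lightweight linear readout needs to be fitted.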

Our approach to analyzing and mitigating future risks posed by advanced AI models.

Google DeepMind has consistently pushed the boundaries of AI, developing models that have transformed our understanding of what’s possible. We believe that AI technology on the horizon will provide society with invaluable tools to help tackle critical global challenges, such as climate change, drug discovery, and economic productivity. At the same time, we recognize that as we continue to advance the frontier of AI capabilities, these breakthroughs may eventually come with new risks beyond those posed by present-day models.

Today, we are introducing our Frontier Safety Framework — a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. Our Framework focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. It is designed to complement our alignment research, which trains models to act in accordance with human values and societal goals, and Google’s existing suite of AI responsibility and safety practices.