Engaging in music throughout your life is associated with better brain health in older age, according to a new study published by experts at the University of Exeter.

Scientists working on PROTECT, an online study open to people aged 40 and over, reviewed data from more than a thousand participants to see how playing a musical instrument, or singing in a choir, affects brain health. More than 25,000 people have signed up for the PROTECT study, which has been running for 10 years.

The team reviewed participants’ musical experience and lifetime exposure to music, alongside results of cognitive testing, to determine whether musicality helps to keep the brain sharp in later life.

A network-theory model, tested on the work of Johann Sebastian Bach, offers tools for quantifying the amount of information delivered to a listener by a musical piece.

Great pieces of music transport the audience on emotional journeys and tell stories through their melodies, harmonies, and rhythms. But can the information contained in a piece, as well as the piece’s effectiveness at communicating it, be quantified? Researchers at the University of Pennsylvania have developed a framework, based on network theory, for carrying out these quantitative assessments. Analyzing a large body of work by Johann Sebastian Bach, they show that the framework can be used to categorize different kinds of compositions on the basis of their information content [1]. The analysis also allowed them to pinpoint certain features of musical compositions that facilitate the communication of information to listeners. The researchers say that the framework could lead to new tools for the quantitative analysis of music and other forms of art.

To tackle complex systems such as musical pieces, the team turned to network theory—which offers powerful tools to understand the behavior of discrete, interconnected units, such as individuals during a pandemic or nodes in an electrical power grid. Researchers have previously attempted to analyze the connections between musical notes using network-theory tools. Most of these studies, however, ignore an important aspect of communication: the flawed nature of perception. “Humans are imperfect learners,” says Suman Kulkarni, who led the study. The model developed by the team incorporated this aspect through the description of a fuzzy process through which a listener derives an “inferred” network of notes from the “true” network of the original piece.
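The idea of a "true" versus an "inferred" network can be sketched in a few lines of code. The snippet below is an illustrative toy, not the authors' actual model: it builds a directed network of note-to-note transitions from a melody, then derives a "fuzzy" inferred network by blending each true transition probability with uniform noise, a simple stand-in for an imperfect listener (the fuzziness parameter `eta` is an assumption of this sketch).

```python
from collections import Counter, defaultdict

def transition_network(notes):
    """Build the 'true' network: counts of directed note-to-note transitions."""
    return Counter(zip(notes, notes[1:]))

def inferred_network(edges, eta=0.2):
    """Derive a fuzzy 'inferred' network: each true transition probability
    is blended with uniform noise over the source note's outgoing edges.
    eta = 0 models a perfect listener; larger eta means fuzzier perception."""
    out = defaultdict(dict)
    for (a, b), n in edges.items():
        out[a][b] = n
    inferred = {}
    for a, targets in out.items():
        total = sum(targets.values())       # outgoing transitions from note a
        k = len(targets)                    # number of distinct successors
        for b, n in targets.items():
            true_p = n / total
            inferred[(a, b)] = (1 - eta) * true_p + eta / k
    return inferred

melody = ["C", "E", "G", "E", "C", "E", "G", "C"]
true_net = transition_network(melody)
fuzzy_net = inferred_network(true_net, eta=0.2)
```

Note that the blend keeps each note's outgoing probabilities summing to one, so the inferred network is still a valid transition model, just a distorted one.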

Google has launched Gemini, a new artificial intelligence system that can seemingly understand and speak intelligently about almost any kind of prompt—pictures, text, speech, music, computer code, and much more.

This type of AI system is known as a multimodal model. It’s a step beyond just being able to handle text or images like previous algorithms. And it provides a strong hint of where AI may be going next: being able to analyze and respond to real-time information from the outside world.

Although Gemini’s capabilities might not be quite as advanced as they seemed in a viral video, which was edited from carefully curated text and still-image prompts, it is clear that AI systems are rapidly advancing. They are heading towards the ability to handle more and more complex inputs and outputs.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The world’s largest music label has yanked its artists’ music off TikTok
Universal Music Group claims TikTok is unwilling to compensate musicians appropriately. (The Guardian)
+ Taylor Swift fans are kicking off. (Wired $)
+ Indie record labels don’t like the sound of Apple’s pay plans either. (FT $)