
Frank Sinatra’s voice has been used on a version of the hip-hop song “Gangsta’s Paradise,” while Johnny Cash’s has been deployed on the pop single “Barbie Girl.” A YouTube user called PluggingAI offers songs imitating the voices of the deceased rappers Tupac and Notorious B.I.G.

“An artist’s voice is often the most valuable part of their livelihood and public persona, and to steal it, no matter the means, is wrong,” Universal Music general counsel Jeffrey Harleston told US lawmakers last month.

Discussions between Google and Universal Music are at an early stage, and no product launch is imminent, but the goal is to develop a tool for fans to create these tracks legitimately, and pay the owners of the copyrights for it, said people close to the situation. Artists would have the choice to opt in, the people said.


The discussions, confirmed by four people familiar with the matter, aim to strike a partnership for an industry that is grappling with the implications of new AI technology.

HBP researchers from Germany performed detailed cytoarchitectonic mapping of distinct areas in a human cortical region called the frontal operculum and, using connectivity modelling, linked the areas to a variety of functions including sexual sensation, muscle coordination, and music and language processing.

The study contributes to unravelling the relationship between the human brain’s structure and function, and is the first proof-of-concept of structural and functional connectivity analysis of the frontal operculum. The newly identified cytoarchitectonic areas have been made publicly available as part of the Julich-Brain Atlas on the EBRAINS platform, inviting future research to further characterise this brain region.

Based on cell-body stained histological sections in ten postmortem brains (five females and five males), HBP researchers from Heinrich Heine University Düsseldorf and Research Centre Jülich identified three new areas in the frontal operculum: Op5, Op6 and Op7. Each of these areas had a distinct cytoarchitecture. Connectivity modelling showed that each area could be ascribed a distinct functional role.

Summary: A recent study reveals that everyday pleasures like music and coffee can significantly enhance cognitive performance.

Utilizing groundbreaking brain-monitoring technology, the study examined brain activity during cognitive tests under various stimulants, including music, coffee, and perfume. Results indicated increased “beta band” brain wave activity, linked to peak cognitive performance, when subjects engaged with music or consumed coffee.

AI-generated music, in particular, showed significant performance boosts, opening new avenues for exploration.

Can AI capture the emotion that a singer today can convey, or dupe us into believing it’s not human? Can Ronnie James Dio’s voice be brought back from the dead? In this episode of The Singing Hole, we explore where AI’s technology is today, how creators are harnessing it and how we can better prepare for the future of AI in music.


On Wednesday, Meta announced it is open-sourcing AudioCraft, a suite of generative AI tools for creating music and audio from text prompts. With the tools, content creators can input simple text descriptions to generate complex audio landscapes, compose melodies, or even simulate entire virtual orchestras.

AudioCraft consists of three core components: AudioGen, a tool for generating various audio effects and soundscapes; MusicGen, which can create musical compositions and melodies from descriptions; and EnCodec, a neural network-based audio compression codec.

In an art world conquered by Artificial Intelligence, Claire takes a final stand against the new status quo.

A sci-fi short film created with the help of AI.


Meta has released AudioCraft, a new open-source generative AI framework that can produce music from simple text prompts. AudioCraft is based on a dynamic framework that enables high-quality, realistic audio and music generation from text-based user inputs. It aims to revolutionize music generation by empowering professional musicians to explore new compositions, indie game developers to enhance their virtual worlds with sound effects, and small business owners to add soundtracks to their Instagram posts, all with ease.


AudioCraft is a collection of three robust models: MusicGen, AudioGen and EnCodec. While MusicGen uses text-based user inputs to generate music, AudioGen performs a similar role for ambient sounds. They are trained on Meta-owned and specifically licensed music and on public sound effects, respectively. Alongside them, the company has released an improved version of the EnCodec decoder, which allows higher-quality music generation with fewer artifacts, together with the pre-trained AudioGen model and all AudioCraft model weights and code.

Great for if you forget, but I’m sure they’re gonna make a dystopian movie or few about this. I don’t remember if it was a TV series or a movie I saw a trailer for, but in it they had a thing similar to the TSA where, as you go from city to city, the AI or computer reads your mind, and if you had any rebellious or offensive thoughts toward the government, they’d arrest you. Scary, huh! I myself have joked that if con is the opposite of pro, then Congress is the opposite of progress. Haha.


By examining a person’s brain activity, artificial intelligence (AI) can produce a song that matches the genre, rhythm, mood and instrumentation of music that the individual recently heard.

Scientists have previously “reconstructed” other sounds from brain activity, such as human speech, bird song and horse whinnies. However, few studies have attempted to recreate music from brain signals.

The day is fast approaching when generative AI won’t only write and create images in a convincingly human-like style, but compose music and sounds that pass for a professional’s work, too. This morning, Meta announced AudioCraft, a framework to generate what it describes as “high-quality,” “realistic” audio and music from short text descriptions, or prompts. It’s not Meta’s first foray into audio generation — the tech giant open-sourced an AI-powered music generator, MusicGen, in June — but Meta claims that it’s made advances that vastly improve the quality of AI-generated sounds, such as dogs barking, cars honking and footsteps on a wooden floor.

In a blog post shared with TechCrunch, Meta…