Also some stories from my childhood. Art as a live service. I’m wrong a lot so maybe I’m wrong about this stuff.
Music: “Un coin tranquille — Instrumental Version” by Nono feat. Anat Moshkovski. I got it on Artlist (like most of the music on the channel), a royalty-free library that, as I understand it, pays its artists pretty well.
AI is being used to generate everything from images to text to artificial proteins, and now another thing has been added to the list: speech. Last week researchers from Microsoft released a paper on a new AI called VALL-E that can accurately simulate anyone’s voice based on a sample just three seconds long. VALL-E isn’t the first speech simulator to be created, but it is built differently from its predecessors, and it could carry a greater risk of misuse.
Most existing text-to-speech models use waveforms (graphical representations of sound waves as they move through a medium over time) to create fake voices, tweaking characteristics like tone or pitch to approximate a given voice. VALL-E, though, takes a sample of someone’s voice and breaks it down into components called tokens, then uses those tokens to create new sounds based on the “rules” it already learned about this voice. If a voice is particularly deep, or a speaker pronounces their A’s in a nasal-y way, or they’re more monotone than average, these are all traits the AI would pick up on and be able to replicate.
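To make the token idea concrete: VALL-E’s actual tokens come from a learned neural codec (Meta’s EnCodec), but the underlying concept of turning continuous audio into discrete tokens can be sketched with a toy nearest-neighbor quantizer. Everything here is invented for illustration (the codebook values, the fake waveform) and is not how VALL-E itself is implemented:

```python
# Toy illustration of audio "tokenization": real systems like EnCodec use
# learned neural codebooks over short frames; here we hand-pick a tiny
# codebook and snap each sample to its nearest entry. Purely illustrative.

def tokenize(samples, codebook):
    """Map each audio sample to the index of the nearest codebook value."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - s))
            for s in samples]

def detokenize(tokens, codebook):
    """Reconstruct an (approximate) waveform from token indices."""
    return [codebook[t] for t in tokens]

codebook = [-0.75, -0.25, 0.0, 0.25, 0.75]   # invented 5-entry codebook
samples = [0.8, 0.1, -0.3, 0.0, 0.6]         # fake waveform snippet

tokens = tokenize(samples, codebook)
print(tokens)                       # discrete tokens a model could predict
print(detokenize(tokens, codebook)) # lossy reconstruction from tokens
```

Once audio is a sequence of discrete tokens like this, a model can learn statistical “rules” over token sequences (the way language models do over words) and generate new sequences in the same voice.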
The model is based on a technology called EnCodec by Meta, which was released this past October. The tool uses a three-part system to compress audio to one-tenth the size of an MP3 with no loss in quality; its creators intended one of its uses to be improving the quality of voice and music on calls made over low-bandwidth connections.
Summary: Researchers explore why some songs constantly get stuck in our heads and why these “hooks” are the guiding principle for modern popular music.
Source: University of Wollongong.
“Hey, I just met you, and this is crazy… But here’s my number, so call me, maybe.”
Contents:
00:00 — Lunar Glass & Lunar Vehicle (Music: “Lunar City”)
05:01 — HEXATRACK Space Express Concept (Music: “Constellation”)
10:30 — Mars Glass, Dome City, and Martian Terraforming (Music: “Martian”)
13:54 — Beyond — Proxima Centauri, Tau Ceti e, the TRAPPIST-1 system, and beyond (Music: “Neptune”)
HEXATRACK Space Express Concept designed and created by Yosuke A. Yamashiki, Kyoto University. Lunar Glass & Mars Glass designed and created by Takuya Ono, Kajima Co. Ltd. Visual effects and detailed design by Junya Okamura. Concept advisor: Naoko Yamazaki, Astronaut, SIC Human Spaceology Center, GSAIS, Kyoto University. VR of Lunar & Mars Glass created by Natsumi Iwato and Mamiko Hikita, Kyoto University. VR contents of Lunar & Mars Glass by Shinji Asano, Natsumi Iwato, Mamiko Hikita, and Junya Okamura. Daidaros concept by Takuya Ono. Terraformed Mars designed by Fuka Takagi & Yosuke A. Yamashiki. Exoplanet images created by Ryusuke Kuroki, Fuka Takagi, Hiroaki Sato, Ayu Shiragashi, and Y. A. Yamashiki. All music (“Lunar City,” “Constellation,” “Martian,” “Neptune”) composed and performed by Yosuke Alexandre Yamashiki.
Summary: Combining neuroimaging and EEG data, researchers recorded the neural activity of people while they listened to a piece of music. Using machine learning, they translated the data to reconstruct and identify the specific piece of music each test subject was listening to.
Source: University of Essex.
A new technique for monitoring brain waves can identify the music someone is listening to.
Sign up for a Curiosity Stream subscription and also get a free Nebula subscription (the streaming platform built by creators) here: https://curiositystream.com/isaacarthur. The Omega Point Cosmological Theory argues that it may be possible to continue existing until the End of Time itself.
Experience the beauty of AI-generated creativity with “Fantasy Lost,” an original composition by Raymond Miller. Witness the power of Artificial Intelligence as it “humanizes” the performance, bringing a unique and compelling twist. Accompanied by artwork of beautiful Tolkienesque elven women rendered by Stable Diffusion 2.1 and a helpful scrolling score and lighted keyboard for anyone who wishes to play the piece, this video is a short jaunt through an otherworldly musical and visual odyssey. Join us at Creative AI channel and explore the endless possibilities of AI in art and music.
A new study led by scientists at Rutgers University has uncovered new insights into the underlying brain mechanisms of autism spectrum disorder.
Autism Spectrum Disorder (ASD) is a complex developmental disorder that affects how a person communicates and interacts with others. It is characterized by difficulty with social communication and interaction, as well as repetitive behaviors and interests. ASD can range from mild to severe, and individuals with ASD may have a wide range of abilities and challenges. It is a spectrum disorder because the symptoms and characteristics of ASD can vary widely from person to person. Some people with ASD are highly skilled in certain areas, such as music or math, while others may have significant learning disabilities.
Visit https://brilliant.org/isaacarthur/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription. A day may come when our technology permits vast prosperity for everyone, with robots and other automation producing plenty, but if that day never comes, what will life be like?