
Actor and Investor Ashton Kutcher Lauds OpenAI’s Video Model Sora

Basically confirms what’s been said here: AI will take over filmmaking by 2029/2030.


Actor and entrepreneur Ashton Kutcher lauded OpenAI’s generative AI video tool Sora at a recent event. “I’ve been playing around with Sora, this latest thing that OpenAI launched that generates video,” Kutcher said. “I have a beta version of it, and it’s pretty amazing. Like, it’s pretty good.”

He explained that users specify the shot, scene, or trailer they desire. “It invents that thing. You haven’t shot any footage, the people in it don’t exist,” he said.

According to Kutcher, Sora can create realistic 10- to 15-second videos, though it still makes minor mistakes and lacks a full understanding of physics. However, he said the technology has come a long way in just one year, with significant leaps in quality and realism.

Colonizing Ganymede

Ganymede is an enormous moon, larger than any other we’ve found, including our own, and may one day be the centerpiece of wider human settlements around Jupiter. SFIA episode 449 (May 30, 2024), written, produced, and narrated by Isaac Arthur. More at http://www.isaacarthur.net.

The Frequencies of Cognition: Exploring How Our Brains Differentiate Sounds

A study shows our brains use basic sound rates and patterns to distinguish music from speech, offering insights that could enhance therapies for language disorders such as aphasia.

Music and speech are among the most frequent types of sounds we hear. But how do our brains tell the difference between the two?

An international team of researchers mapped out this process through a series of experiments, yielding insights that could help optimize therapeutic programs that use music to help people with aphasia regain the ability to speak. This language disorder affects more than 1 in 300 Americans each year, including Wendy Williams and Bruce Willis.
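
One common way to make “sound rate” concrete is the amplitude-modulation rate: how quickly a clip’s loudness envelope fluctuates. The minimal Python sketch below estimates that rate for an audio file; it is a generic signal-processing illustration, not the study’s actual analysis pipeline, and the file name “clip.wav” is hypothetical.

```python
# Rough sketch: estimate the dominant amplitude-modulation rate of an audio clip.
# Generic DSP illustration only -- not the study's analysis pipeline.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
from scipy.io import wavfile

rate, audio = wavfile.read("clip.wav")   # hypothetical audio file
audio = audio.astype(float)
if audio.ndim > 1:                        # collapse stereo to mono
    audio = audio.mean(axis=1)

# Amplitude envelope via the analytic signal.
envelope = np.abs(hilbert(audio))

# Keep only slow envelope fluctuations (below ~20 Hz), the range usually
# discussed when comparing speech and music envelopes.
b, a = butter(4, 20 / (rate / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# Dominant modulation frequency from the envelope's spectrum.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), d=1 / rate)
band = (freqs > 0.5) & (freqs < 20)       # ignore DC and fast ripple
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Dominant modulation rate: {peak_hz:.1f} Hz")
```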

How Scientists Took a Picture of the Big Bang

Scientists “took a picture” of the Big Bang by capturing the cosmic microwave background (CMB) radiation, which is like the afterglow of the Big Bang. They used satellites such as the Cosmic Background Explorer (COBE) and the Planck spacecraft to measure this ancient light. These instruments detected faint microwave signals that have been traveling through space for about 13.8 billion years. By analyzing these signals, scientists created a detailed map of the early universe, showing tiny temperature fluctuations. This “picture” helps us understand the universe’s origins and how it has evolved over time.

Credits: Galaxy cluster Abell image: NASA Hubble, CC BY 2.0. Cosmic microwave background: ESA and the Planck Collaboration, CC BY 4.0. Additional material: NASA’s Goddard Space Flight Center. Animation created by Bright Side.

For more videos and articles visit: http://www.brightside.me
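
The “detailed map” mentioned above is, in practice, a sky map of tiny temperature differences, and much of the “analysis” consists of turning such a map into an angular power spectrum. As a minimal, hedged illustration of that step, the sketch below uses the open-source healpy library on a synthetic toy sky (not real COBE or Planck data): it simulates fluctuations from a made-up spectrum and then recovers the spectrum from the resulting map.

```python
# Toy illustration of the map -> power-spectrum step used on CMB data.
# Uses a synthetic sky, not actual COBE/Planck measurements.
import numpy as np
import healpy as hp

nside = 64                      # HEALPix resolution parameter
lmax = 3 * nside - 1

# A made-up input spectrum, roughly falling with multipole l.
ell = np.arange(lmax + 1)
cl_in = 1.0 / (ell + 10.0) ** 2

# Simulate a temperature map with those fluctuations...
cmb_map = hp.synfast(cl_in, nside, lmax=lmax)

# ...and recover the angular power spectrum from the map,
# the quantity analyses of real CMB maps compute.
cl_out = hp.anafast(cmb_map, lmax=lmax)
print("First few recovered C_l values:", cl_out[:5])
```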

Alternative Intelligence: The Other A.I.

We think very highly of the human brain; after all, it’s what lets us think about anything in the first place. But Nature is vast, and our primate brains are not the be-all and end-all of neural engineering. SFIA episode 448 (May 23, 2024), produced and narrated by Isaac Arthur, written by Erik Eldritch and Isaac Arthur. More at http://www.isaacarthur.net.

This Interactive Museum Concept Lets You Experience Sound Paintings

In February, we covered a curious setup by visual artist Steven Mark Kübler that lets you control digital particles in real time by blowing air into a sensor. But what if a whole museum were filled with such unusual interactive artworks?

This is what Kübler has been wondering as well. He presented a concept for a room filled with sound paintings that respond to viewers’ actions. You can see him blowing on the sensor to make particles in a tank-like container splash and part like water. The experience was built with an Arduino controller and the TouchDesigner visual development platform.
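
As a rough sketch of how such a pipeline can be wired together (an illustration only, not Kübler’s actual code; the serial port name and the one-reading-per-line protocol are assumptions), an Arduino can stream breath-sensor readings over USB serial while a script on the computer normalizes them into a value the particle system reacts to. TouchDesigner can typically read the serial stream directly, but the same idea in plain Python with pyserial looks like this:

```python
# Sketch of the sensor-to-visual pipeline described above: an Arduino
# streams breath-sensor readings over USB serial, and each reading is
# mapped to a "force" value a particle system could consume.
# Port name and line protocol are assumptions, not the artist's actual setup.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical; on Windows this would be e.g. "COM3"
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            raw = float(line)          # e.g. 0-1023 from analogRead()
        except ValueError:
            continue
        force = raw / 1023.0           # normalize to 0..1
        # Hand `force` to the particle system (in TouchDesigner this would
        # be a channel the particle operators reference).
        print(f"breath force: {force:.2f}")
```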

Beyond Imagination: AI, Art And The Ownership Dilemma

Aaron Vick is a multi-x founder, former CEO, best-selling author, process and workflow nerd and early-stage/growth advisor focused on Web3.

The age of artificial intelligence (AI) is transforming the landscape of creativity, challenging our understanding of creator rights and digital identity. As AI becomes an integral part of the creative process, collaborating with human minds to push the boundaries of imagination and innovation, we find ourselves in a new era that demands reevaluating the essence of authorship.

This AI renaissance is not just about the tools we use to create; it is about the fundamental shift in how we perceive and value creativity. In a world where AI can generate art, music and literature that rivals the works of human creators, we must reconsider what it means to be an author, an artist or a creator. The lines between human and machine creativity are blurring, giving rise to new forms of expression and collaboration that were once unimaginable.

AI Can Now Generate Entire Songs on Demand. What Does This Mean for Music as We Know It?

In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor, Udio, arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.
