
As those who have read this column over time understand, I have a soapbox that involves authors, whether academics or consultants, pandering to management rather than teaching them. Sadly, Age of Invisible Machines is another example.


The second, and larger, issue was mentioned up top. Inventors have a habit, from long before Alfred Nobel, of ignoring the consequences of their inventions. The excuse is the same one scientists often give: that it’s not up to them to decide on the uses and societal impact; they’re just discovering and inventing things. While that may be true for theoretical science, it is far past time for technologists focused on applications that directly impact society to give up that attempt to absolve themselves of societal impact.

The ethical AI movement is simply an extension of regular movements in society, movements that try to understand how change affects those societies, and to do so from the beginning. Any good programmer considers system issues from the design phase; waiting until debugging is too late to create an effective system. Artificial intelligence will clearly impact society in major ways. It will redefine who can work and force society to address a change in the definition of work itself. It also ties into the overvaluing of stocks: prices are driven by the promise of solutions, most people lack an understanding of what those solutions mean, and only a very few truly grasp what that implies.

Society is in a new Gilded Age. The first one led to regulations and protections that have been gutted over the last forty years. This new one presents even more challenges and dangers than those created by the industrial revolutions. Nobody should be talking about systems with such an enormous potential impact on society without talking about those impacts. This book takes the outdated approach of ignoring society while pushing for major societal disruptions. That means I cannot recommend it.

The emergence in the last week of a particularly effective voice synthesis machine learning model called VALL-E has prompted a new wave of concern over the possibility of deepfake voices made quick and easy (quickfakes, if you will). But VALL-E is more iterative than breakthrough, and its capabilities aren’t as new as you might think. Whether that means you should be more or less worried is up to you.

Voice replication has been a subject of intense research for years, and the results have been good enough to power plenty of startups, like WellSaid, Papercup and Respeecher. The latter is even being used to create authorized voice reproductions of actors like James Earl Jones. Yes: from now on, Darth Vader will be AI-generated.

VALL-E, posted on GitHub by its creators at Microsoft last week, is a “neural codec language model” that uses a different approach to rendering voices than many before it. Its larger training corpus and some new methods allow it to create “high-quality personalized speech” using just three seconds of audio from a target speaker.
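As a rough illustration of why such a short prompt can suffice (a back-of-the-envelope sketch under my own assumptions, not figures from Microsoft’s paper): a neural audio codec turns a waveform into a compact grid of discrete tokens, so three seconds of speech compresses to only a few hundred tokens per quantizer stream, a prompt small enough for a language model to condition on. The 75 Hz frame rate and eight quantizer streams below are assumptions typical of EnCodec-style codecs.

```python
# Back-of-the-envelope token count for a short voice prompt under a
# neural audio codec. frame_rate_hz and num_quantizers are assumed
# values typical of EnCodec-style codecs, not figures from the paper.

def prompt_token_count(seconds: float, frame_rate_hz: int = 75,
                       num_quantizers: int = 8) -> int:
    """Total codec tokens for a clip: frames per stream x parallel streams."""
    frames = int(seconds * frame_rate_hz)
    return frames * num_quantizers

if __name__ == "__main__":
    frames = int(3 * 75)
    total = prompt_token_count(3)
    print(f"3 s prompt -> {frames} frames per stream, {total} tokens total")
```

On these assumptions, a three-second prompt is roughly 225 frames per stream, on the order of a couple of thousand tokens in total, which helps explain how a sequence model can treat a speaker prompt as just another short prefix.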

Unlike James Webb, the Habitable Worlds Observatory will be serviceable by robots in space.

NASA has revealed new details about the successor to the $10 billion James Webb Space Telescope. The multi-billion dollar Habitable Worlds Observatory (HWO) will be tasked with searching for Earth-like exoplanets from space, and it is likely to launch at some point in the early 2040s. The new details came to light during this week’s meeting of the American Astronomical Society, as per a Science report.


Microsoft’s new voice-cloning AI can simulate a speaker’s voice with remarkable accuracy — and all it needs to get started is a three-second sample of them talking.

Voice cloning 101: Voice cloning isn’t new. Google the term, and you’ll get a long list of links to websites and apps offering to train an AI to produce audio that sounds just like you. You can then use the clone to hear yourself “read” any text you like.

For a writer, this can be useful for creating an author-narrated audio version of their book without spending days in a recording studio. A voice actor, meanwhile, might clone their voice so that they can rent out the AI for projects they don’t have time to tackle themselves.

Visit https://brilliant.org/isaacarthur/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription.
A day may come when our technology permits vast prosperity for everyone, with robots and other automation producing plenty, but if that day never comes, what will life be like?

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE

Listen to or download the audio of this episode from SoundCloud:
Audio-only version: https://soundcloud.com/isaac-arthur-148927746/what-if-we-nev…t-scarcity.
Narration-only version: https://soundcloud.com/isaac-arthur-148927746/what-if-we-nev…ation-only.

Credits:
What If We Never Become Post Scarcity?
Science & Futurism with Isaac Arthur.
Episode 377, January 12, 2023
Written, Produced & Narrated by Isaac Arthur.

Editors:
Briana Brownell.
David McFarlane.
Konstantin Sokerin.

Music Courtesy of Epidemic Sound http://epidemicsound.com/creator.