Reinforcement learning is terrible — but everything else is worse.
Karpathy’s sharpest takes yet on AGI, RL, and the future of learning.
Andrej Karpathy’s vision of AGI isn’t a bang — it’s a gradient descent through human history.
Karpathy on AGI & Superintelligence.
* AGI won’t be a sudden singularity — it will blend into centuries of steady progress (~2% GDP growth).
* Superintelligence is uncertain and likely gradual, not an instant “explosion.”
A self-learning memristor is our closest step yet to recreating synapses in the human brain.
To try Brilliant for free, visit https://brilliant.org/APERTURE/ or scan the QR code onscreen. You’ll also get 20% off an annual premium subscription.
AI experts from all around the world believe that given its current rate of progress, by 2027, we may hit the most dangerous milestone in human history. The point of no return, when AI could stop being a tool and start improving itself beyond our control. A moment when humanity may never catch up.
00:00 The AI Takeover Is Closer Than You Think.
01:05 The rise of AI in text, art & video.
02:00 What is the Technological Singularity?
04:06 AI’s impact on jobs & economy.
05:31 What happens when AI surpasses human intellect.
08:36 AlphaGo vs world champion Lee Sedol.
11:10 Can we really “turn off” AI?
12:12 Narrow AI vs Artificial General Intelligence (AGI).
16:39 AGI (Artificial General Intelligence).
18:01 From AGI to Superintelligence.
20:18 Ethical concerns & defining intelligence.
22:36 Neuralink and human-AI integration.
25:54 Experts warn of AGI by 2027.
Support: / apertureyt.
Shop: https://bit.ly/ApertureMerch.
Subscribe: https://bit.ly/SubscribeToAperture.
Discord: / discord.
Questions or concerns? https://underknown.com/contact/
Interested in sponsoring Aperture? [email protected].
#aperture #ai #artificialintelligence #pointofnoreturn #technology #tech #future
Go to https://hensonshaving.com/isaacarthur and enter “Isaac Arthur” at checkout to get 100 free blades with your purchase.
What happens after intelligence explodes beyond human comprehension? We explore a world shaped by superintelligence, where humanity may ascend, adapt — or disappear.
Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE
Credits:
After the Singularity — What Would Life Be Like If a Technological Singularity Happened?
Written, Produced & Narrated by: Isaac Arthur.
Editor: Lukas Konecny.
Select imagery/video supplied by Getty Images.
Music Courtesy of Epidemic Sound http://epidemicsound.com/creator.
Chapters.
0:00 Intro.
3:36 Is the Singularity Inevitable? The Case for Limits and Roadblocks.
8:42 Scenarios After the Singularity.
9:15 Scenario One: The AI Utopia.
10:31 Scenario Two: Digital Heaven.
11:57 Scenario Three: The AI Wasteland.
13:10 Scenario Four: The Hybrid Civilization.
14:48 What Does the Singularity Mean for Us?
16:31 Humanity’s Response: Resistance, Adaptation, or Surrender.
20:22 Precision.
21:45 The Limits of Superintelligence: Why Even Godlike Minds Might Struggle.
25:48 Humanity’s Role in a Post-Singularity Future.
29:06 The Fermi Paradox and the Silent Singularity.
31:10 Reflections in Pop Culture and History.
32:27 Writing the Future.
Questions to inspire discussion.
🇦🇺 Q: How was Tesla’s FSD supervised launch received in Australia? A: Tesla’s FSD supervised launch in Australia received fair coverage from mainstream media, including a 4.5-minute segment on national news, without Tesla paying for advertising.
🚘 Q: What are the key features of Tesla’s FSD supervised system? A: Tesla’s FSD supervised system uses cameras and advanced software to autonomously accelerate, brake, and steer, but requires the driver to be responsible and ready to take control at any time.
FSD Safety Concerns.
⚠️ Q: What safety issues have been reported with Tesla’s FSD supervised system? A: Tesla’s FSD supervised system has been involved in multiple accidents overseas, but in most cases, the driver was distracted and tried to blame the car, highlighting the need for drivers to take full responsibility.
🇺🇸 Q: What legal challenges has Tesla faced with FSD in the US and Canada? A: Tesla’s FSD supervised system faces lawsuits in the US and Canada over multiple crashes, with Tesla stating that in most cases the driver was distracted and not using the system properly.
However, if you’re rich and you don’t like the idea of a limit on computing, you can turn to futurism, longtermism, or “AI optimism,” depending on your favorite flavor. People in these camps believe in developing AI as fast as possible so we can (they claim) keep guardrails in place that will prevent AI from going rogue or becoming evil. (Today, people can’t seem to—or don’t want to—control whether or not their chatbots become racist, are “sensual” with children, or induce psychosis in the general population, but sure.)
The goal of these AI boosters is known as artificial general intelligence, or AGI. They theorize, or even hope for, an AI so powerful that it thinks like… well… a human mind whose ability is enhanced by a billion computers. If someone ever does develop an AGI that surpasses human intelligence, that moment is known as the AI singularity. (There are other, unrelated singularities in physics.) AI optimists want to accelerate the singularity and usher in this “godlike” AGI.
One of the key facts of computer logic is that, if you can slow the processes down enough and look at them in enough detail, you can track and predict every single thing that a program will do. Algorithms (and not the opaque AI kind) guide everything within a computer. Over the decades, experts have specified the exact ways information can be sent, one bit—one minuscule electrical zap—at a time through a central processing unit (CPU).
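The claim above, that a conventional program's behavior can be traced and predicted step by step, can be illustrated with a toy interpreter. This is only a sketch: the three-instruction machine and its trace format are invented for illustration, not taken from any real CPU.

```python
def run(program, x=0):
    """Execute a list of (op, arg) pairs on a single register x,
    logging the full machine state after every step."""
    trace = []
    for pc, (op, arg) in enumerate(program):
        if op == "SET":
            x = arg
        elif op == "ADD":
            x += arg
        elif op == "MUL":
            x *= arg
        else:
            raise ValueError(f"unknown instruction: {op}")
        trace.append((pc, op, arg, x))  # state is fully known at each step
    return x, trace

program = [("SET", 2), ("ADD", 3), ("MUL", 4)]
result, trace = run(program)
# Determinism: running the same program on the same input always
# produces the identical result and the identical step-by-step trace.
assert run(program) == (result, trace)
```

Every intermediate state appears in `trace`, so nothing the program does is hidden; this is exactly the property that opaque, learned models lack.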
Part 1 of the Singularity Series was “Putting Brakes on the Singularity.” That essay looked at how economic and other non-technical factors will slow down the practical effects of AI, and argued that we should question the supposedly immediate move from AGI to SAI (superintelligent AI).
In part 3, I will consider past singularities, different paces for singularities, and the difference between intelligence and speed accelerations.
In part 4, I will follow up by offering alternative models of AI-driven progress.
Use code sabine at https://incogni.com/sabine to get an exclusive 60% off an annual Incogni plan.
According to Einstein’s theories, the universe started with a Big Bang singularity and is slowly expanding until it disperses into nothingness. But physicists have also come up with theories claiming that the Big Bang was non-singular and can repeat, restarting the cycle over again. These are called “cyclic models,” and they’ve re-emerged into the spotlight now that there’s mounting evidence that dark energy is weakening over time. However, a physicist from UC Berkeley recently published a paper that he claims “categorically rules out” cyclic models. Let’s take a look.
🤓 Check out my new quiz app ➜ http://quizwithit.com/
📚 Buy my book ➜ https://amzn.to/3HSAWJW
💌 Support me on Donorbox ➜ https://donorbox.org/swtg.
📝 Transcripts and written news on Substack ➜ https://sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ https://www.patreon.com/Sabine.
📩 Free weekly science newsletter ➜ https://sabinehossenfelder.com/newsletter/
👂 Audio only podcast ➜ https://open.spotify.com/show/0MkNfXlKnMPEUMEeKQYmYC
🔗 Join this channel to get access to perks ➜
https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join.
#science #sciencenews #physics #singularity