Apr 7, 2023

Eliezer Yudkowsky — Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Posted in category: robotics/AI

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast-forward to 2:35:00 through 3:43:54, where we debate the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9

Follow me on Twitter: https://twitter.com/dwarkesh_sp

Timestamps:
(0:00:00) — TIME article
(0:09:06) — Are humans aligned?
(0:37:35) — Large language models
(1:07:15) — Can AIs help with alignment?
(1:30:17) — Society’s response to AI
(1:44:42) — Predictions (or lack thereof)
(1:56:55) — Being Eliezer
(2:13:06) — Orthogonality
(2:35:00) — Could alignment be easier than we think?
(3:02:15) — What will AIs want?
(3:43:54) — Writing fiction & whether rationality helps you win
