A debate between Foresight Senior Fellow Mark S. Miller and MIRI founder Eliezer Yudkowsky on artificial superintelligence (ASI). Although they share similar long-term goals, they reach opposite conclusions about the best strategy.
What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. While Eliezer advocates an international treaty banning anyone from building ASI, Mark argues that such a pause would make an ASI singleton more likely, which he sees as the greatest danger.
They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure operating systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs. singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.
Moderated by Christine Peterson, the discussion searches for the least risky strategy for reaching a good long-term outcome under superintelligent AI risk. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.
The conversation spans AI collaboration, secure operating systems, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats.
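For readers curious about the "robust cooperation" result behind the decision-theory segments, here is a minimal sketch (not from the video) of program equilibrium: agents that can read each other's source code before choosing to cooperate or defect in a one-shot Prisoner's Dilemma. The paper linked below achieves this with provability logic (its "FairBot"); the simpler CliqueBot shown here is a standard toy from that literature, and all names in the sketch are illustrative.

```python
# Toy sketch of program equilibrium: one-shot Prisoner's Dilemma
# where each bot sees the other's source code before moving.
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is running exactly this program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's code."""
    return "D"

def play(bot_a, bot_b):
    """Exchange source code, then let each bot choose its move."""
    src_a = inspect.getsource(bot_a)
    src_b = inspect.getsource(bot_b)
    return bot_a(src_b), bot_b(src_a)

if __name__ == "__main__":
    print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
    print(play(clique_bot, defect_bot))  # ('D', 'D'): no exploitation
```

Saved to a file and run, the first match yields mutual cooperation and the second mutual defection: source-code transparency lets CliqueBot cooperate with copies of itself without being exploitable by DefectBot. The linked paper generalizes this beyond exact source matching.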
Links.
Visit our website: https://foresight.org.
Our secure AI grants: https://foresight.org/grants/secure-a…
The book Gaming the Future: https://foresight.org/gaming-the-futu…
Our next AI safety-focused workshop (relevant until Oct 10, 2025): https://foresight.org/events/2025-neu…
Vision Weekend US 2025, our biggest upcoming event: https://foresight.org/events/vision-w…
Apply to our secure AI online seminar group: https://foresight.org/seminar/secure–…
MIRI’s website: https://intelligence.org
The paper Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic: https://arxiv.org/abs/1401.5577
Agoric Open Systems papers: https://papers.agoric.com/papers/
Timecodes.
00:00:00 Video highlights
00:02:11 Introduction by Christine Peterson
00:06:11 Understanding the risks: engineered plagues and cybersecurity
00:09:29 Debating AI lethalities and civilization collapse
00:15:07 Exploring the nature of intelligence
00:30:06 Human organizations and social intelligence
00:45:50 Coordination and institutional innovation
00:51:12 Evolutionary history and cooperation
00:52:32 Human organizations vs. individual intelligence
00:53:20 Kasparov vs. the world experiment
00:55:52 Emergent intelligence in human society
01:06:52 Superintelligence and human civilization
01:14:32 Theoretical and practical risks of superintelligence
01:24:59 Coordination and market prices
01:39:44 Progress update and singleton discussion
01:42:08 Humanity’s place in the future
01:44:18 Delaying superintelligence
01:47:58 Alignment problems and superintelligence
01:54:21 Predictability and decision theory
01:57:47 Coalition dynamics and human exclusion
02:14:28 Negotiating leverage and coalition membership
02:29:27 Fair division and operating systems
02:32:30 Introduction to computer security challenges
02:33:52 seL4 operating system and security proofs
02:36:18 Strategies for plausible security
02:41:08 Partitioning concerns and rights
02:43:55 Decision theory and AI coordination
02:47:31 Challenges in AI alignment and governance
02:55:43 Superintelligence and operating systems
03:08:13 Verifying AI actions and intentions
03:23:45 Incremental progress in AI safety
03:26:16 Cryptographic separation and blockchain
03:29:24 The risks of malicious AI in drug discovery
03:36:02 The debate on AI alignment and control
03:55:37 The nuclear analogy and AI proliferation
04:07:04 Concluding thoughts on AI safety
About Foresight Institute.
Foresight Institute is a non-profit research organization that supports the beneficial development of high-impact technologies. Since our founding in 1986 on a vision of guiding powerful technologies, we have evolved into a many-armed organization focused on several fields of science and technology that are too ambitious for legacy institutions to support. From molecular nanotechnology to brain-computer interfaces, space exploration, cryptocommerce, and AI, Foresight gathers leading minds to advance research and accelerate progress toward flourishing futures. We are entirely funded by your donations; if you enjoy what we do, please consider donating through our donation page: https://foresight.org/donate/