Review of “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” (2025), by Eliezer Yudkowsky and Nate Soares, with very critical commentary.
Yudkowsky and Soares present a stark warning about the dangers of developing artificial superintelligence (ASI): artificial intelligence (AI) that vastly exceeds human intelligence. They argue that building such a system with current techniques would almost certainly lead to human extinction, making ASI an existential threat to humanity, and that the race to build smarter-than-human AI is therefore not an arms race but a “suicide race,” driven by competition and an optimism that ignores fundamental risks.