Are you worried that artificial intelligence and humans will go to war? AI experts are. In 2023, a group of elite thinkers signed onto the Center for AI Safety’s statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
In a survey published in 2024, between 38% and 51% of top-tier AI researchers assigned at least a 10% probability to advanced AI leading to outcomes as bad as human extinction.
The worry is not about the Large Language Models (LLMs) of today, which are essentially huge autocomplete machines, but about Artificial General Intelligence (AGI)—still-hypothetical long-term planning agents that could substitute for human labor across a wide range of society's economic systems.