In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems—“superintelligences” more capable than humans—might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may be only “a few thousand days” away. A year ago, Altman’s OpenAI cofounder Ilya Sutskever set up a team within the company to focus on “safe superintelligence,” but he and his team have since raised a billion dollars to pursue that goal at a startup of their own.

What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.
