In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems—“superintelligences” more capable than humans—might one day take over the world and destroy humanity.
A decade later, OpenAI boss Sam Altman says superintelligence may be only “a few thousand days” away. A year ago, Altman’s OpenAI cofounder Ilya Sutskever set up a team within the company to focus on “safe superintelligence”, but he and his team have since raised a billion dollars to create a startup of their own to pursue this goal.
What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.