
Unlike me, Kurzweil has been embracing AI for decades. In his 2005 book, The Singularity Is Near: When Humans Transcend Biology, Kurzweil made the bold prediction that AI would expand human intelligence exponentially, changing life as we know it. He wasn’t wrong. Now in his 70s, Kurzweil is upping the ante in his newest book, The Singularity Is Nearer: When We Merge with AI, revisiting his prediction of the melding of human and machine, with 20 additional years of data showing the exponential rate of technological advancement. It’s a fascinating look at the future and the hope for a better world.

Kurzweil has long been recognized as a great thinker. The son of a musician father and visual artist mother, he grew up in New York City and at a young age became enamored with computers, writing his first computer program at the age of 15.

While at MIT, earning a degree in computer science and literature, Kurzweil started a company that created a computer program to match high school students with colleges. In the ensuing years, he went on to found (and sell) multiple technology-fueled companies and inventions, including the first reading machine for the blind and the first music synthesizer capable of re-creating the grand piano and other orchestral instruments (inspired by meeting Stevie Wonder). He has authored 11 books.

As well as the standard hardback and paperback editions, Charles Stross's novel Accelerando was released as a free e-book under the CC BY-NC-ND license. Accelerando won the Locus Award in 2006 and was nominated for several other awards in 2005 and 2006, including the Hugo, Campbell, Clarke, and British Science Fiction Association Awards.

The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity. It was originally written as a series of novelettes and novellas, all published in Asimov’s Science Fiction magazine in the period 2001 to 2004. According to Stross, the initial inspiration for the stories was his experience working as a programmer for a high-growth company during the dot-com boom of the 1990s.

The first three stories follow the character of Manfred Macx, an agalmic "venture altruist."

It is with sadness — and deep appreciation of my friend and colleague — that I must report the passing of Vernor Vinge.

The technological singularity, or simply the singularity,[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] In 1958, Stanislaw Ulam reported an earlier discussion with von Neumann that “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

Calum and David recently attended the BGI24 event, the Beneficial General Intelligence summit and unconference, in Panama City. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

Something that featured in his talk was a 3-by-3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we’ll be discussing in this episode, one dimension of this matrix is the kind of end-goal future that people desire as intelligent systems become ever more powerful. The other dimension is the kind of methods people want to use to bring about that desired future.

So anyone who thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two labels that are often thrown around these days, is actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.