
Accelerando

Accelerando is a 2005 science fiction novel consisting of a series of interconnected short stories written by British author Charles Stross.

In addition to the normal hardback and paperback editions, it was released as a free e-book under the CC BY-NC-ND license. Accelerando won the Locus Award in 2006, and was nominated for several other awards in 2005 and 2006, including the Hugo, Campbell, Clarke, and British Science Fiction Association Awards.

The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity. It was originally written as a series of novelettes and novellas, all published in Asimov’s Science Fiction magazine in the period 2001 to 2004. According to Stross, the initial inspiration for the stories was his experience working as a programmer for a high-growth company during the dot-com boom of the 1990s.

The first three stories follow the character of Manfred Macx, an agalmic “venture altruist”.

Technological singularity

It is with sadness — and deep appreciation of my friend and colleague — that I must report the passing of Vernor Vinge.


The technological singularity, or simply the singularity,[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]
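To make the “runaway reaction” concrete, here is a minimal toy sketch in Python. It is not a model from the literature: the gain and speedup parameters are illustrative assumptions, chosen so that each generation is more capable than the last and produces its successor faster.

```python
# Toy sketch of the "intelligence explosion" intuition described above.
# Not a published model: `gain` and `speedup` are illustrative
# assumptions. Each generation multiplies capability by `gain` and
# completes its next redesign `speedup` times faster than the last.

def intelligence_explosion(generations=8, capability=1.0,
                           interval=10.0, gain=2.0, speedup=2.0):
    """Return (year, capability) pairs, one per generation."""
    t, history = 0.0, [(0.0, capability)]
    for _ in range(generations):
        t += interval        # time until the next generation appears
        capability *= gain   # each generation is more capable...
        interval /= speedup  # ...and builds its successor sooner
        history.append((t, capability))
    return history

for year, level in intelligence_explosion():
    print(f"year {year:7.3f}: capability {level:8.1f}")
```

Because the intervals shrink geometrically, the total time for arbitrarily many generations converges to interval * speedup / (speedup - 1), here 20 years, while capability grows without bound. Unbounded growth packed into a finite window is the “essential singularity” intuition the next paragraph attributes to von Neumann.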

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] In 1958, Stanislaw Ulam reported an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge, first in a 1983 article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to the wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, which predicts the singularity by 2045.[7]

The Political Singularity and a Worthy Successor, with Daniel Faggella

Calum and David recently attended BGI24, the Beneficial General Intelligence summit and unconference, in Panama City. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

One thing that featured in his talk was a 3-by-3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we’ll be discussing in this episode, one dimension of this matrix is the kind of end-goal future that people desire as intelligent systems become ever more powerful. The other dimension is the kind of methods people want to use to bring about that desired future.
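As a rough illustration of that structure, here is a minimal Python sketch that enumerates the matrix as the cross product of its two dimensions. The row and column labels are hypothetical placeholders, since the talk’s actual category names aren’t reproduced in this post.

```python
# Sketch of the ITPM's structure as the cross product of its two
# dimensions. The labels below are hypothetical placeholders, not
# Faggella's actual categories.
from itertools import product

END_GOALS = (  # dimension 1: the desired end-goal future (placeholders)
    "human-directed future",
    "human-machine merger",
    "post-human successor",
)
METHODS = (    # dimension 2: how to bring it about (placeholders)
    "pause or restrict development",
    "regulated, gradual development",
    "unrestricted acceleration",
)

# Nine distinct positions, not two: each cell pairs an end goal
# with a method for reaching it.
for goal, method in product(END_GOALS, METHODS):
    print(f"{goal:22} via {method}")
```

Even with placeholder labels, the point of the grid is visible: the familiar two-camp framing picks out only two of its nine cells.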

So if anyone thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two labels often thrown around these days, they are missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem increasingly beyond our understanding and control, the more options we can consider, the better.

AGI in 3 to 8 years

When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence… the kind of intelligence that can teach itself and grow itself into an intellect vastly greater than any individual human’s?

According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.

We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror the complexity of human intelligence.

We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet’s evolution, and envision the role of humans in a world where AI surpasses human intelligence.

00:00 Introduction to the Future of AI
01:28 Predicting the Timeline of Artificial General Intelligence
02:06 The Role of LLMs in the Path to AGI
05:23 The Impact of AI on Jobs and Economy
06:43 The Future of AI Development
10:35 The Role of Humans in a World with AGI
35:10 The Diverse Future of Human and Post-Human Minds
36:51 The Challenges of Transitioning to a World with AGI
39:34 Conclusion: The Future of AGI
