
Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt but clear illustration of this shift is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by decades of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 From that beginning, a meandering trajectory of technical successes and ‘AI winters’ unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound inherent dangers of advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale to those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),9 and are of a truly existential nature. In light of the recent advancements in AI, I recently revisited the book to reconsider its arguments in the context of today’s digital technology landscape.

To be clear, humans are not the pinnacle of evolution. We are confronted with difficult choices and cannot sustain our current trajectory. No rational person can expect the human population to continue the exponential growth of the last 200 years, along with an ever-increasing rate of natural resource extraction. This is socio-economically unsustainable. While space colonization might offer temporary relief, it won’t resolve the underlying issues. If we are to preserve our blue planet and ensure the survival and flourishing of our human-machine civilization, humans must merge with synthetic intelligence, transcend our biological limitations, and eventually evolve into superintelligent beings independent of material substrates—advanced informational beings, or ‘infomorphs.’ In time, we will shed the human condition and upload humanity into a meticulously engineered inner cosmos of our own creation.

Much like the origin of the Universe, the nature of consciousness may appear to be a philosophical enigma that remains perpetually elusive within the current scientific paradigm. However, I emphasize the term “current.” These issues are not beyond the reach of alternative investigative methods, ones that the next scientific paradigm will inevitably incorporate with the arrival of Artificial Superintelligence.

The era of traditional, human-centric theoretical modeling and problem-solving—developing hypotheses, uncovering principles, and validating them through deduction, logic, and repeatable experimentation—may be nearing its end. A confluence of factors—Big Data, algorithms, and computational resources—is steering us toward a new type of discovery, one that transcends the limitations of human-like logic and decision-making: discovery driven solely by AI superintelligence, nestled in quantum neo-empiricism and a fluidity of solutions. These novel scientific methodologies may encompass, but are not limited to, computing supercomplex abstractions, creating simulated realities, and manipulating matter-energy and the space-time continuum itself.

Earlier this month, Reddit published a Public Content Policy stating: “Unfortunately, we see more and more commercial entities using unauthorized access or misusing authorized access to collect public data in bulk, including Reddit public content. Worse, these entities perceive they have no limitation on their usage of that data, and they do so with no regard for user rights or privacy, ignoring reasonable legal, safety, and user removal requests.”

In its blog post on Thursday, Reddit said that deals like OpenAI’s are part of an open Internet. It added that part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online.

Reddit has been vocal about its interest in pursuing data licensing deals as a core part of its business. Its AI partnerships have sparked debate over the use of user-generated content to fuel AI models without users being compensated, some of whom may never have considered that their social media posts would be used this way. OpenAI and Stack Overflow faced similar pushback earlier this month when they integrated Stack Overflow content with ChatGPT; some of Stack Overflow’s user community responded by sabotaging their own posts.

OpenAI just announced new changes to ChatGPT’s data analysis feature. Users can now create customizable, interactive tables and charts with the AI chatbot’s help, which they can later download for presentations and documents. They can also upload files to ChatGPT from Google Drive and Microsoft OneDrive.

However, not all ChatGPT users will gain access to the new data analysis features. The upgrade will roll out for ChatGPT Plus, Team, and Enterprise users over the coming weeks. The new data analysis capabilities will be available in GPT-4o, OpenAI’s new flagship model recently released as part of the company’s Spring Update.
