{"id":189713,"date":"2024-05-19T11:24:58","date_gmt":"2024-05-19T16:24:58","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/05\/superintelligence-paths-dangers-strategies"},"modified":"2024-05-19T11:24:58","modified_gmt":"2024-05-19T16:24:58","slug":"superintelligence-paths-dangers-strategies","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/05\/superintelligence-paths-dangers-strategies","title":{"rendered":"Superintelligence: Paths, Dangers, Strategies"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/superintelligence-paths-dangers-strategies.jpg\"><\/a><\/p>\n<p>Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public intertest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for \u2018AI\u2019 from late 2022, which reached a record high in February 2024.<\/p>\n<p>You would therefore be forgiven for thinking that AI is suddenly and only recently a \u2018big thing.\u2019 Yet, the current hype was preceded by a decades-long history of AI research, a field of academic study which is widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.<sup>1<\/sup> Since its beginning, a meandering trajectory of technical successes and \u2018AI winters\u2019 subsequently unfolded, which eventually led to the large language models (LLMs) that have nudged AI into today\u2019s public conscience.<\/p>\n<p>Alongside those who aim to develop transformational AI as quickly as possible \u2013 the so-called \u2018Effective Accelerationism\u2019 movement, or \u2018e\/acc\u2019 \u2013 exist a smaller and often ridiculed group of scientists and philosophers who call attention to the inherent profound dangers of advanced AI \u2013 the \u2018decels\u2019 and \u2018doomers.\u2019<sup>2<\/sup> One of the most 
prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,<sup>3<\/sup> anthropic reasoning,<sup>4<\/sup> the simulation argument,<sup>5<\/sup> and existential risk.<sup>6<\/sup> I first read his 2014 book <em>Superintelligence: Paths, Dangers, Strategies<\/em><sup>7<\/sup> five years ago; it convinced me that the risks a highly capable AI system (a \u2018superintelligence\u2019) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale to those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training set bias,<sup>8<\/sup> uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),<sup>9<\/sup> and are of a truly existential nature. In light of recent advancements in AI, I revisited the book to reconsider its arguments in the context of today\u2019s digital technology landscape.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for \u2018AI\u2019 from late 2022, which reached a record high in February 2024. 
You would therefore be [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,30,12,6],"tags":[],"class_list":["post-189713","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-ethics","category-existential-risks","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/189713","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=189713"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/189713\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=189713"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=189713"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=189713"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}