{"id":23733,"date":"2016-03-18T16:37:45","date_gmt":"2016-03-18T23:37:45","guid":{"rendered":"http:\/\/lifeboat.com\/blog\/?p=23733"},"modified":"2016-03-18T16:37:45","modified_gmt":"2016-03-18T23:37:45","slug":"whos-afraid-of-existential-risk-or-why-its-time-to-bring-the-cold-war-out-of-the-cold","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2016\/03\/whos-afraid-of-existential-risk-or-why-its-time-to-bring-the-cold-war-out-of-the-cold","title":{"rendered":"Who\u2019s Afraid of Existential Risk? Or, Why It\u2019s Time to Bring the Cold War out of the Cold"},"content":{"rendered":"<p>At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan \u2013 in the guise of an ongoing US presidential bid \u2014 to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism\u2019s image in the \u2018serious\u2019 mainstream media, which is currently dominated by Nick Bostrom\u2019s warnings of a superintelligence-based apocalypse. The smart machines will not only eat our jobs but eat us as well, if we don\u2019t introduce enough security measures.<\/p>\n<p>Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst case scenarios if we act now. Thus, we see a growing trade in the management of \u2018existential risks\u2019, which focusses on how we might predict, if not prevent, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to put a halt to artificial intelligence research altogether. 
As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.<\/p>\n<p>The idea of \u2018existential risk\u2019 capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It\u2019s a bit like Pascal\u2019s wager, whereby the potentially negative consequences of your not believing in God \u2013 to wit, eternal damnation \u2014 rationally compel you to believe in God, despite your instinctive doubts about the deity\u2019s existence.<\/p>\n<p>However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we\u2019re not so powerful as to create a \u2018weapon of mass destruction\u2019, however defined, that could annihilate all of humanity; on the other, we\u2019re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether \u2018existential risk\u2019 is really the high concept that it is cracked up to be. I don\u2019t believe it is.<\/p>\n<p>In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed \u2018thinking the unthinkable\u2019. What he had in mind was the aftermath of a thermonuclear war in which, say, 25\u201350% of the world\u2019s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from \u2018the worst case scenarios\u2019 proposed nowadays, even under conditions of severe global warming. 
Kahn\u2019s point was that we now need to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.<\/p>\n<p>And indeed, we did largely follow Kahn\u2019s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of \u2018ahead of the curve\u2019 thinking is characteristic of military-based innovation generally. Warfare focuses minds on what\u2019s dispensable and what\u2019s necessary to preserve \u2013 and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that \u2018necessity is the mother of invention\u2019. Once again, and most importantly, we win even \u2013 and especially \u2013 if Doomsday never happens.<\/p>\n<p>An interesting economic precedent for this general line of thought, which I have associated with transhumanism\u2019s \u2018proactionary principle\u2019, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called \u2018the relative advantage of backwardness\u2019. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The \u2018learning\u2019 amounts to innovating more efficient means of achieving and often surpassing the predecessors\u2019 level of development. 
The post-catastrophic humanity would be in a similar position to benefit from this sense of \u2018backwardness\u2019 on a global scale vis-\u00e0-vis the pre-catastrophic humanity.<\/p>\n<p>Doomsday scenarios invariably invite discussions of our species\u2019 \u2018resilience\u2019 and \u2018adaptability\u2019, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between \u2018reliable\u2019 and \u2018maintainable\u2019 artefacts. Reliable artefacts tend to be \u2018overdesigned\u2019, which is to say, they can handle all the anticipated forms of stress, most of which never happen. Maintainable artefacts tend to be \u2018underdesigned\u2019, which means that they make it easy for the user to make replacements when disaster strikes \u2013 disasters being assumed to be unpredictable.<\/p>\n<p>In a sense, \u2018resilience\u2019 and \u2018adaptability\u2019 could be identified with either position, but the Cold War\u2019s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios \u2013 including the likely negative ones \u2014 that we couldn\u2019t cope were a very unlikely, very negative scenario to come to pass. Recalling US Defence Secretary Donald Rumsfeld\u2019s game-theoretic formulation, we need to address the \u2018unknown unknowns\u2019, not merely the \u2018known unknowns\u2019. 
Good candidates for the relevant \u2018unknown unknowns\u2019 are the interaction effects of relatively independent research and societal trends, which, while benign in themselves, may produce malign consequences \u2014 call them \u2018emergent\u2019, if you wish.<\/p>\n<p>It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their \u2018negativity\u2019: What would be lost in the various scenarios that is vital to sustaining the \u2018human condition\u2019, however defined? The answers would provide the basis for future innovation policy \u2013 namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don\u2019t come to pass, nevertheless they will make our normal lives better \u2013 as has been the long-term effect of the Cold War.<\/p>\n<p> <\/p>\n<p><strong>References<\/strong><\/p>\n<p>Bleed, P. (1986). \u2018The optimal design of hunting weapons: Maintainability or reliability?\u2019 <em>American Antiquity <\/em>51: 737\u201347.<\/p>\n<p>Bostrom, N. (2014). <em>Superintelligence.<\/em> Oxford: Oxford University Press.<\/p>\n<p>Fuller, S. and Lipinska, V. (2014). <em>The Proactionary Imperative<\/em>. London: Palgrave (pp. 35\u201336).<\/p>\n<p>Gerschenkron, A. (1962). <em>Economic Backwardness in Historical Perspective<\/em>. Cambridge MA: Harvard University Press.<\/p>\n<p>Kahn, H. (1960). <em>On Thermonuclear War.<\/em> Princeton: Princeton University Press.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At least in public relations terms, transhumanism is a house divided against itself. 
On the one hand, there are the ingenious efforts of Zoltan Istvan \u2013 in the guise of an ongoing US presidential bid \u2014 to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of [\u2026]<\/p>\n","protected":false},"author":299,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14,1528,39,12,1759,1522,9,225,31,6,1760,1966,1501],"tags":[1702,2419,2418,1701,2415,2416,2417],"class_list":["post-23733","post","type-post","status-publish","format-standard","hentry","category-defense","category-disruptive-technology","category-economics","category-existential-risks","category-governance","category-innovation","category-military","category-philosophy","category-policy","category-robotics-ai","category-strategy","category-theory","category-transhumanism-2","tag-cold-war","tag-doomsday","tag-economic-backwardness","tag-herman-kahn","tag-nick-bostrom","tag-precautionary-principle","tag-proactionary-principle"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/23733","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/299"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=23733"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/23733\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=23733"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=23733"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?po
st=23733"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}