
At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce sufficient security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent the worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focuses on how we might predict, if not prevent, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has led some observers reasonably to wonder whether it might be better simply to halt artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of your not believing in God – to wit, eternal damnation – rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
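To make the structure of this reasoning explicit, here is a minimal expected-value sketch; the numbers are purely illustrative assumptions, not figures drawn from Bostrom or anyone else:

\[
\mathbb{E}[\text{loss}] = p \times L, \qquad \text{e.g. } p = 10^{-4}\ \text{per year},\ L = 10^{10}\ \text{lives} \;\Rightarrow\; \mathbb{E}[\text{loss}] = 10^{6}\ \text{lives per year.}
\]

On this arithmetic, even a vanishingly improbable catastrophe can dominate the calculation – which is precisely the move questioned below.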

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from the ‘worst-case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need to come up now with the new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what is dispensable and what is necessary to preserve – and indeed, on how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth-century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving, and often surpassing, the predecessors’ level of development. Post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis its pre-catastrophic predecessor.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, even though most of those stresses never materialize. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disaster strikes, on the assumption that disasters are unpredictable.

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones – that we could not cope if a very unlikely, very negative scenario came to pass. Recalling US Defense Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which, while benign in themselves, may produce malign consequences – call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: what would be lost in the various scenarios that would be vital to sustain the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary, in the sense that the Doomsday scenarios never come to pass, they will nevertheless make our normal lives better – as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative: A Foundation for Transhumanism. London: Palgrave Macmillan, pp. 35–36.

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge, MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.

I am not in fact talking about the delightful Deus Ex game, but rather about the actual revolution in society and technology we are witnessing today. Pretty much every day, whenever I look at any news source – be it a cable news network, a Facebook feed, or what have you – I see fear mongering. “Implantable chips will let the government track you!” or “Hackers will soon be able to steal your thoughts!” (Seriously, I’ve seen both of these, and much more, and much crazier.) …But I’m here to tell you two things. First, calm the hell down: nearly every doomsday scenario painted by fear-mongering assholes is either impossible or so utterly unlikely as to be effectively impossible. And second, you should psych the hell up, because what’s actually happening is extremely exciting and worth getting excited about – for good reasons, not bad ones.


An article for the “Doomsday” fans.


An asteroid roughly 100 feet long and moving at more than 34,000 mph is scheduled to make a close pass by Earth in two weeks.

But don’t worry, scientists say. It has no chance of hitting us, and may instead help draw public attention to growing efforts at tracking the thousands of asteroids zooming around space that could one day wipe out a city — or worse — if they ever hit our planet.

This one, known as 2016 TX68, is larger than an 18-wheel tractor-trailer and is expected to fly as close as 19,245 miles to Earth at 4:06 pm Pacific time on Monday, March 7. For comparison, that’s less than one-tenth the moon’s distance from Earth, which is 238,900 miles.

An article on transhumanism in the Huff Post:


Future Transhumanist City — Image by Sam Howzit

Transhumanism – the international movement that aims to use science and technology to improve the human being – has been growing quickly in the last few years. Everywhere one looks, there seem to be more and more people embracing radical technologies that are already dramatically changing lives. Ideas that seemed like science fiction just a decade ago are now here.

Later this year, I’ll be speaking at RAAD, a one-of-a-kind life extension and transhumanism festival in San Diego where thought leaders like Ray Kurzweil, Dr. Aubrey de Grey, and Dr. Joseph Mercola will be sharing their ideas on our future. With so much radical tech growth and scientific innovation occurring in the last few years, the question has been asked: what are the best strategies for the transhumanism movement going forward? Of course, as the 2016 US Presidential candidate of the Transhumanist Party, I have my own ideas – and naturally they’re quite politically oriented.

Very well thought out, quite intelligent points.


A post-apocalyptic Earth, emptied of humans, seems like the stuff of science fiction TV and movies. But in this short, surprising talk, Lord Martin Rees asks us to think about our real existential risks — natural and human-made threats that could wipe out humanity. As a concerned member of the human race, he asks: What’s the worst thing that could possibly happen?



Yuste v. Hawkins — battle of the brains.


Renowned neuroscientist Rafael Yuste on Wednesday dismissed the latest doomsday predictions of Stephen Hawking, saying the British astrophysicist “doesn’t know what he’s talking about.”

In a recent lecture in London, Hawking indicated that advances in science and technology will lead to “new ways things can go wrong,” especially in the field of artificial intelligence.

Yuste, a Columbia University neuroscience professor, was less pessimistic. “We don’t have enough knowledge to be able to say such things,” he told Radio Cooperativa in Santiago, Chile.


Yeah, he’s turned into quite the man of panic of late.


Stephen Hawking is at it again, saying it’s a “near certainty” that a self-inflicted disaster will befall humanity within the next thousand years or so. It’s not the first time the world’s most famous physicist has raised the alarm about the apocalypse, and he’s starting to become a real downer. Here are some of the other times Hawking has said the end is nigh – and why he needs to start changing his message.

Speaking to the Radio Times recently ahead of his BBC Reith Lecture, Hawking said that ongoing developments in science and technology are poised to create “new ways things can go wrong.” The scientist pointed to nuclear war, global warming, and genetically engineered viruses as some of the most serious culprits.

“Although the chance of a disaster on planet Earth in a given year may be quite low, it adds up over time, becoming a near certainty in the next thousand or ten thousand years,” he was quoted as saying. “By that time we should have spread out into space, and to other stars, so it would not mean the end of the human race. However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period.”
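To see how a small annual risk “adds up over time”, here is a back-of-the-envelope calculation; the annual probability is an illustrative assumption, not a figure Hawking gives:

\[
P(\text{no disaster in } n \text{ years}) = (1-p)^n, \qquad p = 0.001:\quad 0.999^{1000} \approx e^{-1} \approx 37\%, \qquad 0.999^{10000} \approx e^{-10} \approx 0.005\%.
\]

On those numbers, disaster is more likely than not within a thousand years and a near certainty within ten thousand – exactly the compounding Hawking describes.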

Wow!!! Chewing-gum wearable technology, cyborg chips, ingestible sensors to let doctors know if you’re taking your meds, etc. 2016 is going to be interesting.


The phrase “Brave New World” has become one of the most overused clichés in medical technology in recent years. Google the title of Aldous Huxley’s 1932 dystopian – and anticipatory – novel together with the word “medicine” and 2,940,000 results appear.

But could there be better shorthand to describe some of the recent developments in medical, health and bio-tech? Consider these possibilities coming to fruition, or close to it, in 2016:

1. Back from Extinction