
Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’: potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn – the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’ – the phrase the British philosopher Dylan Evans has coined for people’s demonstrated capacity to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before it, would draw a similar conclusion.

By Stephen F. DeAngelis, Enterra Solutions — Wired


“Algorithm” is a word that one hears used much more frequently than in the past. One of the reasons is that scientists have learned that computers can learn on their own if given a few simple instructions. That’s really all that algorithms are: mathematical instructions. Wikipedia states that an algorithm “is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.” Whether you are aware of it or not, algorithms are becoming a ubiquitous part of our lives. Some pundits see danger in this trend. For example, Leo Hickman (@LeoHickman) writes, “The NSA revelations highlight the role sophisticated algorithms play in sifting through masses of data. But more surprising is their widespread use in our everyday lives. So should we be more wary of their power?” [“How algorithms rule the world,” The Guardian, 1 July 2013] It’s a bit hyperbolic to declare that algorithms rule the world; but I agree that their use is becoming more widespread. That’s because computers are playing increasingly important roles in so many aspects of our lives. I like the HowStuffWorks explanation:

“To make a computer do anything, you have to write a computer program. To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then ‘executes’ the program, following each step mechanically, to accomplish the end goal. When you are telling the computer what to do, you also get to choose how it’s going to do it. That’s where computer algorithms come in. The algorithm is the basic technique used to get the job done.”
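The step-by-step idea can be made concrete with a toy example of my own (not from the article): a simple linear scan to find the largest number in a list. The computer follows each step mechanically, which is all an algorithm amounts to.

```python
def find_largest(numbers):
    """Return the largest value in a non-empty list, one step at a time."""
    largest = numbers[0]       # Step 1: assume the first value is the largest
    for n in numbers[1:]:      # Step 2: examine each remaining value in turn
        if n > largest:        # Step 3: keep whichever value is bigger
            largest = n
    return largest             # Step 4: report the result

print(find_largest([3, 41, 7, 18]))  # prints 41
```

The same job could be done by other algorithms (sorting first, say); choosing *how* the job gets done is exactly the choice the HowStuffWorks passage describes.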

Read more

By John Frank Weaver — Slate


Not that long ago, Google announced something unheard of in the auto industry – at least in the part of the auto industry that makes moving cars: a car without a steering wheel or gas and brake pedals. To Google, this was the next step in self-driving cars. Why bother with a steering wheel if the driver isn’t driving? Some observers questioned whether this feature in the proposed test vehicle violated the autonomous vehicle statute in California (where the vehicle would be tested), which required that the driver take control of the self-driving vehicle in case the autonomous system malfunctions. Google claimed that it installed an on/off button, which satisfied the California law.

California recently weighed in: Google, you’re wrong. The state has released regulations requiring that a test driver be able to take “active physical control” of the car, meaning with a steering wheel and brakes.

To this I say—good for you, California.

Read more

By — Slate


In the recent discussion over the risks of developing superintelligent machines—that is, machines with general intelligence greater than that of humans—two narratives have emerged. One side argues that if a machine ever achieved advanced intelligence, it would automatically know and care about human values and wouldn’t pose a threat to us. The opposing side argues that artificial intelligence would “want” to wipe humans out, either out of revenge or an intrinsic desire for survival.

As it turns out, both of these views are wrong. We have little reason to believe a superintelligence will necessarily share human values, and no reason to believe it would place intrinsic value on its own survival either. These arguments make the mistake of anthropomorphising artificial intelligence, projecting human emotions onto an entity that is fundamentally alien.

Read more

Adam Gell — HITC


Destiny has been in players’ hands for the past few days now, and I’ve been doing my part to fight The Darkness this week too. The game uses our own galaxy as the setting to tell its story, complete with futuristic space travel and talk of a Golden Age brought by the arrival of The Traveller, so Activision and Bungie have worked with the National Space Centre to see what the UK could look like in the future when space travel is real.

Similarly in the images below, Destiny lets you travel to the futuristic imaginings of the Russian Cosmodrome, which is the real-world site of Earth’s first and largest space facility, and where Sputnik 1 (the first artificial Earth satellite) was launched in 1957. In the game the site looks quite different from today’s real-world counterpart, as humanity has gone through a Golden Age of space travel, and reached the brink of extinction with the arrival of The Darkness years later.

Read more

Quenching the outbreak needs an effort dwarfing all past peace-time and war-time efforts, launched immediately – a prospect that appears vanishingly unlikely to be met in time.

The outbreak has long surpassed the threshold of instability and can now be spatially contained only by forming uninfected areas (A) as large as possible and infected areas (B) as small as possible. Water, food, gowns and disinfectants must be provided by international teams immediately, in exponentially growing quantities and for whole countries. A supporting industry must be set in motion in a planet-wide action.


The bleak prospect that the quenching of the disease is close to a point of no return stems from chaos theory, which is essentially a theory of exponential growth (of differences in the initial conditions). “Exponential growth” means that a level that has been reached – in terms of the number of infected persons in the present case – will double after a constant number of time units for a long time to come. Here, we have an empirical doubling every 3 weeks for 5 months in a row by now, with no abating in sight. See the precise graphs at the end of: http://en.wikipedia.org/wiki/Ebola_virus_epidemic_in_West_Africa
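A constant doubling time is easy to translate into numbers. The following sketch is mine, not the author’s, and the starting level of 1,000 cases is a hypothetical figure chosen purely for illustration; only the 3-week doubling period comes from the text.

```python
DOUBLING_TIME_WEEKS = 3   # empirical doubling period cited in the text
INITIAL_CASES = 1000      # hypothetical starting level, for illustration only

def cases_after(weeks, start=INITIAL_CASES, doubling=DOUBLING_TIME_WEEKS):
    """Level reached after `weeks`, doubling every `doubling` weeks."""
    return start * 2 ** (weeks / doubling)

# After 12 weeks (four doubling periods) the level is 16x the start:
print(cases_after(12))  # prints 16000.0
```

The point of the passage is precisely this multiplicative structure: each fixed interval multiplies the level, so delay is far more costly than it intuitively appears.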

Written By: — Singularity Hub


Brain implants here we come.

DARPA just announced the ElectRX program, a $78.9 million attempt to develop minuscule electronic devices that interface directly with the nervous system in the hopes of curing a bunch of chronic conditions, ranging from the psychological (depression, PTSD) to the physical (Crohn’s, arthritis). Of course, the big goal here is to usher in a revolution in neuromodulation—that is, the science of modulating the nervous system to fix an underlying problem.

Read more

Caleb Chen — Cryptocoinsnews


The Celebgate incident is still unfolding, and we are already seeing mainstream media report on the connection between the “nude celebrity photos” and Bitcoin. The original leaker of the pictures returned to 4chan recently to respond to the uproar he had caused. An excerpt from his post highlights Bitcoin’s involvement in Celebgate:

People wanted **** for free. Sure, I got $120 with my bitcoin address, but when you consider how much time was put into acquiring this stuff (i’m not the hacker, just a collector), and the money (i paid a lot via Bitcoin as well to get certain sets when this stuff was being privately traded Friday/Saturday) I really didn’t get close to what I was hoping.

The leaker used bitcoins to purchase the nude photos of 100+ celebrities from the hacker. The leaker went on to explain that as he was posting he started noticing tell-tale signs that his computer actions were being watched and tracked. He further claimed that his “ISP kept cutting out” and that there were “Weird emails coming in..” The FBI is currently investigating both the leaker and the hacker, who might have used an iCloud exploit.

Read more